<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amar Abaz</title>
    <description>The latest articles on Forem by Amar Abaz (@abeamar).</description>
    <link>https://forem.com/abeamar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3552302%2F1f801dab-16ec-4516-a122-76a66d7dc23d.png</url>
      <title>Forem: Amar Abaz</title>
      <link>https://forem.com/abeamar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/abeamar"/>
    <language>en</language>
    <item>
      <title>What happens when SQL Server PAGE gets corrupted ?</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Sat, 14 Feb 2026 19:45:14 +0000</pubDate>
      <link>https://forem.com/abeamar/what-happens-when-sql-server-page-gets-corrupted--33pe</link>
      <guid>https://forem.com/abeamar/what-happens-when-sql-server-page-gets-corrupted--33pe</guid>
      <description>&lt;p&gt;In this article, I will explore how SQL Server works with PAGES, the smallest unit of data storage. We will explore how SQL Server organizes data within these pages and how to interact with them. Additionally, I am gonna simulate page corruption, run a validity check, and demonstrate how to fix it.&lt;/p&gt;

&lt;p&gt;Data files are divided into 8KB pages, which consist of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Page Header (96 bytes): contains metadata about the page.&lt;/li&gt;
&lt;li&gt;Data Rows (8,060 bytes): hold the actual data of your table.&lt;/li&gt;
&lt;li&gt;Slot Array (36 bytes): manages the location of data rows within the page.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these components make up 8,192 bytes, or 8KB, which is the standard size for a SQL Server page.&lt;br&gt;
An extent in SQL Server consists of 8 contiguous 8KB pages, totaling 64KB.&lt;br&gt;
There are two types of extents: UNIFORM EXTENTS, where all 8 pages are used by a single object, and MIXED EXTENTS, where the pages are shared by multiple objects. This helps optimize space usage and performance, and the behaviour is manageable in your database options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT [name], [is_mixed_page_allocation_on] FROM sys.databases;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These pages are stored in physical files, categorized as&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Primary or Master Data File (.mdf)&lt;br&gt;
The main file where system and user data are stored by default. It also contains the system metadata of your database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secondary Data File (.ndf)&lt;br&gt;
Additional data files used to expand storage; optional, but useful for better architecture and sizing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Transaction Log File (.ldf)&lt;br&gt;
Stores the transaction log, which tracks changes made to the database to ensure consistency and recovery.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These files work together to manage your database storage and ensure data integrity.&lt;/p&gt;
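&lt;p&gt;To see these files for the current database, you can query sys.database_files; a quick sketch (note that size is reported in 8KB pages):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT [name], [physical_name], [type_desc],
       [size] / 128 AS size_mb   -- size is stored in 8KB pages
FROM sys.database_files;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;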

&lt;h2&gt;
  
  
  Let's get to it..
&lt;/h2&gt;

&lt;p&gt;Let's now check the allocation and structural integrity of the pages in the AdventureWorks2022 database (skipping the index checks). I will focus on the Person.Person table, where we will do our &lt;strong&gt;corruption test&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKTABLE ('AdventureWorks2022.Person.Person');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr26keslw2zz6l4dvcca3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr26keslw2zz6l4dvcca3.jpg" alt=" " width="732" height="241"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKALLOC ('AdventureWorks2022', NOINDEX);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x3t0k85brw97vk92uga.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x3t0k85brw97vk92uga.jpg" alt=" " width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Identifying Pages in Your Database
&lt;/h2&gt;

&lt;p&gt;You can identify which pages a table occupies by using DBCC IND. &lt;br&gt;
This command provides a breakdown of where the table's data is stored across pages and files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC IND('AdventureWorks2022', 'Person.Person', -1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm232kyd7tv95abejyfqj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm232kyd7tv95abejyfqj.jpg" alt=" " width="800" height="342"&gt;&lt;/a&gt;&lt;br&gt;
Once you've identified a page, you can dig deeper into its content using the DBCC PAGE command.&lt;br&gt;
Let's display the internal structure of the page, including the Page Header (fields such as m_pageId and m_objId) and the data rows stored in it.&lt;/p&gt;

&lt;p&gt;For example, to inspect page 1314 in file 1.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC PAGE('AdventureWorks2022', 1, 1314, 1) WITH TABLERESULTS;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q0cg2c5u1yqimadg9fw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q0cg2c5u1yqimadg9fw.jpg" alt=" " width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Simulating Page Corruption
&lt;/h2&gt;

&lt;p&gt;To demonstrate how corruption can occur, let's intentionally modify a page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER DATABASE [AdventureWorks2022] SET SINGLE_USER WITH NO_WAIT;
-- WARNING: DBCC WRITEPAGE is undocumented and destructive; use it only on a test copy
DBCC WRITEPAGE('AdventureWorks2022', 1, 1314, 0, 1, 0x11, 1);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verifying the Corruption
&lt;/h2&gt;

&lt;p&gt;Now run DBCC CHECKDB or DBCC CHECKTABLE again to see how SQL Server detects the corruption. Don't use CHECKALLOC here, because it only checks for allocation problems, not logical integrity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKDB('AdventureWorks2022');
DBCC CHECKTABLE ('AdventureWorks2022.Person.Person');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnjzi6bmhke6ih5n4x10.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnjzi6bmhke6ih5n4x10.jpg" alt=" " width="800" height="154"&gt;&lt;/a&gt;&lt;br&gt;
These commands will perform an integrity check on the database and report any issues they find, including corruption in the affected pages.&lt;br&gt;
If we select the whole table we have an error.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Msg 824, Level 24, State 2, Line 32&lt;br&gt;
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x246e3cb2; actual: 0x24663cb2). It occurred during a read of page (1:1314) in database ID 5 at offset 0x00000000a44000 in file 'C:\MSSQL\DATA\AdventureWorks2022.mdf'.  Additional messages in the SQL Server error log or operating system error log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;More about DBCC commands &lt;strong&gt;in my earlier post&lt;/strong&gt; here:&lt;br&gt;
&lt;a href="https://dev.to/abeamar/mssql-dbcc-how-good-are-they-really--4543"&gt;MSSQL DBCC How good are they really ?&lt;/a&gt; 👌&lt;/p&gt;
&lt;h2&gt;
  
  
  Now let's talk about how to fix this and what to keep in mind..
&lt;/h2&gt;

&lt;p&gt;🤞&lt;strong&gt;First Option:&lt;/strong&gt; Try REPAIR_REBUILD&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKTABLE ('Person.Person', REPAIR_REBUILD); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;REPAIR_REBUILD can fix certain types of index corruption or inconsistent structures without causing data loss. It is safer than REPAIR_ALLOW_DATA_LOSS because it doesn't drop data; it only rebuilds corrupted indexes and structures.&lt;br&gt;
However, REPAIR_REBUILD won't help in our case, because the damage is in the data pages themselves.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Msg 8928, Level 16, State 1, Line 39&lt;br&gt;
Object ID 2101582525, index ID 1, partition ID 72057594049724416, alloc unit ID 72057594057523200 (type In-row data): Page (1:1314) could not be processed.  See other errors for details.&lt;br&gt;
        The repair level on the DBCC statement caused this repair to be bypassed.&lt;br&gt;
Msg 8939, Level 16, State 98, Line 39&lt;br&gt;
Table error: Object ID 2101582525, index ID 1, partition ID 72057594049724416, alloc unit ID 72057594057523200 (type In-row data), page (1:1314). Test (IS_OFF (BUF_IOERR, pBUF-&amp;gt;bstat)) failed. Values are 2057 and -4.&lt;br&gt;
        Repairing this error requires other errors to be corrected first.&lt;br&gt;
Msg 8976, Level 16, State 1, Line 39&lt;br&gt;
Table error: Object ID 2101582525, index ID 1, partition ID 72057594049724416, alloc unit ID 72057594057523200 (type In-row data). Page (1:1314) was not seen in the scan although its parent (1:1568) and previous (1:1313) refer to it. Check any previous errors.&lt;br&gt;
        Repairing this error requires other errors to be corrected first.&lt;br&gt;
Msg 8978, Level 16, State 1, Line 39&lt;br&gt;
Table error: Object ID 2101582525, index ID 1, partition ID 72057594049724416, alloc unit ID 72057594057523200 (type In-row data). Page (1:1315) is missing a reference from previous page (1:1314). Possible chain linkage problem.&lt;br&gt;
        Repairing this error requires other errors to be corrected first.&lt;br&gt;
There are 19967 rows in 3807 pages for object "Person.Person".&lt;br&gt;
CHECKTABLE found 0 allocation errors and 4 consistency errors in table 'Person.Person' (object ID 2101582525).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;😥&lt;strong&gt;Second Option:&lt;/strong&gt; REPAIR_ALLOW_DATA_LOSS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKTABLE ('Person.Person', REPAIR_ALLOW_DATA_LOSS); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This should be your last option&lt;/strong&gt; and only to be used if you are okay with the possibility of data loss. It attempts to repair the corruption in the table, but any corrupted data that cannot be repaired will be removed. Before proceeding, it is crucial to take a &lt;strong&gt;BACKUP DATABASE&lt;/strong&gt; first, to ensure you have a recovery point in case the repair process has unintended consequences.&lt;/p&gt;
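&lt;p&gt;As a sketch of that safety backup (the path here is only an illustration): COPY_ONLY keeps your regular backup chain intact, and note that WITH CHECKSUM would fail at this point because of the bad page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- illustrative path; COPY_ONLY does not disturb the regular backup chain
BACKUP DATABASE [AdventureWorks2022]
TO DISK = N'Z:\BCKP\AdventureWorks2022_before_repair.bak'
WITH COPY_ONLY, INIT;   -- omit CHECKSUM here: it would fail on the corrupted page
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;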

&lt;p&gt;🤔&lt;strong&gt;Third Option:&lt;/strong&gt; PAGE RESTORE&lt;br&gt;
Page restore is a technique used to restore specific corrupted pages in a database from an available backup. The goal is to replace the corrupted pages with uncorrupted pages from a previous backup, minimizing the impact on the rest of the database and avoiding the need for a full restore. However, to successfully restore a page, it's critical to identify when the page was last consistent and healthy.&lt;/p&gt;

&lt;p&gt;When you have regular log backups as part of your Recovery Point Objective (RPO) strategy, you can typically recover your database to the point of the last log backup. However, even when using CHECKSUM and VERIFY during backups, keep in mind that these options validate page checksums as they are read and the integrity of the backup file; they are not a substitute for DBCC CHECKDB and will not catch all corruption.&lt;/p&gt;
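&lt;p&gt;A sketch of that backup-verification routine (the paths are illustrative): CHECKSUM validates page checksums as the backup reads them, and RESTORE VERIFYONLY confirms the backup file itself is readable, but neither replaces a full consistency check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- illustrative path
BACKUP LOG [AdventureWorks2022]
TO DISK = N'Z:\BCKP\AdventureWorks2022_log.trn' WITH CHECKSUM;

RESTORE VERIFYONLY
FROM DISK = N'Z:\BCKP\AdventureWorks2022_log.trn' WITH CHECKSUM;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;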

&lt;p&gt;Now let's check and confirm the corrupt page id using the command below.&lt;br&gt;
This select shows the pages that SQL Server has flagged as suspect, which gives you a history of past corruption events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM msdb.dbo.suspect_pages;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5stazygsaks4vggd985.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5stazygsaks4vggd985.jpg" alt=" " width="537" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Execute the commands below, and do not forget to apply the log restores after the page restore; otherwise you will break the continuity of your database's transaction LOG chain. After that, take another (tail) log backup, restore it, and check the whole database again with &lt;strong&gt;DBCC CHECKDB&lt;/strong&gt;. This is the way to do it for large databases, because we restore only the page and the logs, not the whole database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RESTORE DATABASE [AdventureWorks2022] PAGE='1:1314'
FROM DISK = N'Z:\BCKP\w19$SQ\AdventureWorks2022\FULL\w19$SQ_AdventureWorks2022_FULL_20260214_122842.bak' WITH FILE = 1,
NORECOVERY, NOUNLOAD, STATS = 5;   -- online page restore is supported in Enterprise edition only

RESTORE LOG [AdventureWorks2022] FROM DISK = N'Z:\BCKP\w19$SQ\AdventureWorks2022\LOG\w19$SQ_AdventureWorks2022_LOG_20260214_122900.trn' WITH FILE = 1,
NORECOVERY, NOUNLOAD, STATS = 5;

BACKUP LOG [AdventureWorks2022] TO DISK = N'Z:\BCKP\w19$SQ\AdventureWorks2022\LOG\LOG_AdventureWorks2022' WITH INIT;
RESTORE LOG [AdventureWorks2022] FROM DISK = N'Z:\BCKP\w19$SQ\AdventureWorks2022\LOG\LOG_AdventureWorks2022' WITH RECOVERY;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxes2rhpuwvmgnajd8cvv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxes2rhpuwvmgnajd8cvv.jpg" alt=" " width="734" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🫡&lt;strong&gt;Fourth Option:&lt;/strong&gt; Remote restore and recreate&lt;br&gt;
Restore to a new location and recreate the table (for small databases).&lt;br&gt;
Keep it simple if the corruption is isolated to a single table or a small part of the database, and the database size allows it. An alternative approach is to restore the database to a new location and then recreate the corrupted table on the primary database. While doing that, you could also create a trigger to deny any interaction, in case SQL Server still allows inserts, updates, or deletes to be performed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TRIGGER tr_tableDENY ON Person.Person
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    RAISERROR ('This table is temporarily locked for investigation.', 16, 1);
    ROLLBACK TRANSACTION;
END
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final Advice
&lt;/h2&gt;

&lt;p&gt;Make sure to run DBCC CHECKDB regularly. This is crucial for maintaining the integrity of your database. However, don't rely solely on scheduled DBCC checks; set up real-time alerts for corruption detection.&lt;/p&gt;

&lt;p&gt;Here's my advice: create a SQL Agent job that regularly &lt;strong&gt;monitors the msdb.dbo.suspect_pages&lt;/strong&gt; table for new entries and sends you an alert as soon as any corruption is detected. You can clean this table up after an alert has been processed.&lt;/p&gt;
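&lt;p&gt;A minimal sketch of what such a job step could run; the one-hour window and the alerting mechanism itself are up to you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- new suspect pages in the last hour
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
WHERE last_update_date &gt; DATEADD(HOUR, -1, GETDATE());

-- cleanup: remove rows already restored (4), repaired (5) or deallocated (7)
DELETE FROM msdb.dbo.suspect_pages
WHERE event_type IN (4, 5, 7);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;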

&lt;p&gt;Moreover, since Error 823, 824, and 825 are associated with corruption in database pages, you should set up an &lt;strong&gt;alert for these specific errors&lt;/strong&gt; as well. That way, you can be notified immediately when they happen.&lt;br&gt;
To do this, go to SQL Server Agent, right-click Alerts, and select New Alert. Create an alert for each error number, and assign an Operator for the mail response.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhck4bjgtimvc1ur0g156.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhck4bjgtimvc1ur0g156.jpg" alt=" " width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Error 823: This error indicates that SQL Server encountered an I/O error while reading or writing to a disk.&lt;/li&gt;
&lt;li&gt;Error 824: This error occurs when SQL Server detects a logical consistency error in the data read from a page.&lt;/li&gt;
&lt;li&gt;Error 825: This read-retry warning means an I/O operation succeeded only after one or more failed attempts, an early sign of failing storage.&lt;/li&gt;
&lt;/ol&gt;
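&lt;p&gt;A sketch of scripting one of these alerts instead of clicking through the GUI; the alert and operator names here are placeholders, and the operator must already exist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXEC msdb.dbo.sp_add_alert
     @name = N'Error 824 - Logical consistency I/O error',
     @message_id = 824,
     @severity = 0,
     @include_event_description_in = 1;

EXEC msdb.dbo.sp_add_notification
     @alert_name = N'Error 824 - Logical consistency I/O error',
     @operator_name = N'DBA Operator',   -- placeholder operator
     @notification_method = 1;           -- 1 = e-mail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;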

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/pages-and-extents-architecture-guide?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/relational-databases/pages-and-extents-architecture-guide?view=sql-server-ver17&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-files-and-filegroups?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-files-and-filegroups?view=sql-server-ver17&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/abeamar/mssql-dbcc-how-good-are-they-really--4543"&gt;https://dev.to/abeamar/mssql-dbcc-how-good-are-they-really--4543&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/errors-events/mssqlserver-823-database-engine-error?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/relational-databases/errors-events/mssqlserver-823-database-engine-error?view=sql-server-ver17&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>database</category>
      <category>sql</category>
      <category>tutorial</category>
      <category>learning</category>
    </item>
    <item>
      <title>AbeMon MS SQL Tool v1.0.0 Monitor and Debug</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Sun, 01 Feb 2026 17:28:37 +0000</pubDate>
      <link>https://forem.com/abeamar/abemon-ms-sql-tool-v100-monitor-and-debug-60m</link>
      <guid>https://forem.com/abeamar/abemon-ms-sql-tool-v100-monitor-and-debug-60m</guid>
      <description>&lt;p&gt;AbeMon is a powerful, lightweight toolkit designed for database administration, offering all the essential checks needed for troubleshooting and maintenance in your daily tasks. &lt;br&gt;
Enjoy a 30 day free trial to explore its full potential and capabilities.&lt;br&gt;
If you find it useful, feel free to get in touch.&lt;br&gt;
This App is designed to deliver smooth performance while using minimal system resources, consuming just 15-20MB of RAM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
git clone "https://github.com/abeamar/AbeMon.git"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Choose the installation method that works best for you!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using the .exe or .msi Installer:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
https://github.com/abeamar/AbeMon/main/AbeSetup.msi

https://github.com/abeamar/AbeMon/main/setup.exe

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Or Manual Installation (copying the abeSetup folder):
After copying, you can run the application directly from that folder.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What this tool provides:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disk usage and growth&lt;/li&gt;
&lt;li&gt;Memory and CPU utilization&lt;/li&gt;
&lt;li&gt;Database health, security and size checks&lt;/li&gt;
&lt;li&gt;Backup and restore history checks&lt;/li&gt;
&lt;li&gt;Monitor active backup and restore operations with predictions&lt;/li&gt;
&lt;li&gt;Table sizes and index fragmentation checks&lt;/li&gt;
&lt;li&gt;SQL Server error logs&lt;/li&gt;
&lt;li&gt;Windows Events error logs&lt;/li&gt;
&lt;li&gt;SQL Agent job failures checks&lt;/li&gt;
&lt;li&gt;Schema object definitions&lt;/li&gt;
&lt;li&gt;Raw SQL object extraction&lt;/li&gt;
&lt;li&gt;Active sessions and blocking queries&lt;/li&gt;
&lt;li&gt;Deadlock, block tracking&lt;/li&gt;
&lt;li&gt;Custom session level monitoring for your SQL Server instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a fully functional and very useful tool to have in your DBA arsenal.&lt;/p&gt;

&lt;p&gt;😎&lt;br&gt;
Stay tuned for regular updates as I dive deeper into how this tool can be used in various DBA scenarios. Whether you're troubleshooting issues or optimizing your SQL Server environment, this tool will be your best friend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example in Action:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CASE:&lt;/strong&gt; Tracking Restore &amp;amp; Backup Operations with Real Time Insights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Database restore is dragging on longer than expected, and you have no idea who initiated it or which backup was used. Or maybe you've got conflicting reports about when a restore actually happened, and you're stuck chasing down clues through endless logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkgl1j3uwuwn3nvkvii.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkgl1j3uwuwn3nvkvii.jpg" alt=" " width="800" height="599"&gt;&lt;/a&gt;&lt;br&gt;
With just a few clicks, you can access detailed restore history that shows exactly who performed the restore, which backup was used, and when it happened. &lt;br&gt;
Or Instantly see who's performing restores, which backups are in use, and predict how long they'll take. &lt;br&gt;
Whether it's active restores or ongoing backup jobs, this tool provides the crucial data you need.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CASE:&lt;/strong&gt; Diagnosing long running transactions when resources are stuck&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You show up to work, and your services are down. They are not processing anything. Maybe it’s just waiting for resources, or worse, it's hung up because your tempdb or transaction log is full. The usual suspects, but it's hard to figure out exactly where the bottleneck is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm4vzijpf79yrzdwngz6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm4vzijpf79yrzdwngz6.jpg" alt=" " width="800" height="598"&gt;&lt;/a&gt;&lt;br&gt;
That's where this tool comes in. One quick check will show you all the active sessions in your databases, sorted by execution time, so you can pinpoint the longest running transactions that might be blocking resources. But you don't have to stop there, you can combine this with other checks for tempdb usage, overall session activity, and transaction log health to get a full picture of what's happening and address it before it affects the rest of your system.&lt;br&gt;
Sometimes the answer is hiding in plain sight, SQL Server logs and Windows Event logs often tell the full story once you know where to look.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj3rjz35un41jfrfcsv5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj3rjz35un41jfrfcsv5.jpg" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CASE:&lt;/strong&gt; Handling Max Identity value reached in your production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just a quick example of how this tool can save you from unexpected surprises. Imagine you're in production and a service app suddenly can't insert data into a key table because its identity column (whether INT, BIGINT, or SMALLINT) has maxed out. It happens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d2bn34l15t3dje9r5n8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d2bn34l15t3dje9r5n8.jpg" alt=" " width="800" height="601"&gt;&lt;/a&gt;&lt;br&gt;
This is just one of the many daily checks the tool runs that you would keep an eye on. For every table with an incremental identity column in your databases, you get a heads-up before it becomes a problem. If you know this early, you can take action, like altering the column to a bigger type or creating a new table, so things don't grind to a halt.&lt;/p&gt;
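&lt;p&gt;If you'd rather script that kind of check yourself, here's a rough sketch using sys.identity_columns (covering only the common integer types; last_value is NULL until the first insert):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT OBJECT_SCHEMA_NAME(ic.[object_id]) + '.' + OBJECT_NAME(ic.[object_id]) AS [table],
       ic.[name] AS [column],
       t.[name] AS [type],
       CAST(ic.last_value AS bigint) AS last_value,
       CAST(ic.last_value AS bigint) * 100.0 /
       CASE t.[name] WHEN 'tinyint'  THEN 255
                     WHEN 'smallint' THEN 32767
                     WHEN 'int'      THEN 2147483647
                     ELSE 9223372036854775807 END AS pct_used
FROM sys.identity_columns ic
JOIN sys.types t ON ic.system_type_id = t.user_type_id
ORDER BY pct_used DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;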

&lt;p&gt;Always keep an eye on your table size.&lt;br&gt;
In SQL Server, a well maintained index and table size go a long way in ensuring fast queries and low overhead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol59tfl42hkhipq8hqec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol59tfl42hkhipq8hqec.png" alt=" " width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Take it for a spin..
&lt;/h2&gt;

&lt;p&gt;Download AbeMon and explore its full range of features at your own pace. From real time monitoring to detailed backup and restore insights, this tool is designed to make your workflow smoother. I am sure you'll appreciate the difference once you see it in action.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>sqlserver</category>
      <category>database</category>
      <category>tooling</category>
    </item>
    <item>
      <title>MSSQL DBCC How good are they really ?</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Wed, 21 Jan 2026 20:48:58 +0000</pubDate>
      <link>https://forem.com/abeamar/mssql-dbcc-how-good-are-they-really--4543</link>
      <guid>https://forem.com/abeamar/mssql-dbcc-how-good-are-they-really--4543</guid>
      <description>&lt;p&gt;What does DBCC mean ? Well, it stands for Database Console Commands, of course.&lt;br&gt;
When you're working with SQL Server, these commands are like the secret weapons for any DBA. They're super powerful tools for troubleshooting, monitoring, and ensuring your data stays in check and safe.&lt;/p&gt;

&lt;p&gt;But how effective are these commands really? Are they as good as they seem?&lt;br&gt;
Let's dive into some of the most useful DBCC commands, how they work, and when you might want to use them.&lt;/p&gt;

&lt;p&gt;To explore all commands (documented and undocumented), we first need to enable trace flag 2588 for detailed output, then use DBCC HELP to view them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC TRACEON(2588) WITH NO_INFOMSGS;
DBCC HELP ('?') WITH NO_INFOMSGS;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you're not sure how to use one of those commands, for example DBCC CHECKTABLE, you can simply run it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC HELP ('CHECKTABLE') WITH NO_INFOMSGS;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There's an extensive list of DBCC commands, each serving a specific purpose. &lt;br&gt;
This will give you a complete breakdown of how to use the command and its various options.&lt;/p&gt;

&lt;p&gt;Below are some of the most common ones, with real practical scenarios and steps for using them..&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC USEROPTIONS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A very useful command to check and troubleshoot session settings for a user, including the language, date format, read settings, and other session-level options.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC USEROPTIONS;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl95dniz29i4kohocqcgb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl95dniz29i4kohocqcgb.jpg" alt=" " width="291" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC CHECKDB&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the go-to command for verifying the physical and logical integrity of an entire database. It's typically executed when corruption is suspected, or as part of regular health checks. Create a SQL Agent job to execute it daily or weekly, depending on your database size.&lt;/p&gt;

&lt;p&gt;When corruption is detected, especially if multiple objects are affected, this command often becomes your first line of defense.&lt;/p&gt;

&lt;p&gt;Before attempting any repair, isolate the database to prevent active connections from interfering with the process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER DATABASE [AbeDB] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures only privileged users can access the database while SQL Server rolls back all active transactions.&lt;/p&gt;

&lt;p&gt;You can now rerun DBCC CHECKDB with a repair parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKDB ('AbeDB', REPAIR_FAST); --minimal repair
DBCC CHECKDB ('AbeDB', REPAIR_REBUILD); --rebuild bad index
DBCC CHECKDB ('AbeDB', REPAIR_ALLOW_DATA_LOSS); --can result dataloss
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The REPAIR_ALLOW_DATA_LOSS option should never be your first choice in production. If REPAIR_REBUILD cannot fix the corruption, restoring from a clean backup is almost always the safer option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Avoid data loss unless absolutely necessary. Ensure consistent and verified log backups.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Also, do not use the legacy DBCC DBREPAIR command. It is deprecated and has been superseded since SQL Server 2005; use CHECKDB instead.&lt;/p&gt;
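&lt;p&gt;Once the repair finishes and CHECKDB comes back clean, remember to return the database to normal access,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER DATABASE [AbeDB] SET MULTI_USER;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;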
&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC CHECKALLOC&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This command is especially useful when you suspect storage-level or allocation-related corruption. It verifies that all pages are correctly allocated; for full data integrity checks, use CHECKDB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKALLOC ('AbeDB'); --check
DBCC CHECKALLOC ('AbeDB', NOINDEX) WITH NO_INFOMSGS; --no index check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb9sx30tf8jo3oy9aw2w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb9sx30tf8jo3oy9aw2w.jpg" alt=" " width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC CHECKTABLE &amp;amp; CHECKINDEX&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Another good option if you don't want to scan the whole database and want to focus on a specific table or index, or when you already know where the problem is, is to use these commands WITH EXTENDED_LOGICAL_CHECKS.&lt;br&gt;
This performs deeper logical consistency checks (for example on indexed views) in addition to the standard index consistency checks; for foreign key, null, and unique constraints, use DBCC CHECKCONSTRAINTS.&lt;/p&gt;

&lt;p&gt;Go by logic "Start broad, then narrow down".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC CHECKTABLE ('AbeTable');
DBCC CHECKTABLE ('AbeTable', REPAIR_REBUILD); --repair
DBCC CHECKTABLE ('AbeTable', REPAIR_ALLOW_DATA_LOSS); --data loss

ALTER INDEX IndexName ON AbeTable REBUILD; --rebuild if errors exist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC OPENTRAN&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Used to display information about the oldest active transaction in a specified database. With this helpful command you can find long-running or blocking transactions that need to be terminated to free up resources. Use it when troubleshooting why the transaction log is not shrinking or is growing uncontrollably, as an open transaction can prevent log truncation even when log backups run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC OPENTRAN('AbeDB');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
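&lt;p&gt;If OPENTRAN reports a long-stuck transaction, you can terminate the offending session with KILL. The session ID below is just a placeholder; use the SPID that OPENTRAN reports,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC OPENTRAN('AbeDB') WITH TABLERESULTS; --output as rows, easier to capture the SPID
KILL 57; --example SPID, rolls back the open transaction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;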



&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC FREEPROCCACHE &amp;amp; DBCC FREESYSTEMCACHE&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;FREEPROCCACHE is used if you suspect that the cache contains bad or suboptimal execution plans. You can also clear a specific cached plan by passing its plan handle.&lt;br&gt;
When a query is executed, SQL Server reuses an existing execution plan from the cache if one exists, which can improve performance because it avoids recompiling the query each time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC FREEPROCCACHE (0x05004320D5B7S12701B91BBE5B5082B6) WITH NO_INFOMSGS;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
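&lt;p&gt;To find the plan handle of a specific query to pass to FREEPROCCACHE, you can query the plan cache DMVs. The LIKE filter below is just an illustrative way to locate your query,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%AbeTable%'; --filter on a fragment of the query text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;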



&lt;p&gt;FREESYSTEMCACHE allows you to clear various caches based on the context you choose. The PLANCACHE option is less impactful than FREEPROCCACHE because it only clears the execution plans, without affecting other system caches such as the buffer pool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC FREESYSTEMCACHE ('ALL');
DBCC DROPCLEANBUFFERS; --pages of data read from disk
DBCC FREESYSTEMCACHE ('PLANCACHE'); --recompile queries with the new index
DBCC FREESYSTEMCACHE ('COLUMNSTORE');--columnstore indexes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC SHOWCONTIG&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This command displays fragmentation information. I use it a lot to check the status of indexes using T-SQL. The second parameter is the ID of the index on the table you're focusing on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC SHOWCONTIG('Person.Person',2);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu4ujk6oowqxmbmy7qy1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu4ujk6oowqxmbmy7qy1.jpg" alt=" " width="724" height="349"&gt;&lt;/a&gt;&lt;/p&gt;
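&lt;p&gt;Note that DBCC SHOWCONTIG is deprecated in newer versions; the same fragmentation data is available from the sys.dm_db_index_physical_stats function, for example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Person.Person'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id; --fragmentation per index
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;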

&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC SQLPERF&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Quickly checks the transaction log space usage in SQL Server. Great for monitoring and intervening with log backups or shrinking when needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC SQLPERF('logspace');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
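&lt;p&gt;If logspace shows a log that keeps growing, the usual intervention is to take a log backup (which allows truncation) and, only when really necessary, shrink the file. The file names and target size below are examples,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BACKUP LOG [AbeDB] TO DISK = N'D:\Backup\AbeDB_log.trn'; --enables log truncation
DBCC SHRINKFILE (N'AbeDB_log', 1024); --shrink the log file to 1 GB, use sparingly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;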



&lt;h2&gt;
  
  
  &lt;strong&gt;DBCC TRACEON &amp;amp; TRACEOFF &amp;amp; TRACESTATUS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For deeper analysis, these commands allow you to enable, disable, and inspect SQL Server trace flags.&lt;/p&gt;

&lt;p&gt;Trace flags are commonly used for debugging, performance troubleshooting, and internal behavior analysis. They can be enabled at the session level or globally, and you can even configure them at server startup if permanent behavior changes are required.&lt;br&gt;
&lt;strong&gt;They are primarily used to either change the behavior of the SQL Server engine or to log additional details&lt;/strong&gt; in the SQL Server logs.&lt;br&gt;
Be cautious not to overwhelm your server with excessive logging. That's why it's important to disable trace flags after you've completed your analysis.&lt;/p&gt;

&lt;p&gt;Below are a few examples of the most commonly used trace flags.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trace 1222&lt;/strong&gt;: Writes detailed deadlock information to the SQL Server error log.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DBCC TRACEON(1222); --enable flag
DBCC TRACEON(1222,-1); --enable globally
DBCC TRACEON(1204, 1222); --enable multiple flags
DBCC TRACESTATUS(1222); --check active flags
DBCC TRACEOFF(1222); --turn off 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To ensure your configuration persists after every server restart, you need to add the trace flag at the server level,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open SQL Server Configuration Manager,&lt;/li&gt;
&lt;li&gt;Under SQL Server Services, right-click your instance,&lt;/li&gt;
&lt;li&gt;Select Properties and go to the Startup Parameters tab,&lt;/li&gt;
&lt;li&gt;Add a new parameter with the trace flag: -T1222.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are some other useful Trace Flags,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1222 - log deadlocks in the error log&lt;/li&gt;
&lt;li&gt;1204 - locking info for quick deadlock analysis&lt;/li&gt;
&lt;li&gt;3226 - suppress successful backup messages in the error log&lt;/li&gt;
&lt;li&gt;9567 - compression for fast AG automatic seeding&lt;/li&gt;
&lt;li&gt;3042 - disable pre-allocation of space during backup&lt;/li&gt;
&lt;li&gt;3604 - send output to the client&lt;/li&gt;
&lt;li&gt;1117 - autogrow all files in a filegroup together&lt;/li&gt;
&lt;li&gt;460 - enhanced truncation error messages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...and many more.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, how good are they really?
&lt;/h2&gt;

&lt;p&gt;While DBCC commands can catch many types of corruption, they aren't a one-stop solution. They may not catch all possible issues, and some might require deeper troubleshooting.&lt;br&gt;
The answer lies in your knowledge of commands and the specific needs of your database. &lt;br&gt;
The best way to truly understand the power of DBCC is to experiment with different commands in your own test environment.&lt;br&gt;
With the right knowledge and caution, DBCC commands can be incredibly effective tools in your DBA arsenal.&lt;/p&gt;

&lt;p&gt;Let's have a look at that arsenal again 🤯🤯&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dbcc sqlperf
dbcc inputbuffer
dbcc checkdb 
dbcc memorystatus
dbcc freeproccache
dbcc useroptions
dbcc ind
dbcc page
dbcc writepage
dbcc freesystemcache
dbcc show_statistics
dbcc dropcleanbuffers
dbcc showcontig
dbcc opentran
dbcc addtempdb 
dbcc checkalloc 
dbcc checkcatalog 
dbcc checkindex 
dbcc checkstorage
dbcc checktable
dbcc checkverify 
dbcc complete_xact
dbcc engine
dbcc fix_text
dbcc forget_xact
dbcc indexalloc
dbcc monitor
dbcc pravailabletempdbs
dbcc reindex
dbcc serverlimits
dbcc stackused
dbcc tablealloc
dbcc zapdefraginfo 
dbcc quorum
.....
...
..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-transact-sql?view=sql-server-ver17" rel="noopener noreferrer"&gt;www.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-transact-sql?view=sql-server-ver17&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc36272.1600/doc/html/san1393050966070.html" rel="noopener noreferrer"&gt;www.infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sqlshack.com/sql-server-trace-flags-guide-from-1-to-840/" rel="noopener noreferrer"&gt;www.sqlshack.com/sql-server-trace-flags-guide-from-1-to-840/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sqlservercentral.com/blogs/automatic-seeding-compression" rel="noopener noreferrer"&gt;www.sqlservercentral.com/blogs/automatic-seeding-compression&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mssqltips.com/sqlservertip/3626/testing-sql-server-backup-performance-with-trace-flag-3042/" rel="noopener noreferrer"&gt;www.mssqltips.com/sqlservertip/3626/testing-sql-server-backup-performance-with-trace-flag-3042/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.brentozar.com/blitz/trace-flags-enabled-globally/" rel="noopener noreferrer"&gt;www.brentozar.com/blitz/trace-flags-enabled-globally/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>sql</category>
      <category>tutorial</category>
      <category>sqlserver</category>
      <category>database</category>
    </item>
    <item>
      <title>Modular Approach to Database Performance and Maintainability</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Sun, 18 Jan 2026 12:12:03 +0000</pubDate>
      <link>https://forem.com/abeamar/modular-approach-to-database-performance-and-maintainability-4ao2</link>
      <guid>https://forem.com/abeamar/modular-approach-to-database-performance-and-maintainability-4ao2</guid>
      <description>&lt;p&gt;Let's talk about a smart, modular approach to filegroups on MSSQL. &lt;br&gt;
The idea is simple but powerful, &lt;strong&gt;isolate your schemas into their own filegroups&lt;/strong&gt;. &lt;br&gt;
This not only gives you better control over performance and backups, but it also opens up options for maintenance and recovery that you wouldn't have with a single-filegroup strategy. &lt;br&gt;
Sounds good? Let's dive in.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why You Should Care About Filegroup Architecture
&lt;/h2&gt;

&lt;p&gt;Here’s the deal, your filegroup setup might seem like a minor part of database design, but it can affect how well your database performs, how fast you can back it up, and how easily you can manage it over time. &lt;br&gt;
The way you set up your filegroups can also determine how easy it is to recover from an issue or even move data around when needed.&lt;/p&gt;

&lt;p&gt;Instead of tossing everything into the PRIMARY filegroup or continually adding more filegroups on top (just to split file sizes), isolating schemas into separate filegroups gives you more control over your environment. &lt;/p&gt;

&lt;p&gt;That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Better Performance: Distribute the workload across different physical disks for faster I/O.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Faster and Smarter Backups: Want to restore just part of the database? No problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Minimal Downtime for Maintenance: You can work on one part of the database while the rest stays up and running.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Benefits You'll Love..&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Selective Restore means Less Downtime&lt;/p&gt;

&lt;p&gt;One of the coolest things about isolating schemas into their own filegroups is that you can restore them individually. &lt;br&gt;
Let's say you have a corruption issue in the "[abe_eu]" schema. Because "[abe_eu]" is in its own filegroup "[abe_eu_fg]", you can just restore that filegroup without affecting the rest of the database. &lt;br&gt;
This can save you hours (or even days) of downtime, depending on your database size.&lt;/p&gt;

&lt;p&gt;Plus, because the PRIMARY filegroup holds system objects, SQL Server can still bring the database online even if other filegroups are down. &lt;br&gt;
Your database stays mostly available while you're restoring just the affected part.&lt;/p&gt;
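&lt;p&gt;Keep in mind that filegroup-level restores require filegroup (or full) backups. A minimal sketch of backing up a single filegroup, using the example names from this article,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BACKUP DATABASE [AbeDB]
FILEGROUP = N'abe_eu_fg'
TO DISK = N'D:\Backup\AbeDB_abe_eu_fg.bak'; --backup only this filegroup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;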

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Offline Filegroups for Easy Maintenance&lt;/p&gt;

&lt;p&gt;Want to move a file around, or switch up the storage location of some .ndf files ? &lt;br&gt;
If you need to do some heavy maintenance (like moving data to a new disk), you can take just one filegroup offline while leaving the rest of the database online.&lt;/p&gt;

&lt;p&gt;For example, if you need to migrate the abe_eu_fg FileGroup but don't want to disrupt everything else, you can just take abe_eu_fg offline temporarily. &lt;br&gt;
The rest of your database will still be accessible to users, and you're free to do your thing with minimal impact on the service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Better Schema Isolation and Flexibility&lt;/p&gt;

&lt;p&gt;By isolating each schema in its own filegroup, you give yourself more flexibility for a ton of future use cases. Here's why it matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security: You can set specific access policies at the filegroup level. If you want more control over who can access what, this gives you an easy way to manage that.&lt;/li&gt;
&lt;li&gt;Data Movement: Moving a schema around becomes way easier. If you need to archive or migrate.&lt;/li&gt;
&lt;li&gt;Performance Tuning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SQL Code example:&lt;br&gt;
&lt;em&gt;Set up schema and filegroup&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE [AbeDB]
CREATE SCHEMA [abe_eu]
CREATE SCHEMA [abe_us]
GO

ALTER DATABASE [AbeDB] ADD FILEGROUP [abe_eu_fg]
ALTER DATABASE [AbeDB] ADD FILEGROUP [abe_us_fg]
GO

ALTER DATABASE [AbeDB] 
ADD FILE ( NAME = N'abe_eu_fg', FILENAME = N'D:\MSSQL\YourInstance\abe_eu_fg.ndf' , SIZE = 102400KB , FILEGROWTH = 102400KB ) 
TO FILEGROUP [abe_eu_fg]
GO
ALTER DATABASE [AbeDB] 
ADD FILE ( NAME = N'abe_us_fg', FILENAME = N'D:\MSSQL\YourInstance\abe_us_fg.ndf' , SIZE = 102400KB , FILEGROWTH = 102400KB ) 
TO FILEGROUP [abe_us_fg]
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Set up your tables when creating them&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE [abe_eu].[TableTest]
(
Column1 datatype (size),
column2 datatype (size),
.
.
columnN datatype(size)
) ON [abe_eu_fg];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Set up permissions on role&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE ROLE [abe_eu_r]
GO
ALTER ROLE [abe_eu_r] ADD MEMBER [testUser]
GO
GRANT SELECT ON SCHEMA::[abe_eu] TO [abe_eu_r]
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the most powerful aspects of this filegroup architecture is the ability to restore individual filegroups rather than the entire database. &lt;br&gt;
For example, imagine you encounter corruption in one of your schemas. Since it is isolated in its own filegroup, you can restore just that filegroup, leaving the rest of the database operational.&lt;/p&gt;

&lt;p&gt;SQL Code example:&lt;br&gt;
&lt;em&gt;Do your maintenance&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER DATABASE [AbeDB] MODIFY FILE ( NAME = abe_eu_fg,   
   FILENAME = 'F:\MSSQL\TempFolder\abe_eu_fg.ndf'); 
GO
ALTER DATABASE [AbeDB] MODIFY FILE (NAME = 'abe_eu_fg', OFFLINE);
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;After maintenance&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RESTORE DATABASE [AbeDB] FILEGROUP = 'abe_eu_fg' WITH RECOVERY
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By taking the time to design your filegroup strategy early on, you're not only optimizing for current requirements but future-proofing your database architecture. &lt;br&gt;
Whether you're dealing with large datasets, frequent schema changes, or high availability requirements, a modular filegroup approach will give you the tools to keep your SQL Server environment running smoothly and efficiently.&lt;br&gt;
&lt;em&gt;Check your database status, with filegroups, and sizes&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; SELECT [databasefile].NAME  AS [FileName],
  [filegroup].NAME           AS [File_Group_Name],
  [filegroup].type_desc,
  physical_name [Data File Location],
  size / 128    AS [Size_in_MB],
  state_desc    [State of FILE],
  growth        [Data file growth]
FROM   sys.database_files [databasefile]
INNER JOIN sys.filegroups [filegroup]
          ON [databasefile].data_space_id = [filegroup].data_space_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t miss my other posts for more tips and best practices.&lt;/p&gt;

&lt;p&gt;Reference:&lt;br&gt;
&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-files-and-filegroups?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-files-and-filegroups?view=sql-server-ver17&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sqlserver</category>
      <category>filegroups</category>
      <category>tutorial</category>
      <category>sql</category>
    </item>
    <item>
      <title>MSSQL Encryptions, Certificates, A-Symmetric Keys and more (Vol 1)</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Sun, 11 Jan 2026 15:36:22 +0000</pubDate>
      <link>https://forem.com/abeamar/mssql-encryptions-certificates-a-symmetric-keys-and-more-vol-1-1ed9</link>
      <guid>https://forem.com/abeamar/mssql-encryptions-certificates-a-symmetric-keys-and-more-vol-1-1ed9</guid>
      <description>&lt;p&gt;In today's world, protecting sensitive data is a top priority.&lt;br&gt;
I'll be covering this topic in two volumes. I will explore encryption strategies within Microsoft SQL Server, focusing on best practices for securing your data. The second volume will dive into practical examples and some extra tips. In this section, we will focus on the best practices and available options, including&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Master Key (SMK) and Database Master Key (DMK)&lt;/strong&gt;: Key components of key management in SQL Server. Important in securing your database and ensuring key recovery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Symmetric and Asymmetric Keys&lt;/strong&gt;: Difference and use cases for symmetric (single key) encryption versus asymmetric (public/private key) encryption, and how they protect data at rest and in transit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Certificates&lt;/strong&gt;: They are a more secure and scalable solution for key management, and they improve your encryption strategy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transparent Data Encryption (TDE)&lt;/strong&gt;: TDE automatically encrypts an entire database to protect data at rest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TLS Encryption&lt;/strong&gt;: Protecting data in transit, ensuring that the data moving between your SQL Server instance and client applications is encrypted and secure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Column-Level Encryption&lt;/strong&gt;: A look at how you can protect sensitive data at the column level, offering flexibility for encryption on specific data within your database.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Properly managing Certificates, Symmetric keys, and Asymmetric keys can be tricky. That is why it's important to avoid relying on simple passwords, and why you should instead leverage Database Master Keys (DMK), Service Master Keys (SMK), and Certificates for a more robust and recoverable encryption strategy.&lt;/p&gt;
&lt;h2&gt;
  
  
  Database Master Key (DMK) and Service Master Key (SMK)
&lt;/h2&gt;

&lt;p&gt;Both the DMK and SMK are crucial for key management in SQL Server.&lt;br&gt;
The DMK is used to protect certificates and symmetric keys within a database. It's essential for managing encryption at the database level.&lt;br&gt;
The SMK is tied to the SQL Server instance and is used to encrypt the DMK and other server level secrets. After restarting SQL Server if the SMK and DMK are in place and were correctly created, they are reloaded into memory automatically when SQL Server starts. All your certificates (including those signed by the DMK) should be accessible because the SMK and DMK are available in memory to decrypt them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Backups Matter&lt;/strong&gt;&lt;br&gt;
It's important to back up both the SMK and DMK to ensure that your encryption strategy is recoverable. Backups are critical for disaster recovery, key restoration, and server migrations. They are especially necessary if your server crashes, is rebuilt, or if keys are accidentally deleted or corrupted.&lt;/p&gt;

&lt;p&gt;To check if you have SMK and DMK use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM sys.symmetric_keys WHERE name = '##MS_ServiceMasterKey##';
SELECT * FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to create them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE MASTER KEY ENCRYPTION BY PASSWORD='Password$1';

BACKUP MASTER KEY 
TO FILE = 'C:\ABEDMK' ENCRYPTION BY PASSWORD = 'Password';

BACKUP SERVICE MASTER KEY 
TO FILE = 'C:\ABESMK' ENCRYPTION BY PASSWORD = 'Password';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check whether your certificates are protected by password or DMK use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT name, pvt_key_encryption_type_desc FROM sys.certificates;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Certificates
&lt;/h2&gt;

&lt;p&gt;While SQL Server allows you to use passwords for encryption, using Certificates for key management is a much more secure and scalable solution.&lt;br&gt;
The flexibility of certificates ensures that you can easily back up and recover keys without relying on passwords alone.&lt;br&gt;
Using certificates not only enhances encryption but also allows you to elevate User rights by signing stored procedures. This can be incredibly useful in special cases where users need elevated permissions to perform certain tasks at the system level, such as sending emails or executing jobs from user database.&lt;br&gt;
Certificates offer multiple encryption options, including algorithms like SHA-256 and RSA.&lt;br&gt;
These algorithms ensure that your encryption processes are not only secure but also meet industry standards, providing a higher level of protection for your data.&lt;/p&gt;

&lt;p&gt;Create and back up your certificate together with its private key, and secure it with the commands below.&lt;br&gt;
Don't forget to set an expiry date, because by default it is one year.&lt;br&gt;
RSA with a 2048-bit key is the default algorithm for the private key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE MASTER KEY ENCRYPTION BY PASSWORD='dbPassword$1';

CREATE CERTIFICATE ABECert
WITH SUBJECT='Data_Encryption',
EXPIRY_DATE = '2040-12-31';

BACKUP CERTIFICATE ABECert
TO FILE = 'C:\ABECert'
WITH PRIVATE KEY (file='C:\ABECert_key.pvk',
ENCRYPTION BY PASSWORD='Password');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because I haven't used ENCRYPTION BY PASSWORD, my certificate's private key is protected by the DMK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vq73goq0gynw89np6af.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vq73goq0gynw89np6af.jpg" alt=" " width="299" height="45"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Symmetric Keys combined with Certificate (for server-side encryption)
&lt;/h2&gt;

&lt;p&gt;With symmetric keys, you can encrypt sensitive columns at the database level, ensuring that data is encrypted at rest and can only be accessed by authorized users or applications.&lt;br&gt;
My advice is not to use password-based encryption when creating them, but to rely solely on your protected certificate.&lt;br&gt;
This approach keeps sensitive information secure even if the database is compromised, as the data is only readable by users who have been granted permission on the certificate.&lt;br&gt;
Once set up, you can encrypt your columns and decrypt them during SELECT, UPDATE, or INSERT operations.&lt;br&gt;
We'll dive into a practical example of how to implement this case in the next volume.&lt;/p&gt;

&lt;p&gt;List all symmetric keys with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM sys.symmetric_keys;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this command, I am creating a symmetric key that relies on the certificate I created earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SYMMETRIC KEY ABESymmKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE ABECert;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
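&lt;p&gt;As a quick preview of the next volume, here is a minimal sketch of using ABESymmKey to encrypt and decrypt a column. The table and column names are hypothetical,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPEN SYMMETRIC KEY ABESymmKey DECRYPTION BY CERTIFICATE ABECert;

--encrypt on insert (CardNumber is a hypothetical VARBINARY column)
INSERT INTO dbo.Payments (CardNumber)
VALUES (EncryptByKey(Key_GUID('ABESymmKey'), N'4111111111111111'));

--decrypt on read
SELECT CONVERT(NVARCHAR(50), DecryptByKey(CardNumber)) AS CardNumber
FROM dbo.Payments;

CLOSE SYMMETRIC KEY ABESymmKey;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;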



&lt;h2&gt;
  
  
  Asymmetric Keys and Always Encrypted (for client-side encryption)
&lt;/h2&gt;

&lt;p&gt;Always Encrypted is an excellent feature that uses asymmetric keys for encrypting and decrypting sensitive data transparently.&lt;br&gt;
This approach allows you to encrypt sensitive columns directly from the application, ensuring that the decryption key never leaves the client-side application, which provides maximum data protection.&lt;br&gt;
Always Encrypted ensures that data is encrypted on the client side and stays encrypted when it’s stored in the database, with no possibility of anyone (including DBAs) reading the sensitive data without the decryption keys.&lt;/p&gt;

&lt;p&gt;Check whether they exist with these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM sys.column_master_keys;
SELECT * FROM sys.column_encryption_keys;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you INSERT, UPDATE, or SELECT, the client driver transparently encrypts and decrypts the data using the column encryption key (CEK).&lt;br&gt;
There are two encryption types, DETERMINISTIC and RANDOMIZED. The difference is that with deterministic encryption, identical values (for example "Abeamar", "Abeamar") always encrypt to the same ciphertext, which enables equality lookups; randomized encryption is more secure but prevents them.&lt;/p&gt;

&lt;p&gt;Once you create your Certificate (in my case CMK_ABEAE_Cert), import it into the server's credentials. Then, to enable Always Encrypted on a column, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE COLUMN MASTER KEY [AbeMK]
WITH
(
    KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE', 
    KEY_PATH = 'LocalMachine/My/AbeMK_Cert');

CREATE COLUMN ENCRYPTION KEY [AbeCEK]
WITH VALUES
(
    COLUMN_MASTER_KEY = [AbeMK],
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075007200720065006E00740075007300650072002F006D0079002F0064003900620064006600370032003100650033003400660031003000640033006600390065003200310062003800310035003100310033003300640062003700610061003700660039003100620035001532B80FF86E6773B5DDB0E4A83B1568DA8646400A9612556FED5210F598D8198F7F89E440E891A3DC99ACE3B381FD40E92FFCE7351F1F74DADBE7C591CBA95C94F9640181AA8D3111F20B39FF4F4A0DBA34F1CB1402B7429893BCA3F89607D208FE63236FA13032CB546F5D521360E761E4555DA78D42C061D09B3C55287928CE25FE182C237CA94B57697BE6F0935655DD0F208E2EEEDA357EB99FC27816D79D217E8D913FCDD1690AF7291EF8D452D340266F4A9AE9F964F05B7718F211A400850AB4C95AF417F5F3080CFAD51CA7AE01634FE24DBF050D7C2FAB6759FB92D1302EB4467E55591C1AD44E3F44051BC545F979A6B951962C6C7237B92ABF1FD94E1197AF7C26A3D9C9BF6CF6C7C42C4FE1032084F49233743C5831F38199633A0F86145FEF95EA4187E6F98A67BF206DE5A36258D0A37B3E2D6E353ACF357F1FAEE7B9E59A81C8D223B3A4F70E55C3614AD7134A7C564F0A386C61998CB9E859423D855F23E57E069A70FD568BA9783472EC44354DACA35D5D4FA109DFC444F39143E2A4BD76810D184F7A36CFBF1449D25A0C011DB502D355AB41C47BCF48A6A74D9B53D51C68C431D4CE47FD5C7CF3A050B2CB58B540CDFBDE977C3BCB1CD3C7C7EB7783F30B8BCA9FD3BD30CBF5C8A71AC0480357CB50410E832DD70F790888FE9F14825E64639B5CA59980AAD20C20C22B9207E98AB4CF1D817EF104B4
)
GO

CREATE TABLE Abe_Users
(
    ID INT PRIMARY KEY,
    Name NVARCHAR(100),
    phones NVARCHAR(11)
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, all you need to do is give your clients access to the certificate (the column master key) and have them enable Always Encrypted on their connections so the driver can decrypt the data. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F922deydn4r8alrcz2hd4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F922deydn4r8alrcz2hd4.jpg" alt=" " width="627" height="180"&gt;&lt;/a&gt;&lt;br&gt;
To encrypt a column "phones", simply right click on the table and select "Encrypt Column".&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzvly9c1pqdegsv0tlwx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzvly9c1pqdegsv0tlwx.jpg" alt=" " width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;
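
&lt;p&gt;On the client side, enabling Always Encrypted is typically just a connection-string flag (shown here for ADO.NET/SqlClient; the server and database names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Not T-SQL: an example SqlClient connection string
Server=myServer;Database=myDb;Trusted_Connection=yes;Column Encryption Setting=Enabled;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;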

&lt;p&gt;Keep in mind that if your application involves frequent data imports or exports, or handling large volumes of data every day the encryption and decryption process may slow down overall performance. Each time data is accessed, it must be decrypted for processing and then encrypted again before storage, which can introduce overhead.&lt;/p&gt;
&lt;h2&gt;
  
  
  Final thoughts on symmetric vs. asymmetric options
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use Symmetric Encryption when performance is a priority and the database or infrastructure team controls the environment, with appropriate safeguards for key management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Always Encrypted with Asymmetric Keys if data security is more important than performance, especially if you need to ensure that even DBAs can't access the data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if the level of security required isn't extremely high, and you're more focused on preventing casual unauthorized access, the &lt;strong&gt;data masking function&lt;/strong&gt; is an excellent option. It strikes a balance between usability and security, with practically no performance overhead and no encryption needed. Keep it simple: just mask it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER TABLE [Abe_Users]
ALTER COLUMN [phones] ADD MASKED WITH (FUNCTION = 'partial(2,"XXXXXXX",1)');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
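
&lt;p&gt;To see the mask in action, you can impersonate a low-privilege user (testuser here is hypothetical; users holding the UNMASK permission still see the full values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: testuser is a hypothetical user without UNMASK
EXECUTE AS USER = 'testuser';
SELECT phones FROM Abe_Users;  -- returns values like 06XXXXXXX7
REVERT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;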



&lt;h2&gt;
  
  
  Transparent Data Encryption (TDE)
&lt;/h2&gt;

&lt;p&gt;This is another powerful encryption method that encrypts the entire database, including the transaction log, automatically and transparently.&lt;br&gt;
It relies on a Database Encryption Key (DEK), which is encrypted by a certificate stored in the master database.&lt;br&gt;
TDE is useful for securing data at rest, particularly in scenarios where you need to protect the physical files of the database.&lt;br&gt;
We'll dive into a practical example of how to implement TDE in the next volume.&lt;br&gt;
You can check whether your database is encrypted, or track the progress after triggering encryption, using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT DB_NAME(database_id) AS DatabaseName,
       encryption_state,
       encryption_state_desc =
           CASE encryption_state
               WHEN 0 THEN 'No encryption'
               WHEN 1 THEN 'Unencrypted'
               WHEN 2 THEN 'In progress'
               WHEN 3 THEN 'Encrypted'
               WHEN 4 THEN 'Key change in progress'
               WHEN 5 THEN 'Decryption in progress'
               WHEN 6 THEN 'Protection change in progress'
               ELSE 'No Status'
           END,
       percent_complete,
       create_date,
       key_algorithm, key_length,
       encryptor_thumbprint,
       encryptor_type
FROM sys.dm_database_encryption_keys;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
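
&lt;p&gt;For orientation before the next volume, the DEK hierarchy described above maps to roughly these steps (a minimal sketch; the key password, certificate name, and database name are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: names and password are placeholders
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!1';
CREATE CERTIFICATE TDE_Cert WITH SUBJECT = 'TDE certificate';
USE MyDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDE_Cert;
ALTER DATABASE MyDb SET ENCRYPTION ON;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;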



&lt;p&gt;By using certificates for encryption, employing Database Master Keys (DMK) and Service Master Keys (SMK) for recovery and security, and leveraging Transparent Data Encryption (TDE) for database protection, you can create a robust, recoverable encryption strategy that ensures your data is safe even in the worst case scenario.&lt;/p&gt;

&lt;p&gt;😎 &lt;br&gt;
Don't miss my other posts for more tips and best practices.&lt;br&gt;
I'll be covering this topic in two volumes, with the second one focusing more on practical examples and also on &lt;strong&gt;TLS protections&lt;/strong&gt; to encrypt data transmitted between a SQL Server instance and a client application, ensuring data privacy and integrity.&lt;br&gt;
Stay tuned..&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/t-sql/statements/create-column-encryption-key" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/t-sql/statements/create-column-encryption-key&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/t-sql/statements/create-certificate-transact-sql?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/t-sql/statements/create-certificate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/certificate-overview?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/certificate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/t-sql/statements/create-master-key-transact-sql?view=sql-server-ver17" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/sql/t-sql/statements/create-master-key&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>sqlserver</category>
      <category>database</category>
      <category>encryption</category>
      <category>learning</category>
    </item>
    <item>
      <title>Craft and Control Resource Pools on MSSQL</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Wed, 07 Jan 2026 21:52:46 +0000</pubDate>
      <link>https://forem.com/abeamar/craft-and-control-resource-pools-on-mssql-4jci</link>
      <guid>https://forem.com/abeamar/craft-and-control-resource-pools-on-mssql-4jci</guid>
      <description>&lt;p&gt;Time to take control of your server's resources..&lt;br&gt;
On your MS SQL instance not all workloads should be treated equally.&lt;br&gt;
One heavy query executed from SQL Server Management Studio (SSMS) can easily consume CPU and memory, impacting production workloads.&lt;/p&gt;

&lt;p&gt;Resource Governor is an underused but &lt;strong&gt;powerful feature&lt;/strong&gt; in SQL Server.&lt;br&gt;
With a simple function and resource pool, you can dramatically improve stability and predictability in multi-user environments.&lt;/p&gt;

&lt;p&gt;In this post, I’ll show how to use SQL Server Resource Governor to,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detect connections coming from SSMS,&lt;/li&gt;
&lt;li&gt;Assign them to a limited workload group,&lt;/li&gt;
&lt;li&gt;Restrict their CPU and memory usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is Resource Governor?&lt;/strong&gt;&lt;br&gt;
Resource Governor is a SQL Server feature that allows you to control how much CPU and memory different workloads can consume.&lt;br&gt;
This approach is especially useful for production safety, or DBA-controlled access.&lt;/p&gt;

&lt;p&gt;After this setup any query executed from Microsoft SQL Server Management Studio will be assigned to a limited resource pool.&lt;br&gt;
All other connections remain in the default pool.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This also means you can customize resource pools to suit your specific needs by modifying the function below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can control CPU, RAM resources for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specific SQL users,&lt;/li&gt;
&lt;li&gt;Reporting or BI tools,&lt;/li&gt;
&lt;li&gt;ETL jobs,&lt;/li&gt;
&lt;li&gt;Ad-hoc users,&lt;/li&gt;
&lt;li&gt;Third-party applications,&lt;/li&gt;
&lt;li&gt;Nightly or background workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All you need to do is modify the function and route those sessions into the appropriate workload group.&lt;/p&gt;

&lt;p&gt;Let's begin with an example of limiting CPU &amp;amp; RAM for sessions coming from SSMS,&lt;br&gt;
&lt;strong&gt;1. Enable Governor.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER RESOURCE GOVERNOR RECONFIGURE;  
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create your function.&lt;/strong&gt;&lt;br&gt;
This decides which workload group a session should use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Classifier functions must be created in the master database
CREATE FUNCTION [dbo].[ResourceGroup_Users]()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
DECLARE @WorkloadGroup SYSNAME;
DECLARE @ProgramName SYSNAME;

SET @ProgramName = CONVERT(SYSNAME, PROGRAM_NAME());
IF @ProgramName like 'Microsoft SQL Server Management Studio%'
BEGIN
    SET @WorkloadGroup = 'LimitedResourceGroup';
END
ELSE
BEGIN
    SET @WorkloadGroup = 'default';
END

RETURN @WorkloadGroup;

END
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, if you want to restrict specific SQL users, you can easily modify this function by adding an additional condition - IF, or ELSE IF.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IF SUSER_NAME() = 'testuser'
    SET @WorkloadGroup = 'LimitedResourceGroup';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to route specific users to a resource pool with customized limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Create a Resource Pool.&lt;/strong&gt;&lt;br&gt;
This ensures SSMS queries can never consume more than 30% CPU, 30% memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE RESOURCE POOL [LimitedResourcePool]
WITH
(min_cpu_percent=0,
max_cpu_percent=30,
min_memory_percent=0,
max_memory_percent=30
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Create a Workload Group.&lt;/strong&gt;&lt;br&gt;
Now we link a workload group to the limited resource pool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE WORKLOAD GROUP [LimitedResourceGroup]
USING [LimitedResourcePool];
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Enable the Classifier Function.&lt;/strong&gt;&lt;br&gt;
Finally, tell Resource Governor to use our function and reconfigure it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION=[dbo].[ResourceGroup_Users]);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can verify what you have created with these commands,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM sys.resource_governor_configuration
SELECT * FROM sys.resource_governor_resource_pools
SELECT * FROM sys.resource_governor_workload_groups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep in mind that,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These changes apply only to new sessions, not to existing ones. Restart the SQL Server service or terminate existing sessions for the changes to take effect.&lt;/li&gt;
&lt;li&gt;Resource Governor is available in SQL Server Standard Edition starting with SQL Server 2025. For all previous versions, it is an Enterprise-only feature. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To monitor which sessions are currently using your resource pool, you can run the following query. This will help you ensure that the sessions are properly classified and using the intended resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT
s.session_id,
s.login_name,
rg.name AS resource_pool_name
FROM
sys.dm_exec_sessions AS s
JOIN
sys.dm_resource_governor_workload_groups AS wg
ON s.group_id = wg.group_id
JOIN
sys.dm_resource_governor_resource_pools AS rg
ON wg.pool_id = rg.pool_id
WHERE
s.is_user_process = 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Limit RAM/CPU usage for the entire SQL Server instance&lt;/strong&gt;&lt;br&gt;
While Resource Governor allows you to assign specific resource limits to workloads, it’s also important to know that you can limit the total memory and CPU usage for the entire SQL Server instance itself.&lt;br&gt;
This prevents SQL Server from consuming all the available memory, leaving resources for other processes.&lt;br&gt;
You can limit the maximum amount of RAM using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXEC sp_configure 'max server memory', 4096;  --Set maximum memory to 4 GB
RECONFIGURE;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can indirectly cap CPU pressure by limiting the number of worker threads with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXEC sp_configure 'max worker threads', 512;  --Limit worker threads
RECONFIGURE;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now go ahead and put those resource pools to work.&lt;br&gt;
I hope this helps you keep your SQL Server running smoothly! &lt;br&gt;
Don’t miss my other posts for more tips and best practices.&lt;/p&gt;

</description>
      <category>sqlserver</category>
      <category>sql</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Track Blocking and Deadlocks in MSSQL with my custom script</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Sun, 04 Jan 2026 13:05:14 +0000</pubDate>
      <link>https://forem.com/abeamar/track-blocking-and-deadlocks-in-mssql-with-my-custom-script-1877</link>
      <guid>https://forem.com/abeamar/track-blocking-and-deadlocks-in-mssql-with-my-custom-script-1877</guid>
      <description>&lt;p&gt;&lt;strong&gt;Blocking&lt;/strong&gt; happens when one process holds a lock on a resource (a row or table..) and another process is waiting for that resource to be released. &lt;br&gt;
The second process is blocked and will have to wait for the first process to release the lock.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deadlocks&lt;/strong&gt;, on the other hand, occur when two or more processes are waiting for each other to release locks on resources, creating a circular dependency. &lt;br&gt;
Process 1 has Resource A locked, and Process 2 has Resource B locked. &lt;br&gt;
They each need the resource the other holds to proceed, and since neither can continue without the other releasing its lock, the sessions are at a stalemate. SQL Server resolves this by choosing one session as the deadlock victim and rolling it back so the other can proceed.&lt;/p&gt;
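
&lt;p&gt;A minimal way to reproduce blocking yourself (table and values are hypothetical) is to run these statements in two separate SSMS sessions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Session 1: take a lock and hold it
BEGIN TRAN;
UPDATE dbo.Orders SET Status = 'X' WHERE OrderID = 1;
-- no COMMIT yet

-- Session 2: this UPDATE is now blocked until Session 1 commits or rolls back
UPDATE dbo.Orders SET Status = 'Y' WHERE OrderID = 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;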

&lt;p&gt;I wanted to have accurate information about locks on my databases. I did that using the right extended events and then created a custom SELECT query to extract the right information from the created file and present it in my custom reports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Key columns for deadlock sessions are deadlock victim, deadlock object, sql_text and the users in process. All with clear overview.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Key columns for blocking sessions are blocking start and end time, duration, sql text and users in process. It's all displayed in ONE line, so it gives you a clear overview.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is an overview of the information you get at the end.&lt;/p&gt;

&lt;p&gt;--&lt;/p&gt;

&lt;p&gt;First we need to enable the blocked process threshold and then create the Extended Event session. The minimum threshold value for detecting blocks is 5 seconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exec sp_configure 'show advanced options',1;
GO
RECONFIGURE;
GO
exec sp_configure 'blocked process threshold (s)',5;
GO
RECONFIGURE;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Blocking Event
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EVENT SESSION blckCapture
ON SERVER
ADD EVENT sqlserver.blocked_process_report(
    ACTION (
        sqlserver.sql_text,
        sqlserver.session_id,
        sqlserver.username,
        sqlserver.client_hostname
    ))
ADD TARGET package0.event_file(SET filename=N'C:\blckSessions.xel',max_file_size=(100),max_rollover_files=(5))
WITH (MAX_MEMORY=4096 kB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=36000 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=ON)
GO
ALTER EVENT SESSION blckCapture ON SERVER STATE = START;
ALTER EVENT SESSION blckCapture ON SERVER WITH (STARTUP_STATE=ON);
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deadlock Event
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EVENT SESSION deadlckCapture
ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file(SET filename=N'C:\dlckSessions.xel',max_file_size=(10),max_rollover_files=(5))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=ON)
GO
ALTER EVENT SESSION deadlckCapture ON SERVER STATE = START;
ALTER EVENT SESSION deadlckCapture ON SERVER WITH (STARTUP_STATE=ON);
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create Views
&lt;/h2&gt;

&lt;p&gt;The last step is to create views in a database of your choice. After that you can build your reports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE VIEW vw_blckSessions AS
WITH blckData AS (
    SELECT
        DATEADD(HOUR, 1, event_data.value('(event/@timestamp)[1]', 'DATETIME')) AS EventTime, -- shift the UTC event timestamp to local time (UTC+1)
        blocked_process.value('@spid', 'INT') AS BlockedSPID,
        blocking_process.value('@spid', 'INT') AS BlockingSPID,
        blocked_process.value('@hostname', 'NVARCHAR(256)') AS BlockedHostname,
        blocked_process.value('@loginname', 'NVARCHAR(256)') AS BlockedLoginName,
        blocked_process.value('(inputbuf)[1]', 'NVARCHAR(MAX)') AS BlockedSQLText,
        blocking_process.value('@hostname', 'NVARCHAR(256)') AS BlockingHostname,
        blocking_process.value('@loginname', 'NVARCHAR(256)') AS BlockingLoginName,
        blocking_process.value('(inputbuf)[1]', 'NVARCHAR(MAX)') AS BlockingSQLText
    FROM (
        SELECT CAST(event_data AS XML) AS event_data
        FROM sys.fn_xe_file_target_read_file('C:\blckSessions*.xel', NULL, NULL, NULL)) AS Data
        CROSS APPLY event_data.nodes('//event[@name="blocked_process_report"]/data[@name="blocked_process"]/value/blocked-process-report') AS XEventData (blocked_report)
        CROSS APPLY XEventData.blocked_report.nodes('blocked-process/process') AS BlockedProcessNode (blocked_process)
    CROSS APPLY XEventData.blocked_report.nodes('blocking-process/process') AS BlockingProcessNode (blocking_process))
    ,blckData2 AS (SELECT
                    CONVERT(VARCHAR(19), MIN(EventTime), 120) AS Eventime_start,
                    CONVERT(VARCHAR(19), MAX(EventTime), 120) AS Eventime_last,
                    DATEDIFF(SECOND, MAX(EventTime), MIN(EventTime)) as Duration,
                    BlockingSPID,
            BlockingHostname,
            BlockingLoginName,
            BlockingSQLText,
            BlockedSPID,
                BlockedHostname,
                BlockedLoginName,
                BlockedSQLText
            FROM blckData
            GROUP BY BlockedSPID, BlockedHostname, BlockedLoginName, BlockedSQLText, BlockingSPID, BlockingHostname, BlockingLoginName, BlockingSQLText)
    SELECT
                    Eventime_start
                    ,Eventime_last
                    ,ABS(Duration) AS Duration
                    ,BlockingSPID
                    ,BlockedSPID
                ,BlockingHostname
                ,BlockingLoginName
                ,BlockingSQLText
                ,BlockedHostname
                ,BlockedLoginName
            ,BlockedSQLText
    FROM blckData2;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE VIEW vw_dlckSessions AS
    SELECT
    event_data.value('(event/@timestamp)[1]', 'DATETIME') AS DeadlockStartTime,
    deadlock_node.value('@hostname', 'NVARCHAR(256)') AS Hostname,
    deadlock_node.value('@loginname', 'NVARCHAR(256)') AS LoginName,
    deadlock_node.value('@spid', 'INT') AS SPID,
    deadlock_node.value('(inputbuf)[1]', 'NVARCHAR(MAX)') AS SQLText,
    resource_node.value('@objectname', 'NVARCHAR(256)') AS ObjectName,
    CASE
        WHEN deadlock_node.value('@id', 'NVARCHAR(256)') = victim_node.value('@id', 'NVARCHAR(256)')
        THEN 1
        ELSE 0
    END AS Victim,
    CASE
        WHEN deadlock_node.value('@id', 'NVARCHAR(256)') = victim_node.value('@id', 'NVARCHAR(256)')
        THEN 'Yes'
        ELSE 'No'
    END AS Evicted
    FROM (
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\dlckSessions*.xel', NULL, NULL, NULL)) AS Data
CROSS APPLY Data.event_data.nodes('//event[@name="xml_deadlock_report"]/data[@name="xml_report"]/value/deadlock/process-list/process') AS ProcessNode (deadlock_node)
CROSS APPLY Data.event_data.nodes('//event[@name="xml_deadlock_report"]/data[@name="xml_report"]/value/deadlock/resource-list/keylock') AS ResourceNode (resource_node)
CROSS APPLY Data.event_data.nodes('//event[@name="xml_deadlock_report"]/data[@name="xml_report"]/value/deadlock/victim-list/victimProcess') AS VictimNode (victim_node);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With these two views you can monitor session IDs on your SQL instance, with all the necessary information about time, history, SQL text and more for debugging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM vw_dlckSessions 
ORDER BY DeadlockStartTime DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotubvgyrptxx66ex69jk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotubvgyrptxx66ex69jk.jpg" alt=" " width="800" height="79"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM vw_blckSessions 
ORDER BY Eventime_last DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot7ytuiox4r0tgy2gkeq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot7ytuiox4r0tgy2gkeq.jpg" alt=" " width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;--&lt;/p&gt;

&lt;p&gt;To wrap things up, here are some tips to prevent deadlocks and blocking by optimizing your queries for better performance.&lt;/p&gt;

&lt;p&gt;Deadlock Prevention Tips,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lock Resources in Order. Always acquire locks in the same sequence to avoid circular waiting.&lt;/li&gt;
&lt;li&gt;Keep Transactions Short. Minimize the duration of locks by limiting transaction size.&lt;/li&gt;
&lt;li&gt;Use Retry Logic. Automatically retry transactions that are killed due to deadlocks.&lt;/li&gt;
&lt;/ul&gt;
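
&lt;p&gt;The retry-logic tip can be sketched in T-SQL by catching deadlock error 1205 (table, column, and values are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: retry a transaction that was chosen as a deadlock victim
DECLARE @retry INT = 3;
WHILE @retry &amp;gt; 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE dbo.Orders SET Status = 'Done' WHERE OrderID = 1;
        COMMIT;
        SET @retry = 0;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() &amp;lt;&amp;gt; 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205  -- deadlock victim, try again
            SET @retry = @retry - 1;
        ELSE
        BEGIN
            SET @retry = 0;
            THROW;  -- any other error is re-raised
        END
    END CATCH
END
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;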

&lt;p&gt;Blocking Prevention Tips,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimize Queries. Efficient queries reduce execution time and lock duration.&lt;/li&gt;
&lt;li&gt;Use Proper Indexing. Speed up data access to minimize lock contention.&lt;/li&gt;
&lt;li&gt;Limit Simultaneous Writes. Reduce conflicting updates.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>sqlserver</category>
      <category>sql</category>
      <category>database</category>
      <category>deadlock</category>
    </item>
    <item>
      <title>Linked Server vs OPENQUERY vs OPENROWSET vs OPENDATASOURCE</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Sat, 03 Jan 2026 23:04:58 +0000</pubDate>
      <link>https://forem.com/abeamar/linked-server-vs-openquery-vs-openrowset-vs-opendatasource-432m</link>
      <guid>https://forem.com/abeamar/linked-server-vs-openquery-vs-openrowset-vs-opendatasource-432m</guid>
      <description>&lt;p&gt;MS SQL Server provides several ways to execute remote (distributed) queries, but choosing the wrong approach can easily lead to performance bottlenecks, security exposure, and long-term maintenance issues.&lt;br&gt;
Remote queries allow you to access tables on different servers or instances, perform data filtering and transformation across servers.&lt;/p&gt;

&lt;p&gt;In this article, I will compare the four most common approaches,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/linked-servers/linked-servers-database-engine?view=sql-server-ver17" rel="noopener noreferrer"&gt;Linked Server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/t-sql/functions/openquery-transact-sql?view=sql-server-ver17" rel="noopener noreferrer"&gt;OPENQUERY&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver17" rel="noopener noreferrer"&gt;OPENROWSET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/t-sql/functions/opendatasource-transact-sql?view=sql-server-ver17" rel="noopener noreferrer"&gt;OPENDATASOURCE&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each method, I will cover how it works, where it is useful, what the drawbacks are, and a simple example.&lt;br&gt;
At the end, I will explain why &lt;strong&gt;OPENQUERY&lt;/strong&gt; is usually the best choice in real-world production systems, when combined with proper &lt;strong&gt;Linked Server&lt;/strong&gt; configuration.&lt;br&gt;
Before each test, enable statistics with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SET STATISTICS IO ON;
SET STATISTICS TIME ON;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These settings will help us compare Logical reads, CPU time, execution behavior.&lt;/p&gt;

&lt;p&gt;--&lt;/p&gt;

&lt;h2&gt;
  
  
  Linked Server
&lt;/h2&gt;

&lt;p&gt;This is the most common way to query remote data in SQL Server. &lt;br&gt;
Easy to write and read. No need for dynamic SQL for execution and variables.&lt;br&gt;
Linked Server passwords are not hashed; they are stored encrypted and never exposed in clear text, which provides secure authentication.&lt;br&gt;
Has mapping options (impersonate current login, use a specific remote login, or a fallback).&lt;br&gt;
Using Linked Servers ensures that there is a clear, controlled configuration of the remote connection with proper authentication, encryption, and validation.&lt;br&gt;
Filters are not always pushed to the remote server and this can result in large data transfers, high network usage, poor performance on large tables. In many cases, SQL Server will pull all rows from the remote table and apply the WHERE filter locally.&lt;br&gt;
Connection is reusable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sp_addlinkedserver 'w19';
SELECT * FROM [w19].[AdventureWorks2022].[Sales].[SalesOrderDetail]
WHERE ModifiedDate &amp;gt; '2011-05-31';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CPU Time: 4717 ms&lt;br&gt;
Elapsed Time: 7734 ms&lt;/p&gt;

&lt;p&gt;--&lt;/p&gt;
&lt;h2&gt;
  
  
  OPENQUERY
&lt;/h2&gt;

&lt;p&gt;The query is executed entirely on the remote server, so filters and joins are applied remotely. You must pass the query as a string (dynamic SQL). It is the best choice for large tables and production workloads. &lt;strong&gt;OPENQUERY inherits Linked Server security and configuration.&lt;/strong&gt; When querying non-SQL Server databases (Oracle, MySQL, PostgreSQL), this is the way to go. The connection is reusable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM OPENQUERY([w19], '
SELECT * FROM [AdventureWorks2022].[Sales].[SalesOrderDetail]
WHERE ModifiedDate &amp;gt; ''2011-05-31''');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CPU Time: 3367 ms&lt;br&gt;
Elapsed Time: 5443 ms&lt;/p&gt;

&lt;p&gt;OPENQUERY is designed to work with linked servers, meaning it sends a query to another server, which could be a remote database or a file system via a linked provider (such as OLE DB or an ODBC driver), but you can also use it for files such as CSV. &lt;strong&gt;For large files OPENROWSET tends to be faster&lt;/strong&gt;.&lt;br&gt;
You would first need to create a linked server pointing at the file, and then you can use OPENQUERY. Code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXEC sp_addlinkedserver 
    @server = 'FileServer', 
    @srvproduct = '', 
    @provider = 'Microsoft.ACE.OLEDB.12.0', 
    @datasrc = 'C:\Temp\EmployeeData.csv',
    @provstr = 'Text;HDR=YES;FMT=CSVDelimited';
SELECT * 
FROM OPENQUERY(FileServer, 'SELECT * FROM [EmployeeData.csv]')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the major advantages of using Linked Servers with OPENQUERY is the &lt;strong&gt;ability to centrally control and redirect connections&lt;/strong&gt;. In a single script, you can point your links to different servers, for example switching between DEV, UAT, or production environments, without changing your query code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT name, provider, data_source FROM sys.servers;
DECLARE @name sysname = 'w19', @datasource sysname = 'w19\sql';
EXECUTE sp_setnetname @server = @name, @netname = @datasource;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next two options - OPENROWSET and OPENDATASOURCE require &lt;a href="https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/ad-hoc-distributed-queries-server-configuration-option?view=sql-server-ver17" rel="noopener noreferrer"&gt;AdHoc Distributed Queries&lt;/a&gt; to be enabled on the instance.&lt;br&gt;
They are disabled by default for security reasons. When this feature is enabled, it allows users to execute queries that directly access remote data sources without needing to configure a linked server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'ad hoc distributed queries', 1
RECONFIGURE
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;--&lt;/p&gt;

&lt;h2&gt;
  
  
  OPENROWSET
&lt;/h2&gt;

&lt;p&gt;It is mainly designed for ad-hoc access to external data sources, which makes it harder to govern. Unlike a linked server with OPENQUERY, it does not reuse connections: each execution performs a new connection handshake, authentication, query execution, and disconnect. This makes it unsuitable for frequent or high-volume workloads. &lt;strong&gt;However, it is good for reading external files (CSV, Excel)&lt;/strong&gt; and for temporary data access without creating a linked server (the connection is direct). It also accepts a 3-part object name in place of a query.&lt;br&gt;
DML commands (INSERT, UPDATE, DELETE) are unreliable and not recommended. It requires a DSN or inline credentials (password exposure risk).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM OPENROWSET(
    'SQLNCLI', -- deprecated provider; newer instances may need 'MSOLEDBSQL'
    'Server=w19;Database=AdventureWorks2022;Trusted_Connection=yes;',
    'SELECT * FROM Sales.SalesOrderDetail 
WHERE ModifiedDate &amp;gt;= ''2011-05-31'''
);

-- CSV/Excel example:
SELECT * FROM OPENROWSET(
    BULK 'C:\Temp\EmployeeData.csv',
    FORMAT = 'CSV',
    FIRSTROW = 2
) AS data;
SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
  'Excel 12.0 Xml; HDR=NO;
   Database=C:\Temp\EmployeeData.xlsx',
   [Sheet1$]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CPU Time: 4503 ms&lt;br&gt;
Elapsed Time: 7594 ms&lt;/p&gt;

&lt;p&gt;--&lt;/p&gt;
&lt;h2&gt;
  
  
  OPENDATASOURCE
&lt;/h2&gt;

&lt;p&gt;This is very similar to OPENROWSET and shares most of the same limitations. It also performs a new connection handshake on each execution.&lt;br&gt;
You do not send a query string; instead, you expose the remote server as a temporary data source and reference its objects directly (4-part name in the connection).&lt;br&gt;
DML commands (INSERT, UPDATE, DELETE) are unreliable and not recommended.&lt;br&gt;
It requires a DSN or inline credentials (password exposure risk).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM OPENDATASOURCE(
    'SQLNCLI',
    'Server=w19;Database=AdventureWorks2022;Trusted_Connection=yes;'
).AdventureWorks2022.Sales.SalesOrderDetail;

-- CSV example:
SELECT * FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0',
    'Text;Database=C:\Temp;HDR=YES')..[EmployeeData#csv]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CPU Time: 4592 ms&lt;br&gt;
Elapsed Time: 7718 ms&lt;/p&gt;

&lt;p&gt;I was using &lt;a href="https://learn.microsoft.com/en-us/sql/samples/adventureworks-install-configure?view=sql-server-ver17&amp;amp;tabs=ssms" rel="noopener noreferrer"&gt;AdventureWorks&lt;/a&gt; sample database for these examples.&lt;/p&gt;

</description>
      <category>sqlserver</category>
      <category>database</category>
      <category>sql</category>
      <category>remotequery</category>
    </item>
    <item>
      <title>WinForms in 2025? Here's the Slick MSSQL Monitoring App I Built</title>
      <dc:creator>Amar Abaz</dc:creator>
      <pubDate>Tue, 07 Oct 2025 22:49:26 +0000</pubDate>
      <link>https://forem.com/abeamar/winforms-in-2025-heres-the-slick-mssql-monitoring-app-i-built-3epf</link>
      <guid>https://forem.com/abeamar/winforms-in-2025-heres-the-slick-mssql-monitoring-app-i-built-3epf</guid>
      <description>&lt;p&gt;&lt;strong&gt;Monitor MSSQL performance with this custom app&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ever had to dig through multiple tools just to debug a failed MSSQL backup, or to trace an error while switching between multiple SQL Server instances? 😥🤯&lt;/p&gt;

&lt;p&gt;That’s why I built a small tool in Visual Studio, using C#, WinForms, PowerShell, T-SQL, and WMI, to monitor SQL Server health, logs, and performance stats in one place.&lt;/p&gt;

&lt;p&gt;The primary focus of my project was to build an app with all the necessary tools for a database administrator's everyday monitoring work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3l41jpqth2r4pqraatf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3l41jpqth2r4pqraatf.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What this tool monitors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disk usage and growth&lt;/li&gt;
&lt;li&gt;Memory and CPU utilization&lt;/li&gt;
&lt;li&gt;Database health and size&lt;/li&gt;
&lt;li&gt;Backup and restore history&lt;/li&gt;
&lt;li&gt;Active backup and restore operations&lt;/li&gt;
&lt;li&gt;Table sizes and index fragmentation&lt;/li&gt;
&lt;li&gt;SQL Server error logs&lt;/li&gt;
&lt;li&gt;SQL Agent job failures&lt;/li&gt;
&lt;li&gt;Schema object definitions&lt;/li&gt;
&lt;li&gt;Raw SQL object extraction&lt;/li&gt;
&lt;li&gt;Active sessions and blocking queries&lt;/li&gt;
&lt;li&gt;Deadlock tracking&lt;/li&gt;
&lt;li&gt;Custom session-level monitoring for your SQL Server instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feel free to contact me if you like what you see or have ideas about what to improve or add.&lt;br&gt;
I'd love to hear your feedback! 🤞&lt;/p&gt;

&lt;p&gt;Video link: &lt;a href="https://vimeo.com/1124492178" rel="noopener noreferrer"&gt;https://vimeo.com/1124492178&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've also published the app on Product Hunt:&lt;br&gt;
&lt;a href="https://www.producthunt.com/products/abemon" rel="noopener noreferrer"&gt;https://www.producthunt.com/products/abemon&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧑‍🚀 Check the app on my GitHub:&lt;br&gt;


&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/abeamar" rel="noopener noreferrer"&gt;
        abeamar
      &lt;/a&gt; / &lt;a href="https://github.com/abeamar/AbeMon" rel="noopener noreferrer"&gt;
        AbeMon
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      App created using Visual Studio, C#, WinForms, T-SQL, WMI. This App is for Light DBA everyday work on monitoring and troubleshooting.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;AbeMon App 1.0.0&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://git.io/typing-svg" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/cee6a01ffa4ca9c8eec421d87d11a2d19d28b49050f81c12a19a06f1ef0fa860/68747470733a2f2f726561646d652d747970696e672d7376672e64656d6f6c61622e636f6d2f3f666f6e743d43616c6c696272696c266c696e65733d53514c2b546f6f6c2b666f722b54726f75626c6573686f6f74696e672b796f75722b656e763b4d6f6e69746f722b616e642b44656275672b7573696e672b746869732b546f6f6c2e2e" alt="Typing SVG"&gt;&lt;/a&gt;&lt;br&gt;
App created using Visual Studio, C#, WinForms, T-SQL, PowerShell, WMI. This App is for Light DBA everyday work on monitoring and troubleshooting on MSSQL.&lt;/p&gt;

 &lt;p&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#about" rel="noopener noreferrer"&gt;1. About&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#options" rel="noopener noreferrer"&gt;2. Options&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#resource" rel="noopener noreferrer"&gt; 2.1. System&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#errors" rel="noopener noreferrer"&gt; 2.2. Errors&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#sessions" rel="noopener noreferrer"&gt; 2.3. Sessions&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#locks" rel="noopener noreferrer"&gt; 2.4. Locks&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#export" rel="noopener noreferrer"&gt; 2.5. Export&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#debug" rel="noopener noreferrer"&gt; 2.6. Debug&lt;/a&gt;&lt;br&gt;
        &lt;a href="https://github.com/abeamar/AbeMon#conclusion" rel="noopener noreferrer"&gt;3. Conclusion&lt;/a&gt;&lt;br&gt;
    &lt;/p&gt;
    
    &lt;br&gt;
     
        &lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;1. About&lt;/h2&gt;
&lt;/div&gt;
        &lt;p&gt;The primary focus of this project is to make an App that will have all the necessary tools for database administrator in their everyday monitoring work. This app is a fully functional tool. &lt;br&gt;The app is provided as a trial for 30 days, allowing you to experience its full functionality. Feel free to contact me if you like what you see. This App is designed to deliver smooth performance while using minimal system resources, consuming just 10-20MB of RAM.&lt;br&gt;Full Article on dev.to&lt;br&gt;
         &lt;a href="https://dev.to/abeamar/abemon-ms-sql-tool-v100-monitor-and-debug-60m" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/f175cab14fdc323f4eecbd3bd08240ba8d9d71676e8bc1f2b2e9204a1949c919/68747470733a2f2f6465762d746f2d75706c6f6164732e73332e616d617a6f6e6177732e636f6d2f75706c6f6164732f61727469636c65732f326d7766343376306678653766337a386d3637792e6a7067" height="270" width="540"&gt;&lt;/a&gt;
         &lt;br&gt; &lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🐔 AbeMon Install options: &lt;/h1&gt;
&lt;/div&gt;
&lt;br&gt;
         &lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;git clone "https://github.com/abeamar/AbeMon.git"&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;br&gt;
        Choose the installation method that works best for you!&lt;br&gt;
&lt;ol&gt;
  &lt;li&gt;Using the .exe or…&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/abeamar/AbeMon" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;




&lt;p&gt;Cheers,&lt;br&gt;
Amar.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>sqlserver</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
