<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Naimul Karim</title>
    <description>The latest articles on Forem by Naimul Karim (@naimulkarim).</description>
    <link>https://forem.com/naimulkarim</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3139981%2Fdfe579cc-ba31-417d-a087-76f5b7e61559.jpg</url>
      <title>Forem: Naimul Karim</title>
      <link>https://forem.com/naimulkarim</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/naimulkarim"/>
    <language>en</language>
    <item>
      <title>Basic Features of Claude AI</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Sun, 19 Apr 2026 19:15:20 +0000</pubDate>
      <link>https://forem.com/naimulkarim/basic-features-of-calude-ai-5e2e</link>
      <guid>https://forem.com/naimulkarim/basic-features-of-calude-ai-5e2e</guid>
      <description>&lt;p&gt;As of 2026, Claude AI has evolved from a simple chatbot into a highly capable "agentic" assistant. Through features like Claude Code, Claude Cowork, and the Model Context Protocol (MCP), its day-to-day use cases have expanded significantly beyond just writing and answering questions.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;1. Conversation Import&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Import chats or data from external sources (e.g., ChatGPT conversations or documents).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Examples:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import a ChatGPT conversation and summarize key points&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;2. Structured Prompting&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Guided Chat / Structured Prompting (AIM) in Claude AI is a simple way to interact with the AI using a clear structure instead of random prompts, which helps users get more useful results.&lt;/p&gt;

&lt;p&gt;Action → What you want the AI to do&lt;br&gt;
Intent → Why you need it (context or goal)&lt;br&gt;
Method → How the output should be structured or delivered&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Examples:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Action: Plan a weekend family trip. Intent: Relaxing, budget-friendly 2-day getaway. Method: Provide a structured itinerary with activities, food options, time schedule, and estimated costs.&lt;/li&gt;
&lt;/ul&gt;
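
&lt;p&gt;The AIM structure can be sketched as a small helper that assembles the three parts into a single prompt. This is purely illustrative; the aim_prompt function is an invented name for this sketch, not a Claude feature:&lt;/p&gt;

```python
def aim_prompt(action, intent, method):
    """Compose a structured AIM (Action, Intent, Method) prompt string."""
    return (
        f"Action: {action}\n"
        f"Intent: {intent}\n"
        f"Method: {method}"
    )

prompt = aim_prompt(
    action="Plan a weekend family trip",
    intent="Relaxing, budget-friendly 2-day getaway",
    method="Structured itinerary with activities, food options, a time schedule, and estimated costs",
)
print(prompt)
```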

&lt;p&gt;&lt;u&gt;&lt;strong&gt;3. Skills&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Instead of asking random questions, you shape Claude into a role like:&lt;br&gt;
tutor, planner, coach, assistant, or writer&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Examples:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Claude as Fitness Coach &lt;/p&gt;

&lt;p&gt;Skill customization:&lt;br&gt;
“Act as a beginner fitness coach. Create safe, simple home workout plans with no equipment and focus on consistency.”&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Daily workout routines&lt;/li&gt;
&lt;li&gt;Weight loss or general fitness plans&lt;/li&gt;
&lt;li&gt;Adjusting intensity over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;4. Connectors&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Integrate Claude with external tools, APIs, or data sources.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Examples:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to Gmail to summarize inbox, draft replies or send emails&lt;/li&gt;
&lt;li&gt;Connect to Google Drive to read and summarize documents&lt;/li&gt;
&lt;li&gt;Pull data from spreadsheets and generate insights or summaries&lt;/li&gt;
&lt;li&gt;Integrate with calendars to help plan schedules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;5. Projects&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Projects let you create separate workspaces for different tasks, each with its own chats and information. Inside a project, you can upload files, add context, and have focused conversations with Claude AI.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Examples:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Organize tasks, files, and conversations into projects&lt;br&gt;
Personal Finance Workspace&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Budget planning notes&lt;/li&gt;
&lt;li&gt;Expense summaries&lt;/li&gt;
&lt;li&gt;Investment research&lt;/li&gt;
&lt;li&gt;Tax-related questions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;6. Artifacts (Generated Outputs)&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Artifacts turn a conversation into a finished output you can use directly, not just an explanation. Claude can generate structured outputs such as code, documents, and UI components.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Examples:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build a simple website or landing page (HTML/CSS or React structure)&lt;/li&gt;
&lt;li&gt;Generate a project report or business document&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;7. Collaboration Mode (Co-work)&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Claude Cowork is a desktop tool that connects Claude directly to your computer: files, apps, and workflows. It can handle files, folders, and desktop automation tasks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Examples:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Working together on documents, plans, and structured outputs&lt;/li&gt;
&lt;li&gt;Managing files, folders, and project-related content&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>LLM vs RAG</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Thu, 16 Apr 2026 02:38:57 +0000</pubDate>
      <link>https://forem.com/naimulkarim/llm-vs-rag-212l</link>
      <guid>https://forem.com/naimulkarim/llm-vs-rag-212l</guid>
      <description>&lt;p&gt;&lt;strong&gt;LLM (Large Language Model)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An LLM like GPT-4 or Claude is:&lt;/p&gt;

&lt;p&gt;A model pretrained on massive amounts of text data&lt;br&gt;
It generates answers based on what it learned during training&lt;br&gt;
It doesn’t know your private or real-time data unless you provide it in the prompt&lt;/p&gt;

&lt;p&gt;Limitations:&lt;/p&gt;

&lt;p&gt;Can hallucinate&lt;br&gt;
Knowledge is static (cutoff-based)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RAG (Retrieval-Augmented Generation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RAG is a system design pattern, not a model.&lt;/p&gt;

&lt;p&gt;It works like this:&lt;/p&gt;

&lt;p&gt;User asks a question&lt;br&gt;
System retrieves relevant data (docs, DB, APIs, vector search)&lt;br&gt;
That data is injected into the prompt&lt;br&gt;
LLM generates an answer using that context&lt;/p&gt;

&lt;p&gt;An LLM can be seen as the generator&lt;br&gt;
RAG is the combination of a retriever and an LLM&lt;/p&gt;
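
&lt;p&gt;The retrieve-then-generate flow can be sketched in a few lines of Python. This toy version ranks documents by word overlap in place of real vector search, and build_prompt stands in for the context-injection step; all function names here are invented for illustration:&lt;/p&gt;

```python
def words(text):
    """Lowercase and split, stripping trailing punctuation from each token."""
    return set(token.strip(".,?") for token in text.lower().split())

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = words(query)
    ranked = sorted(documents, key=lambda d: len(q.intersection(words(d))), reverse=True)
    return ranked[:top_k]

def build_prompt(query, context_docs):
    """Inject the retrieved context into the prompt given to the LLM."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

docs = [
    "The current base interest rate is 4.25 percent.",
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
]
query = "What is the current interest rate?"
prompt = build_prompt(query, retrieve(query, docs, top_k=1))
print(prompt)
```

&lt;p&gt;A production system swaps the overlap score for embeddings plus a vector index, but the pipeline shape is the same.&lt;/p&gt;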

&lt;p&gt;&lt;strong&gt;Core Differences&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;LLM&lt;/th&gt;
&lt;th&gt;RAG&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Type&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Model&lt;/td&gt;
&lt;td&gt;Architecture / Pattern&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Knowledge Source&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Training data&lt;/td&gt;
&lt;td&gt;External + Real-time data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Accuracy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Can hallucinate&lt;/td&gt;
&lt;td&gt;More grounded&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Updates&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Requires retraining&lt;/td&gt;
&lt;td&gt;Just update data source&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;General tasks&lt;/td&gt;
&lt;td&gt;Domain-specific, factual Q&amp;amp;A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Without RAG:&lt;/p&gt;

&lt;p&gt;User: “What’s the latest interest rate?”&lt;br&gt;
LLM: Might guess or give outdated info&lt;/p&gt;

&lt;p&gt;With RAG:&lt;/p&gt;

&lt;p&gt;System fetches latest rates from DB/API&lt;br&gt;
LLM answers using that data&lt;br&gt;
Accurate and up-to-date&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Usage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use LLM alone when:&lt;/p&gt;

&lt;p&gt;Creative writing&lt;br&gt;
General coding help&lt;br&gt;
Brainstorming&lt;/p&gt;

&lt;p&gt;Use RAG when:&lt;/p&gt;

&lt;p&gt;You need company data / internal docs&lt;br&gt;
Accuracy matters (finance, legal, healthcare)&lt;br&gt;
Data changes frequently&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>rag</category>
    </item>
    <item>
      <title>AI for Developers</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Wed, 15 Apr 2026 04:47:10 +0000</pubDate>
      <link>https://forem.com/naimulkarim/ai-for-developers-3co9</link>
      <guid>https://forem.com/naimulkarim/ai-for-developers-3co9</guid>
      <description>&lt;p&gt;AI has become a core part of software development, reshaping how developers write code, build systems, and deliver applications through increasingly advanced tools and automation.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Agentic Workflows and Orchestration&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;n8n&lt;/strong&gt; helps automate workflows by connecting APIs, services, and AI models into end-to-end processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Chat and Assistant Tools&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude&lt;/strong&gt; helps with reasoning, long-context understanding, and code-related discussions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT&lt;/strong&gt; supports developers with coding, debugging, and general problem-solving.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini&lt;/strong&gt; provides AI assistance across research and multimodal tasks within Google’s ecosystem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;IDE&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Copilot&lt;/strong&gt; offers real-time code suggestions inside the editor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cursor&lt;/strong&gt; is an AI-native IDE that enables coding through natural language.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Coding Agents&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cursor Agents&lt;/strong&gt; execute multi-step development tasks autonomously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Copilot Agent&lt;/strong&gt; performs coding tasks beyond simple suggestions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Codex&lt;/strong&gt; powers AI-based code generation models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; supports advanced coding and problem-solving tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenCode&lt;/strong&gt; runs AI coding agents directly in the terminal.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Full-App Builders&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Replit&lt;/strong&gt; lets users build and deploy apps quickly in the cloud with AI help.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;V0&lt;/strong&gt; generates frontend components from text prompts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;AI Developer Libraries&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LangChain&lt;/strong&gt; helps build applications powered by large language models.&lt;br&gt;
&lt;strong&gt;LangGraph&lt;/strong&gt; structures AI workflows using graph-based design.&lt;br&gt;
&lt;strong&gt;LlamaIndex&lt;/strong&gt; connects AI models to external data sources.&lt;br&gt;
&lt;strong&gt;Haystack&lt;/strong&gt; builds search and QA systems using LLMs.&lt;/p&gt;
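
&lt;p&gt;To see what these libraries formalize, here is a minimal stand-in for the prompt-template-plus-chain pattern in plain Python. PromptTemplate, fake_llm, and chain are invented names for this sketch, not LangChain's actual API:&lt;/p&gt;

```python
class PromptTemplate:
    """Minimal stand-in for the prompt-template idea that LLM libraries formalize."""
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)

def fake_llm(prompt):
    """Placeholder model: echoes a canned reply (a real chain would call an LLM API)."""
    return f"[model reply to: {prompt}]"

def chain(template, llm, **inputs):
    """Pipe the filled template into the model, like a two-step chain."""
    return llm(template.format(**inputs))

t = PromptTemplate("Summarize {topic} in one sentence.")
print(chain(t, fake_llm, topic="retrieval-augmented generation"))
```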

</description>
    </item>
    <item>
      <title>Problem Solving with ML: Domains That Actually Matter</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Tue, 06 Jan 2026 14:41:58 +0000</pubDate>
      <link>https://forem.com/naimulkarim/problem-solving-with-ml-domains-that-actually-matter-202c</link>
      <guid>https://forem.com/naimulkarim/problem-solving-with-ml-domains-that-actually-matter-202c</guid>
      <description>&lt;p&gt;When people start learning machine learning, they often focus on different things. But in practice, the most important question is much simpler:&lt;/p&gt;

&lt;p&gt;What kind of work do you want to do?&lt;/p&gt;

&lt;p&gt;What problems do you want to solve using ML?&lt;/p&gt;

&lt;p&gt;When you look at machine learning from this perspective, the entire field can be cleanly divided into five main work domains.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fft01nqcrcynmardwl9gy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fft01nqcrcynmardwl9gy.jpg" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Predictive Modeling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This domain works primarily with structured, tabular, or numerical data. The objective is to predict an outcome based on historical patterns.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Helping businesses detect and prevent fraud and avoid financial losses&lt;/li&gt;
&lt;li&gt;Analyzing effectiveness of past promotional activity&lt;/li&gt;
&lt;li&gt;Forecasting patient admissions and readmissions&lt;/li&gt;
&lt;li&gt;Predicting energy consumption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regression, classification, and time-series forecasting all belong here. This is the oldest and most battle-tested area of machine learning, and where most practitioners build their foundation.&lt;/p&gt;
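
&lt;p&gt;The forecasting side of this domain can be illustrated with a tiny moving-average predictor in plain Python. The numbers are invented and purely illustrative:&lt;/p&gt;

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical monthly energy consumption readings
usage = [120, 130, 125, 140, 135]
print(moving_average_forecast(usage))  # mean of the last three readings
```

&lt;p&gt;Real predictive models (gradient-boosted trees, ARIMA, etc.) replace this rule with learned patterns, but the task shape is the same: past observations in, future estimate out.&lt;/p&gt;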

&lt;p&gt;&lt;strong&gt;2. Perception AI (Vision / Audio)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this domain, machines learn to interpret visual and auditory signals from the real world.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Counting people in public spaces using camera feeds&lt;/li&gt;
&lt;li&gt;Detecting cracks or defects in infrastructure from drone images&lt;/li&gt;
&lt;li&gt;Recognizing spoken commands in voice-controlled systems&lt;/li&gt;
&lt;li&gt;Identifying equipment malfunctions from vibration or sound patterns&lt;/li&gt;
&lt;li&gt;Identifying pedestrians and traffic signs in a self-driving car&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Models such as CNNs and Vision Transformers (ViT) are heavily used. Perception AI enables machines to convert raw sensory data into meaningful signals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Language Intelligence (NLP)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This domain focuses on understanding and processing written or spoken language.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extracting key clauses from contracts and legal documents&lt;/li&gt;
&lt;li&gt;Automatically generating meeting notes from transcripts&lt;/li&gt;
&lt;li&gt;Detecting abusive content in online communities&lt;/li&gt;
&lt;li&gt;Grouping news articles by topic at scale&lt;/li&gt;
&lt;li&gt;Speech recognition, spell check, autocomplete, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Transformer-based models like BERT and GPT-style architectures power this domain. The aim is to help machines understand nuance, context, and intent in language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Generative AI (Multimodal)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This domain is about creation, not just prediction. Models here can generate entirely new outputs across multiple data types.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a short story in the style of a particular author&lt;/li&gt;
&lt;li&gt;Generating a realistic image of a person who doesn't exist&lt;/li&gt;
&lt;li&gt;Composing a symphony in the style of a famous composer&lt;/li&gt;
&lt;li&gt;Generating test cases or documentation from source code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Large Language Models (LLMs), diffusion models, and multimodal systems define this space, where language, vision, and reasoning intersect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Decision &amp;amp; Control Systems (Reinforcement Learning)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This domain is about learning which action to take in an environment to maximize long-term reward.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic pricing and bidding systems&lt;/li&gt;
&lt;li&gt;Recommendation systems with long-term user engagement&lt;/li&gt;
&lt;li&gt;Robotics control and motion planning&lt;/li&gt;
&lt;li&gt;Resource allocation and scheduling&lt;/li&gt;
&lt;/ul&gt;
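
&lt;p&gt;The reward-maximizing loop can be illustrated with a tiny epsilon-greedy bandit, the simplest reinforcement-learning setting. The function names and reward numbers below are invented for this sketch:&lt;/p&gt;

```python
import random

def epsilon_greedy_choice(estimates, epsilon, rng):
    """With probability epsilon explore a random arm, otherwise exploit the best estimate."""
    explore = rng.choices([True, False], weights=[epsilon, 1 - epsilon])[0]
    if explore:
        return rng.randrange(len(estimates))
    return estimates.index(max(estimates))

def run_bandit(true_rewards, steps=2000, epsilon=0.1, seed=0):
    """Learn per-arm reward estimates from noisy pulls via incremental averaging."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        arm = epsilon_greedy_choice(estimates, epsilon, rng)
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy observed reward
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = run_bandit([0.2, 0.5, 0.8])
print("learned estimates:", [round(e, 2) for e in est])
```

&lt;p&gt;Pricing, bidding, and recommendation systems scale this same explore/exploit idea up to far richer state and action spaces.&lt;/p&gt;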

&lt;p&gt;&lt;strong&gt;Agentic AI: The Extension&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agentic AI is an extension, not a standalone ML domain. It builds on Generative AI to enable systems that can reason, plan, and act.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AI assistant that schedules meetings by checking calendars and availability&lt;/li&gt;
&lt;li&gt;A system that monitors metrics, detects anomalies, and triggers alerts&lt;/li&gt;
&lt;li&gt;An AI agent that runs experiments, compares results, and selects the best approach&lt;/li&gt;
&lt;li&gt;Multiple agents collaborating to complete a complex workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, Agentic AI represents AI systems that don’t just respond but take action.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>datascience</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Unit Testing: How to Mock Public Methods in C# with NSubstitute</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Tue, 06 Jan 2026 02:30:31 +0000</pubDate>
      <link>https://forem.com/naimulkarim/how-to-mock-public-methods-in-c-with-nsubstitute-2629</link>
      <guid>https://forem.com/naimulkarim/how-to-mock-public-methods-in-c-with-nsubstitute-2629</guid>
      <description>&lt;p&gt;With NSubstitute, a public method can only be mocked if it is virtual (or abstract).&lt;br&gt;
If the method is public but not virtual, NSubstitute cannot mock it.&lt;/p&gt;

&lt;p&gt;The options that enable mocking it are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Make the method virtual&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the minimal change and is fully supported by NSubstitute.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ProductService
{
    public virtual int GetPrice()
    {
        return 1;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var service = Substitute.For&amp;lt;MyService&amp;gt;();
service.GetValue().Returns(42);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Extract an interface (cleanest design)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Refactor for dependency injection: move the logic into a dependency (e.g., a helper/service class behind an interface) that can be mocked. This is usually the best option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface IProductService
{
    int GetPrice();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inject the interface and mock it instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var service = Substitute.For&amp;lt;IProductService&amp;gt;();
service.GetPrice().Returns(42);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Use a wrapper / adapter (for third-party or legacy code)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface IProductServiceWrapper
{
    int GetPrice();
}

public class ProductServiceWrapper : IProductServiceWrapper
{
    private readonly ProductService _service;

    public ProductServiceWrapper(ProductService service) =&amp;gt; _service = service;

    public int GetPrice() =&amp;gt; _service.GetPrice();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then mock IProductServiceWrapper in your tests instead of the concrete class.&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
      <category>testing</category>
    </item>
    <item>
      <title>Why is offset pagination slow?</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Mon, 05 Jan 2026 00:42:15 +0000</pubDate>
      <link>https://forem.com/naimulkarim/why-is-offset-pagination-slow-43c</link>
      <guid>https://forem.com/naimulkarim/why-is-offset-pagination-slow-43c</guid>
      <description>&lt;p&gt;Offset pagination is slow mainly because the database still has to process (read and skip) all the rows before the offset, even though it doesn’t return them.&lt;br&gt;
Since you’re a full-stack engineer working with real production data (likely large tables in fintech systems), this shows up very clearly at scale.&lt;br&gt;
What happens under the hood&lt;br&gt;
A typical offset query looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sql
SELECT *
FROM transactions
ORDER BY created_at
LIMIT 50 OFFSET 100000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step-by-step execution&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The database sorts rows by created_at&lt;/li&gt;
&lt;li&gt;It reads the first 100,050 rows&lt;/li&gt;
&lt;li&gt;It throws away the first 100,000&lt;/li&gt;
&lt;li&gt;It returns only 50 rows&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The work done grows linearly with the offset size.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it gets slower as page number increases&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rows are scanned, not jumped&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Databases cannot jump directly to row 100,000. They must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traverse the index or table&lt;/li&gt;
&lt;li&gt;Count rows until the offset is reached&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even with indexes, the engine still walks through index entries.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Sorting cost increases&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the ORDER BY column:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is not indexed, the DB must sort everything first&lt;/li&gt;
&lt;li&gt;Is indexed, the index traversal still scales with the offset&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Large offsets mean more CPU and memory.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Disk I/O increases&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For large datasets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Earlier rows may no longer be in cache&lt;/li&gt;
&lt;li&gt;Disk reads increase&lt;/li&gt;
&lt;li&gt;Latency spikes unpredictably&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially painful in high-traffic fintech APIs.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Concurrency makes it worse&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Multiple users requesting page 2,000, page 5,000, and page 10,000 at the same time means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each query repeats the same expensive skip work&lt;/li&gt;
&lt;li&gt;No reuse of previous results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Better alternative: Keyset (Cursor) Pagination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of skipping rows, you continue from the last seen value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Sql
SELECT *
FROM transactions
WHERE created_at &amp;gt; :last_seen
ORDER BY created_at
LIMIT 50;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this is fast&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses index directly&lt;/li&gt;
&lt;li&gt;No skipping&lt;/li&gt;
&lt;li&gt;Near-constant cost per page, regardless of depth&lt;/li&gt;
&lt;/ul&gt;
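
&lt;p&gt;The continuation logic can be sketched in Python over an in-memory sorted list, where bisect plays the role of the index seek. This only illustrates the access pattern, not real database cost; the function names are invented:&lt;/p&gt;

```python
import bisect

# Illustrative in-memory "table": each value stands for a row's created_at key
rows = list(range(1, 100001))

def offset_page(rows, offset, limit):
    """OFFSET-style: skip `offset` rows, then take `limit` (a real engine must scan the skipped rows)."""
    return rows[offset:offset + limit]

def keyset_page(rows, last_seen, limit):
    """Keyset-style: seek directly past the last seen key, then take `limit` rows."""
    start = bisect.bisect_right(rows, last_seen)
    return rows[start:start + limit]

# Both return the same page; only the access pattern differs
assert offset_page(rows, 99950, 50) == keyset_page(rows, last_seen=99950, limit=50)
print(keyset_page(rows, last_seen=99950, limit=50)[0])
```

&lt;p&gt;The client simply remembers the last created_at value from the previous page and passes it back as the cursor.&lt;/p&gt;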

&lt;p&gt;This is the standard approach for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Financial transactions&lt;/li&gt;
&lt;li&gt;Audit logs&lt;/li&gt;
&lt;li&gt;Infinite scroll APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When OFFSET is acceptable&lt;/strong&gt;&lt;br&gt;
OFFSET pagination is okay when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tables are small&lt;/li&gt;
&lt;li&gt;Page depth is limited (e.g., admin dashboards)&lt;/li&gt;
&lt;li&gt;You need random page access (page 3, page 7)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If users can scroll deeply or the data grows without bound, don’t use OFFSET.&lt;/p&gt;

</description>
      <category>backend</category>
      <category>database</category>
      <category>performance</category>
      <category>sql</category>
    </item>
    <item>
      <title>Software Defects Prediction using Machine Learning</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Wed, 31 Dec 2025 00:22:01 +0000</pubDate>
      <link>https://forem.com/naimulkarim/software-defects-prediction-using-machine-learning-5d1d</link>
      <guid>https://forem.com/naimulkarim/software-defects-prediction-using-machine-learning-5d1d</guid>
      <description>&lt;p&gt;&lt;strong&gt;Step 1: Data Loading and Initial Analysis&lt;/strong&gt;&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import StandardScaler
import warnings
warnings.filterwarnings('ignore')

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save &amp;amp; Run All" 
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load the Defects Dataset
train = pd.read_csv('/kaggle/input/train.csv')
test = pd.read_csv('/kaggle/input/test.csv')

# Display first few rows
print("First 5 rows of the training dataset:")
print(train.head())
print("First 5 rows of the test dataset:")
print(test.head())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbq890e7twvezsqjl9hns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbq890e7twvezsqjl9hns.png" alt=" " width="513" height="570"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dataset shape
print(f"\nDataset Shape: {train.shape}")
print(f"Number of samples: {train.shape[0]}")
print(f"Number of features: {train.shape[1]}")

print(f"\nDataset Shape: {test.shape}")
print(f"Number of samples: {test.shape[0]}")
print(f"Number of features: {test.shape[1]}")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk03w3cbwwjstvmttyo4.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk03w3cbwwjstvmttyo4.JPG" alt=" " width="523" height="151"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Column names and data types
print("\nColumn Information for training dataset:")
print(train.info())

print("\nColumn Information for test dataset:")
print(test.info())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Statistical summary
print("Statistical Summary:")
print(train.describe())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfzwy9p6gikajokzw6bb.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfzwy9p6gikajokzw6bb.JPG" alt=" " width="558" height="719"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Get the statistical information about the training set
import plotly.express as px
train.describe().T\
    .style.bar(subset=['mean'], color=px.colors.qualitative.G10[2])\
    .background_gradient(subset=['std'], cmap='Blues')\
    .background_gradient(subset=['50%'], cmap='Reds')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi71xw8q9op0vsk523k8n.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi71xw8q9op0vsk523k8n.JPG" alt=" " width="726" height="544"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check for missing values
print("=" * 50)
print("Missing Values Analysis")
print("=" * 50)
missing = train.isnull().sum()
missing_pct = (train.isnull().sum() / len(train)) * 100
missing_train = pd.DataFrame({
    'Missing_Count': missing,
    'Missing_Percentage': missing_pct
})
print(missing_train[missing_train['Missing_Count'] &amp;gt; 0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj37ik3c1wvzejke3ekw.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj37ik3c1wvzejke3ekw.JPG" alt=" " width="507" height="218"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check for duplicates
print("=" * 50)
print("Duplicate Analysis")
print("=" * 50)
duplicates = train.duplicated().sum()
print(f"Number of duplicate rows: {duplicates}")
print(f"\nDuplicate rows:")
train[train.duplicated(keep=False)].sort_values('id')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jmr8wrdlmt7m9g2s0mq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jmr8wrdlmt7m9g2s0mq.JPG" alt=" " width="800" height="132"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check target variable distribution
print("\nTarget Variable Distribution:")
print(train['defects'].value_counts())
print(f"\nClass Balance:")
print(train['defects'].value_counts(normalize=True))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevu1e9ui9032x8wrku10.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevu1e9ui9032x8wrku10.JPG" alt=" " width="555" height="264"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Target value counts as a dictionary
defects_count = dict(train['defects'].value_counts())
defects_count
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofep8eehvj6s72frinsq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofep8eehvj6s72frinsq.JPG" alt=" " width="456" height="44"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check for rows with missing (NaN) values
print(train[train.isna().any(axis=1)])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgcdonswag14a8p1ivg9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgcdonswag14a8p1ivg9.JPG" alt=" " width="535" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: EDA&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Visualize the data distribution
fig , axis = plt.subplots(figsize = (25,35))
fig.suptitle('Data Distribution', ha = 'center', fontsize = 20, fontweight = 'bold',y=0.90 )
for idx,col in enumerate(list(train.columns)[1:-1]):
  plt.subplot(7,3,idx+1)
  ax = sns.histplot(train[col], color = 'blue' ,stat='density' , kde = False,bins = 50)
  sns.kdeplot(train[col] , color  = 'orange' , ax = ax)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpefh61su3deil2raqquv.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpefh61su3deil2raqquv.JPG" alt=" " width="508" height="707"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# adjust label names
labels = ['not_defects', 'defects']
# visualize data target
plt.title('Defects Values')
sns.barplot(x = labels ,y = list(defects_count.values()), width = 0.3 , palette = ['blue' , 'orange'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp5enxeb1zxjjdvn292h.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp5enxeb1zxjjdvn292h.JPG" alt=" " width="557" height="429"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# visualize correlation
corr = train.corr()
mask = np.triu(np.ones_like(corr))
plt.figure(figsize = (20,10))
plt.title('Correlation Matrix')
sns.heatmap(data = corr , mask = mask , annot = True , cmap = 'coolwarm',annot_kws={"color": "black", "size":9} )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
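&lt;p&gt;The mask keeps the heatmap readable: np.triu marks the upper triangle (including the diagonal), and sns.heatmap hides every cell where the mask is nonzero, so each pairwise correlation is drawn once. A minimal sketch of the mask itself:&lt;/p&gt;

```python
import numpy as np

# Mask for a 3x3 correlation matrix: ones on and above the diagonal
mask = np.triu(np.ones((3, 3)))
print(mask)

# Only the strictly-lower-triangle cells remain visible in the heatmap
visible_cells = int((mask == 0).sum())
print(visible_cells)
```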



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fseunf5ks781xm8b7qlae.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fseunf5ks781xm8b7qlae.JPG" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Data Cleaning and Preparation&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Convert the boolean target to a binary encoding
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train['defects'] = le.fit_transform(train['defects'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
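&lt;p&gt;LabelEncoder simply maps the sorted class values to consecutive integers, so the boolean target becomes 0/1. A minimal sketch on toy values:&lt;/p&gt;

```python
from sklearn.preprocessing import LabelEncoder

# Toy boolean target (illustrative values only)
le = LabelEncoder()
encoded = le.fit_transform([False, True, True, False])

print(list(le.classes_))  # classes are sorted: False maps to 0, True to 1
print(list(encoded))
```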





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;train.defects.unique()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7zeyosg3q26jnw56ej6.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7zeyosg3q26jnw56ej6.JPG" alt=" " width="182" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling Missing Values&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Strategy 1: Fill missing values with mean/median
train_cleaned = train.copy()

print("Before filling missing values:")
print(train_cleaned[['i', 'b','lOBlank','uniq_Op','total_Op','branchCount']].isnull().sum())

# Fill numeric missing values with the column mean
for col in ['i', 'b', 'lOBlank', 'uniq_Op', 'total_Op', 'branchCount']:
    train_cleaned[col] = train_cleaned[col].fillna(train_cleaned[col].mean())

print("\nAfter filling missing values:")
print(train_cleaned[['i', 'b','lOBlank','uniq_Op','total_Op','branchCount']].isnull().sum())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
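&lt;p&gt;Mean imputation replaces each NaN with the average of the observed values in that column. A minimal sketch on a made-up series:&lt;/p&gt;

```python
import numpy as np
import pandas as pd

# One missing value; the mean of the observed values is 2.0 (made-up data)
s = pd.Series([1.0, np.nan, 3.0])
filled = s.fillna(s.mean())
print(filled.tolist())
```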



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvgxm762slfkf9jv95tw.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvgxm762slfkf9jv95tw.JPG" alt=" " width="408" height="319"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling Outliers&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Visualize outliers using box plots
plt.figure(figsize=(12,12))
i=1
for col in train.columns:
    plt.subplot(6,6,i)
    train[[col]].boxplot()
    i+=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvktpid2zkmb30pkd0jqu.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvktpid2zkmb30pkd0jqu.JPG" alt=" " width="669" height="430"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Detect outliers using IQR method
def detect_outliers_iqr(data, column):
    Q1 = data[column].quantile(0.25)
    Q3 = data[column].quantile(0.75)
    IQR = Q3 - Q1
    lower_bound = Q1 - 1.5 * IQR
    upper_bound = Q3 + 1.5 * IQR
    outliers = data[(data[column] &amp;lt; lower_bound) | (data[column] &amp;gt; upper_bound)]
    return outliers, lower_bound, upper_bound

# Detect outliers in the 'b' column
outliers_b, lower_b, upper_b = detect_outliers_iqr(train_cleaned, 'b')
print(f"b outliers (outside {lower_b:.2f} - {upper_b:.2f}):")
print(outliers_b[['id', 'b']])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
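&lt;p&gt;The IQR rule used above can be verified by hand on a toy series (values invented so the quartiles are easy to compute; the pandas .lt/.gt helpers are the element-wise equivalents of the comparison operators):&lt;/p&gt;

```python
import pandas as pd

# Toy values: Q1 = 2, Q3 = 4, so IQR = 2 and the fences are -1 and 7
s = pd.Series([1, 2, 3, 4, 100])

Q1, Q3 = s.quantile(0.25), s.quantile(0.75)
IQR = Q3 - Q1
lower, upper = Q1 - 1.5 * IQR, Q3 + 1.5 * IQR

# Keep only the values outside the fences
outliers = s[s.lt(lower) | s.gt(upper)]
print(lower, upper, outliers.tolist())
```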



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs4pqi6zgc6ukdumavvh.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs4pqi6zgc6ukdumavvh.JPG" alt=" " width="341" height="289"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List of numerical columns
num_cols=[col for col in train.columns if (train[col].dtype in ["int64","float64"]) &amp;amp; (train[col].nunique()&amp;gt;50)]
num_cols
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdh6qv8y07fk7donn5dp.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdh6qv8y07fk7donn5dp.JPG" alt=" " width="433" height="403"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Method to handle outliers
def handle_outliers(
    train,
    columns,
    factor=1.5,
    method="clip"  # "clip" or "remove"
):
    """
    Handle outliers using the IQR method.

    Parameters:
        train (pd.DataFrame): Input DataFrame
        columns (list): List of numeric columns
        factor (float): IQR multiplier (default 1.5)
        method (str): "clip" to cap values, "remove" to drop rows

    Returns:
        pd.DataFrame
    """
    train = train.copy()

    for col in columns:
        Q1 = train[col].quantile(0.25)
        Q3 = train[col].quantile(0.75)
        IQR = Q3 - Q1

        lower = Q1 - factor * IQR
        upper = Q3 + factor * IQR

        if method == "clip":
            train[col] = train[col].clip(lower, upper)
        elif method == "remove":
            train = train[(train[col] &amp;gt;= lower) &amp;amp; (train[col] &amp;lt;= upper)]
        else:
            raise ValueError("method must be 'clip' or 'remove'")

    return train
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cap outliers 
print(f"Before outlier cleaning: {train_cleaned.shape}")
train_cleaned = handle_outliers(train_cleaned, num_cols, method="clip")
print(f"After outlier cleaning: {train_cleaned.shape}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmtkks1tgx783ak0d24i.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmtkks1tgx783ak0d24i.JPG" alt=" " width="325" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4 : Model Building and Prediction&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Separate features and target
X = train_cleaned.drop("defects", axis=1)
y = train_cleaned["defects"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42, stratify=y)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
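&lt;p&gt;Because the defect classes are imbalanced, stratify=y matters: it keeps the class ratio identical in the train and test pieces. A minimal sketch on made-up labels:&lt;/p&gt;

```python
from collections import Counter

import numpy as np
from sklearn.model_selection import train_test_split

# Made-up imbalanced labels: 80 negatives, 20 positives
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, random_state=42, stratify=y
)

# Both pieces keep the original 80/20 ratio
print(Counter(y_tr), Counter(y_te))
```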





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize Random Forest Classifier
rf_model = RandomForestClassifier(
    n_estimators=50,
    criterion='gini',
    max_depth=20,
    min_samples_split=10,
    min_samples_leaf=5,
    max_features='sqrt',
    bootstrap=True,
    random_state=42,
    n_jobs=-1
)
# Train the model
rf_model.fit(X_train, y_train)
print("Random Forest model trained successfully!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Display model parameters
print("\nModel Parameters:")
print(rf_model.get_params())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0z2h0biiu9lr0hm78lq5.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0z2h0biiu9lr0hm78lq5.JPG" alt=" " width="233" height="32"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model Evaluation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Make predictions with the Random Forest model
y_train_pred = rf_model.predict(X_train)
y_test_pred = rf_model.predict(X_test)

# Calculate accuracy scores
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)

print("=" * 50)
print("Accuracy Scores")
print("=" * 50)
print(f"Training Accuracy: {train_accuracy:.4f} ({train_accuracy*100:.2f}%)")
print(f"Testing Accuracy: {test_accuracy:.4f} ({test_accuracy*100:.2f}%)")
print("=" * 50)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
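&lt;p&gt;accuracy_score is simply the fraction of predictions that match the true labels. A minimal sketch on toy labels:&lt;/p&gt;

```python
from sklearn.metrics import accuracy_score

# Toy labels: 3 of the 4 predictions are correct (illustrative only)
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

acc = accuracy_score(y_true, y_pred)
print(f"Accuracy: {acc:.4f} ({acc*100:.2f}%)")
```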



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qai6nx92lbxvvabr69q.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qai6nx92lbxvvabr69q.JPG" alt=" " width="449" height="115"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize Decision Tree Classifier
dt_model = DecisionTreeClassifier(
    criterion='gini',
    max_depth=5,
    min_samples_split=20,
    min_samples_leaf=10,
    random_state=42
)

# Train the model
dt_model.fit(X_train, y_train)

print("Decision Tree model trained successfully!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3oydpkqbnd0ixpox8gk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3oydpkqbnd0ixpox8gk.JPG" alt=" " width="342" height="26"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.tree import export_text

tree_rules = export_text(dt_model)
print(tree_rules)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Make predictions
y_train_pred = dt_model.predict(X_train)
y_test_pred = dt_model.predict(X_test)

print("Predictions completed!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fritzadhih75va7ke9t2n.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fritzadhih75va7ke9t2n.JPG" alt=" " width="234" height="34"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Calculate accuracy scores
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)

print("=" * 50)
print("Accuracy Scores")
print("=" * 50)
print(f"Training Accuracy: {train_accuracy:.4f} ({train_accuracy*100:.2f}%)")
print(f"Testing Accuracy: {test_accuracy:.4f} ({test_accuracy*100:.2f}%)")
print("=" * 50)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcogot9e1n2usvck6hb2y.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcogot9e1n2usvck6hb2y.JPG" alt=" " width="471" height="124"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Linear Regression baseline, evaluated with mean squared error
def lr(X_train, X_test, y_train, y_test):
    model = LinearRegression()
    model.fit(X_train, y_train.values.ravel())
    y_pred = model.predict(X_test)
    error = mean_squared_error(y_test, y_pred)
    print(f"MSE: {error}")
    return model
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Submission&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Submission: predict on the test set (a fresh name avoids clobbering X_test from evaluation)
X_submit = test[X_train.columns]
submission = pd.DataFrame({
    "id": test["id"],
    "defects": rf_model.predict(X_submit)
})
submission.to_csv("submission.csv", index=False)
submission
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1yn60596vgi0yhylszu.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1yn60596vgi0yhylszu.JPG" alt=" " width="207" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>machinelearning</category>
      <category>python</category>
      <category>datascience</category>
    </item>
    <item>
      <title>How to control the log level of application</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Wed, 22 Oct 2025 22:04:06 +0000</pubDate>
      <link>https://forem.com/naimulkarim/how-to-control-the-log-level-of-application-139g</link>
      <guid>https://forem.com/naimulkarim/how-to-control-the-log-level-of-application-139g</guid>
      <description>&lt;p&gt;ASP.NET (especially ASP.NET Core) supports logging via the built-in Microsoft.Extensions.Logging framework. &lt;/p&gt;

&lt;p&gt;The logging verbosity is categorized into the following levels:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Trace: Most detailed. Includes all diagnostic information. Useful during development or deep debugging.&lt;/li&gt;
&lt;li&gt;Debug: Less detailed than Trace. Used for debugging during development. Not typically enabled in production.&lt;/li&gt;
&lt;li&gt;Information: General flow of the application (e.g., startup, shutdown, user actions). Useful for understanding what the app is doing.&lt;/li&gt;
&lt;li&gt;Warning: Something unexpected or potentially problematic happened, but the app can continue running.&lt;/li&gt;
&lt;li&gt;Error: A failure occurred during the current operation, but the app continues.&lt;/li&gt;
&lt;li&gt;Critical: A fatal error or application crash. The application may be unable to continue running.&lt;/li&gt;
&lt;li&gt;None: Logging is completely disabled.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;1. Using appsettings.json&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Logging section of appsettings.json configures the logging behavior of your application. The default configuration is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above configuration you can control the default log level and the log level of specific namespaces. &lt;br&gt;
The default log level is applied to all namespaces that are not explicitly configured. &lt;br&gt;
Here the default log level is Information, while the Microsoft.AspNetCore namespace is set to Warning. So for loggers in that namespace, logger.LogWarning messages are written but logger.LogInformation messages are filtered out; loggers in other namespaces still write Information and above.&lt;/p&gt;

&lt;p&gt;Suppose we have a class like this :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace ProductsDomain;

public class HomeController : Controller
{
   private readonly ILogger&amp;lt;HomeController&amp;gt; _logger;

   public HomeController(ILogger&amp;lt;HomeController&amp;gt; logger)
   {
     _logger = logger;
   }

   public IActionResult Index()
   {
    _logger.LogInformation("Index page loaded.");
    _logger.LogWarning("This is a warning.");
    _logger.LogError("An error occurred.");
    return View();
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Normally, if you call this controller method, all three messages are logged, because the default log level is Information. &lt;br&gt;
But if you raise the LogLevel for this class inside the appsettings.json file, the lower-severity messages are filtered out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning",
      "ProductsDomain.HomeController": "Error"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if the class name, together with its namespace, matches an entry configured inside the appsettings.json file, that entry overrides the default log level. The logger always applies the most specific log level that matches.&lt;/p&gt;
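&lt;p&gt;For example, with the following (hypothetical) configuration, a logger created for ProductsDomain.HomeController uses Error, any other logger under ProductsDomain uses Warning, and everything else falls back to Information:&lt;/p&gt;

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "ProductsDomain": "Warning",
      "ProductsDomain.HomeController": "Error"
    }
  }
}
```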

</description>
    </item>
    <item>
      <title>Steps to Transform ASP.NET Core API into AWS Lambda Functions</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Wed, 01 Oct 2025 03:09:16 +0000</pubDate>
      <link>https://forem.com/naimulkarim/steps-to-transform-aspnet-core-api-into-aws-lambda-functions-7eb</link>
      <guid>https://forem.com/naimulkarim/steps-to-transform-aspnet-core-api-into-aws-lambda-functions-7eb</guid>
      <description>&lt;p&gt;The beauty of hosting an ASP.NET Core API behind a Lambda function is that you can write an ASP.NET Core API using the skills you already have and AWS's logic will provide a bridge to run each controller same as existing API Controllers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnqym5x1081ryhphjjm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnqym5x1081ryhphjjm6.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The steps below will guide you through transforming an ASP.NET Core API into a serverless application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install the AWS Toolkit extension for Visual Studio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ser3s1cv5d8d01u7g89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ser3s1cv5d8d01u7g89.png" alt=" " width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create a new project, selecting the ‘AWS Serverless Application (.NET Core)’ template&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7ez6iuts8eha3qr07v8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7ez6iuts8eha3qr07v8.png" alt=" " width="537" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyloe1nchx1k79syzyp3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyloe1nchx1k79syzyp3j.png" alt=" " width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &amp;lt;AWSProjectType&amp;gt;Lambda&amp;lt;/AWSProjectType&amp;gt; property will be added to the csproj file.&lt;/p&gt;

&lt;p&gt;The NuGet package reference Amazon.Lambda.AspNetCoreServer (5.3.0) will also be added.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Configuring Amazon API Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;serverless.template:&lt;/strong&gt; An AWS CloudFormation Serverless Application Model template file for declaring serverless functions and other AWS resources. It contains one function definition configured to be exposed by API Gateway using proxy integration, so all requests will go to that function. The only thing that needs to be updated is the handler field. The format for the handler field is: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&amp;lt;assembly&amp;gt;::&amp;lt;namespace&amp;gt;.LambdaFunction::FunctionHandlerAsync&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The FunctionHandlerAsync method is inherited from the base class of our LambdaFunction class.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaq1leuwfsmy9w6nd7sq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaq1leuwfsmy9w6nd7sq.png" alt=" " width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Lambda Entry Point&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The LambdaEntryPoint class derives from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction.&lt;/p&gt;

&lt;p&gt;The code in this file bootstraps the ASP.NET Core hosting framework. The Lambda function handler is defined in the base class. When the app runs locally, it starts with the LocalEntryPoint class; when it runs within the Lambda service in the cloud, the LambdaEntryPoint is used instead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gmdxzsxbj54zti8a312.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gmdxzsxbj54zti8a312.png" alt=" " width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the API and service&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy the code from the existing API and service classes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gr06g3e848wbbj0gx6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gr06g3e848wbbj0gx6t.png" alt=" " width="800" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run Locally&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build and run the project from IIS and verify that it works.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6r4krjw4ixcifk35sg6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6r4krjw4ixcifk35sg6k.png" alt=" " width="416" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check the controller's POST and GET responses from Postman.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zi31wbwwkbr0i1bppg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zi31wbwwkbr0i1bppg7.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All good locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying the Web API to AWS Lambda&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an AWS profile from the AWS Explorer window (View -&amp;gt; AWS Explorer) in Visual Studio.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5o8vmgplsrhp7k59s1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5o8vmgplsrhp7k59s1a.png" alt=" " width="781" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Right click your project and select “Publish To AWS Lambda”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter a stack name and create an S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjy1bm2fyl6wzhzpmwxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjy1bm2fyl6wzhzpmwxn.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;
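
&lt;p&gt;If you prefer the command line over the Visual Studio wizard, the same deployment can be done with the Amazon.Lambda.Tools CLI. The stack and bucket names below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet tool install -g Amazon.Lambda.Tools
dotnet lambda deploy-serverless --stack-name my-webapi-stack --s3-bucket my-deployment-bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;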

&lt;p&gt;Publishing to AWS is in progress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ts7xu5xu0nr1ka40az0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ts7xu5xu0nr1ka40az0.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The S3 bucket is created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hreruffr9shuu25a98e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hreruffr9shuu25a98e.png" alt=" " width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Lambda function is deployed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F310pu99u2i9vf77v838k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F310pu99u2i9vf77v838k.png" alt=" " width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CloudWatch monitor shows the function statistics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws4saavivql62xw5kqck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws4saavivql62xw5kqck.png" alt=" " width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the AWS serverless URL from Visual Studio.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sbljet6zt4osc7waeot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sbljet6zt4osc7waeot.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check the API from Postman.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv8d34obf68dfvpv18or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv8d34obf68dfvpv18or.png" alt=" " width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The API is running and returning responses.&lt;/p&gt;

&lt;p&gt;You are done!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>dotnet</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Registering Multiple Implementations of the same Interface using Autofac vs using Built-In Asp .Net Core DI</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Wed, 01 Oct 2025 03:01:06 +0000</pubDate>
      <link>https://forem.com/naimulkarim/registering-multiple-implementations-of-the-same-interface-using-autofac-vs-using-built-in-asp-net-3g27</link>
      <guid>https://forem.com/naimulkarim/registering-multiple-implementations-of-the-same-interface-using-autofac-vs-using-built-in-asp-net-3g27</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F451ktuqpxz5xslcqfifn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F451ktuqpxz5xslcqfifn.png" alt=" " width="720" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Autofac Keyed Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Autofac keyed services, multiple implementations of the same interface can be registered, each under its own key.&lt;/p&gt;

&lt;p&gt;For example, create an enum where each value corresponds to an implementation of the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Public Enum PaymentState { Online, Cash}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The implementations are as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class CardPaymentState : IPaymentState {}
public class CashPaymentState : IPaymentState {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The enum values can then be registered as keys for the implementations below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var builder = new ContainerBuilder()

builder.RegisterType&amp;lt;CardPaymentState&amp;gt;().Keyed&amp;lt;IPaymentState&amp;gt;PaymentState.Online);

builder.RegisterType&amp;lt;CashPaymentState&amp;gt;().Keyed&amp;lt;IPaymentState&amp;gt;(PaymentState.Cash);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They can now be resolved as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var cardPayment = container.ResolveKeyed&amp;lt;IPaymentState&amp;gt;(PaymentState.Online)

var cashPayment = container.ResolveKeyed&amp;lt;IPaymentState&amp;gt;(PaymentState.Cash);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
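
&lt;p&gt;Autofac can also inject keyed implementations into a consumer without referencing the container directly, via its IIndex&amp;lt;TKey, TValue&amp;gt; relationship type. The consumer class below is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Autofac.Features.Indexed;

public class OnlinePaymentConsumer
{
    private readonly IPaymentState paymentState;

    // IIndex&amp;lt;TKey, TValue&amp;gt; is Autofac's built-in lookup for keyed registrations.
    public OnlinePaymentConsumer(IIndex&amp;lt;PaymentState, IPaymentState&amp;gt; states)
    {
        paymentState = states[PaymentState.Online];
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;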



&lt;p&gt;&lt;strong&gt;Using .NET Core Built-In DI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, a shared delegate has to be declared:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public delegate IPaymentService PaymentServiceResolver(string key);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Then, in Startup.cs, register the concrete types and set up a manual mapping from keys to those types.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services.AddTransient&amp;lt;CreditCardService&amp;gt;()
services.AddTransient&amp;lt;CashService&amp;gt;();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services.AddTransient&amp;lt;PaymentServiceResolver&amp;gt;(serviceProvider =&amp;gt; key =
{
    switch (key)
    {
        case "online":
            return serviceProvider.GetService&amp;lt;CreditCardService&amp;gt;();

        case "cash":
            return serviceProvider.GetService&amp;lt;CashService&amp;gt;();

        default:
            throw new KeyNotFoundException();
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it can be used from any class registered with DI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Payment
{
    private readonly IPaymentService  paymentService;
    public Consumer(PaymentServiceResolver paymentServiceResover)
    {
       paymentService= paymentServiceResover("Online");
    }
    public void PayByCard()
    {
       paymentService.ProcessPayment();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>designpatterns</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Simple steps to build a CI/CD pipeline with ASP.Net Core, GitHub Actions, Docker and a Linux server</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Wed, 01 Oct 2025 02:56:59 +0000</pubDate>
      <link>https://forem.com/naimulkarim/simple-steps-to-build-a-cicd-pipeline-with-aspnet-core-github-actions-docker-and-a-linux-server-12gj</link>
      <guid>https://forem.com/naimulkarim/simple-steps-to-build-a-cicd-pipeline-with-aspnet-core-github-actions-docker-and-a-linux-server-12gj</guid>
      <description>&lt;p&gt;&lt;strong&gt;The Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A build runs automatically on each commit to GitHub.&lt;br&gt;
A Docker image is created and pushed to Docker Hub.&lt;br&gt;
Finally, the container is deployed to a Linux server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build the Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a Dockerfile with the following content and put it in the same folder as the .sln file of your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w017jbwppjm3dh4upzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w017jbwppjm3dh4upzo.png" alt=" " width="456" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The docker file contains metadata for two images. First one contains a .net sdk to build the application. The second one only contains the ASP.NET Core runtime and application binaries to be used for production deployment.&lt;/p&gt;
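
&lt;p&gt;A minimal two-stage Dockerfile along those lines might look like the sketch below. The .NET version tags and the project name CicdDemo are assumptions, not taken from the original screenshot:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage: uses the full .NET SDK image to compile and publish the app.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: only the ASP.NET Core runtime and the published binaries.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "CicdDemo.dll"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;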

&lt;p&gt;&lt;strong&gt;Run the docker container locally&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following commands to verify that your Docker container is created and running on your machine.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build . -t cicddemo

docker run -p 5000:80 -it cicddemo
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;This will create a container from the image. Check your webpage from &lt;a href="http://localhost:5000" rel="noopener noreferrer"&gt;http://localhost:5000&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check from Docker Desktop whether the container is running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commit and push to GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the page is loading, commit all changes and push everything to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build the container with GitHub Actions and push it to Docker Hub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. Set secrets for your Docker Hub username and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51p4phtg09vcws5sfsdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51p4phtg09vcws5sfsdq.png" alt=" " width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the directory “.github/workflows” in the root of your git repository. Inside this folder, add a file called “cicddemo.yaml”. My cicddemo.yaml contains the following :&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglziov9nt1qv50o1ztey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglziov9nt1qv50o1ztey.png" alt=" " width="621" height="762"&gt;&lt;/a&gt;&lt;/p&gt;
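
&lt;p&gt;A workflow file along those lines could look like the sketch below. The image name and branch are placeholders; the steps use the standard Docker login and build-and-push actions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CICD

on:
  push:
    branches: [ main ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: yourdockerhubuser/cicddemo:latest
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;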

&lt;p&gt;3. Save this file, commit all changes, and push them to GitHub.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now if you look at your GitHub account, you should see the workflow under the “Actions” tab, and if you click it you will find your jobs running.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsej2qgxpdjk02ac4pd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsej2qgxpdjk02ac4pd1.png" alt=" " width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faanm3rap0cqk535ubq3m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faanm3rap0cqk535ubq3m.png" alt=" " width="614" height="742"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After it completes, log in to your Docker Hub account and you will find the image under your repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra48mvn9hvg7f1nwiihu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra48mvn9hvg7f1nwiihu.png" alt=" " width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are done!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Integrating Datadog in .NET Project using Serilog</title>
      <dc:creator>Naimul Karim</dc:creator>
      <pubDate>Wed, 01 Oct 2025 02:52:50 +0000</pubDate>
      <link>https://forem.com/naimulkarim/integrating-datadog-in-net-project-using-serilog-16ae</link>
      <guid>https://forem.com/naimulkarim/integrating-datadog-in-net-project-using-serilog-16ae</guid>
      <description>&lt;p&gt;In modern cloud-native and microservices architectures, applications are highly distributed, making it difficult to detect failures and performance bottlenecks and monitoring and analytics tools are essential to ensure the performance and reliability of web services and applications. Among these tools, Datadog is a popular choice. Its is a cloud-based monitoring and security platform that provides real-time observability for applications, infrastructure, logs, and security across distributed systems. It integrates with cloud providers, containers, databases, and third-party services, allowing teams to track performance, troubleshoot issues, and ensure system reliability.&lt;/p&gt;

&lt;p&gt;Here’s how it can be set up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Required NuGet Packages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serilog.AspNetCore: Serilog logging for ASP.NET Core&lt;/li&gt;
&lt;li&gt;Serilog.Sinks.Datadog.Logs: a Serilog sink that sends events and logs straight to Datadog. By default the sink sends logs over HTTPS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1 : Sign up for a Datadog account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Also note the region of your Datadog site. In my case it was US5 (us5.datadoghq.com).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 : Get the API Key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Get the API key from the Datadog portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd6pdtcs6ktu3ljqh8o4.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd6pdtcs6ktu3ljqh8o4.JPG" alt=" " width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 : Get the Logging Endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use the site selector dropdown on the right side of the page to see the endpoints supported by your Datadog site (&lt;a href="https://docs.datadoghq.com/logs/log_collection/?tab=host#supported-endpoints" rel="noopener noreferrer"&gt;https://docs.datadoghq.com/logs/log_collection/?tab=host#supported-endpoints&lt;/a&gt;). This endpoint is needed for the URL configuration in appsettings.json.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgmljvq2b21cpmkxkrnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgmljvq2b21cpmkxkrnq.png" alt=" " width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Configure Serilog and Datadog in appsettings.json&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In appsettings.json, add the necessary configuration for Serilog and Datadog. The following setup includes specifying sinks for the console and Datadog, setting minimum log levels, defining service properties, and enriching the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb1w6i2izu4dnr9fd6zd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb1w6i2izu4dnr9fd6zd.png" alt=" " width="626" height="598"&gt;&lt;/a&gt;&lt;/p&gt;
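
&lt;p&gt;A configuration along those lines might look like the sketch below. The API key, service name, and endpoint URL are placeholders, and property names may differ slightly depending on the sink version; the sink is wired up through the standard Serilog “WriteTo” section:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Serilog": {
    "Using": [ "Serilog.Sinks.Console", "Serilog.Sinks.Datadog.Logs" ],
    "MinimumLevel": {
      "Default": "Information",
      "Override": { "Microsoft": "Warning" }
    },
    "WriteTo": [
      { "Name": "Console" },
      {
        "Name": "DatadogLogs",
        "Args": {
          "apiKey": "YOUR_DATADOG_API_KEY",
          "source": "csharp",
          "service": "my-service",
          "configuration": { "url": "https://http-intake.logs.us5.datadoghq.com" }
        }
      }
    ],
    "Enrich": [ "FromLogContext", "WithMachineName" ]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;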

&lt;p&gt;&lt;strong&gt;Step 5: Initialize Serilog in Program.cs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following code reads the configuration from appsettings.json and adds Serilog to the logging providers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp700p6mkuuctnd3w1cid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp700p6mkuuctnd3w1cid.png" alt=" " width="697" height="355"&gt;&lt;/a&gt;&lt;/p&gt;
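
&lt;p&gt;For a minimal .NET 6+ Program.cs, the wiring could look like this sketch, which assumes the Serilog configuration lives in appsettings.json:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Read all Serilog settings (sinks, levels, enrichers) from configuration
// and replace the default logging providers with Serilog.
builder.Host.UseSerilog((context, loggerConfiguration) =&amp;gt;
    loggerConfiguration.ReadFrom.Configuration(context.Configuration));

builder.Services.AddControllers();

var app = builder.Build();

app.MapControllers();
app.Run();
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;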

&lt;p&gt;&lt;strong&gt;Step 6: Use Logging in Controller and Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, create logs by injecting an ILogger into the class where you want to use it, then hit your API controller.&lt;/p&gt;
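
&lt;p&gt;A sketch of such a controller (the WeatherController name and log message are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
[Route("api/[controller]")]
public class WeatherController : ControllerBase
{
    private readonly ILogger&amp;lt;WeatherController&amp;gt; _logger;

    public WeatherController(ILogger&amp;lt;WeatherController&amp;gt; logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // This log event goes to both the console and Datadog sinks.
        _logger.LogInformation("Weather endpoint called at {Time}", DateTime.UtcNow);
        return Ok("ok");
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;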

&lt;p&gt;&lt;strong&gt;Step 7:  View logs in Datadog&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to the Datadog portal, then navigate to Logs -&amp;gt; Explorer to view the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38q1pk4gga5mhdrn3taw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38q1pk4gga5mhdrn3taw.png" alt=" " width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Done!&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>monitoring</category>
      <category>tutorial</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
