<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Muhammad Rizwan</title>
    <description>The latest articles on Forem by Muhammad Rizwan (@muhammad_rizwan_32ec93eee).</description>
    <link>https://forem.com/muhammad_rizwan_32ec93eee</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2752978%2F2d054df4-bf10-40cc-a28b-f148e8571df7.png</url>
      <title>Forem: Muhammad Rizwan</title>
      <link>https://forem.com/muhammad_rizwan_32ec93eee</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/muhammad_rizwan_32ec93eee"/>
    <language>en</language>
    <item>
      <title>MCP, Skills, AI Agents, and New Models: The New Stack for Software Development</title>
      <dc:creator>Muhammad Rizwan</dc:creator>
      <pubDate>Fri, 01 May 2026 14:11:07 +0000</pubDate>
      <link>https://forem.com/muhammad_rizwan_32ec93eee/mcp-skills-ai-agents-and-new-models-the-new-stack-for-software-development-44ba</link>
      <guid>https://forem.com/muhammad_rizwan_32ec93eee/mcp-skills-ai-agents-and-new-models-the-new-stack-for-software-development-44ba</guid>
      <description>&lt;p&gt;Software development is moving from “AI as autocomplete” to “AI as an active teammate.” The shift is being driven by four pieces coming together at once: open integration standards like Model Context Protocol (MCP), reusable instruction bundles such as SKILL.md and AGENTS.md, increasingly capable AI agents, and a new generation of coding-focused models. Together, they are changing how engineers write, review, test, and ship software. &lt;/p&gt;

&lt;h2&gt;
  
  
  MCP: the interface layer for AI-native development
&lt;/h2&gt;

&lt;p&gt;Model Context Protocol, or MCP, is an open protocol for connecting language models to external tools, data sources, and workflows. In practice, that means an AI coding system no longer has to rely only on whatever you paste into a chat window. Through MCP, it can securely access things like repositories, documentation, databases, issue trackers, search, local files, and internal services using a standardized interface instead of one-off integrations. Anthropic describes MCP as a kind of “USB-C for AI apps,” and the formal specification frames it as a standard way to connect LLM applications to external data and tools. &lt;/p&gt;

&lt;p&gt;That standardization matters for engineering teams. Before MCP, every agent-tool connection tended to be custom: one integration for GitHub, another for Jira, another for Postgres, another for observability, and so on. MCP reduces that fragmentation. A tool can expose an MCP server once, and multiple clients or coding agents can potentially reuse it. That lowers integration cost, improves portability, and makes it easier to swap models or clients without rebuilding the whole tooling layer. &lt;/p&gt;

&lt;p&gt;For software development, MCP is especially important because coding is not just text generation. Real engineering work requires reading files, running commands, checking logs, querying systems, updating tickets, and validating outcomes. MCP gives agents a common way to do those tasks with structure and guardrails rather than brittle prompt hacks. Claude Code, for example, documents MCP support specifically as a way to connect to external tools and data sources, so the model can act directly on systems whose contents developers would otherwise have to copy into chat manually. &lt;/p&gt;
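
&lt;p&gt;To make this concrete, here is a minimal sketch of the kind of configuration many MCP clients accept. The mcpServers shape follows the pattern Anthropic documents, while the specific server package and placeholder token are illustrative assumptions rather than required values.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "your-token-here" }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;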

&lt;h2&gt;
  
  
  Skills: turning tacit engineering know-how into reusable workflow assets
&lt;/h2&gt;

&lt;p&gt;The next layer above MCP is the rise of skills. A skill is not just a prompt; it is a packaged workflow. In OpenAI’s documentation, a skill is a versioned bundle of files anchored by a required SKILL.md manifest, and in Codex documentation it is described as a directory containing a SKILL.md file plus optional scripts and references. The point is to encode repeatable engineering behavior in a portable, inspectable format.&lt;/p&gt;

&lt;p&gt;This is a big deal for teams because many software processes are semi-structured but repetitive: triaging a bug, preparing a release, writing migration plans, reviewing a pull request, reproducing a flaky test, or generating changelog entries. Instead of hoping the agent “remembers” how your team likes those jobs done, you can give it a skill with explicit instructions, required inputs, validation checks, and output format. The result is more consistency and less prompt drift.&lt;/p&gt;

&lt;p&gt;A SKILL.md file typically acts as the playbook. It can define when a skill should trigger, what it should do, which steps it should follow, what tools it may use, and how it should verify completion. Because it is plain Markdown, it is easy to store in Git, review in pull requests, version over time, and share across projects. OpenAI’s docs also note that skills use progressive disclosure: systems can begin with lightweight metadata such as name and description, and load the full instructions only when the task matches. That helps control context usage while still making specialized workflows available.&lt;/p&gt;
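
&lt;p&gt;As an illustration, a hypothetical SKILL.md for a release-notes workflow might look like the sketch below. The frontmatter follows the name-plus-description pattern described above; the steps are an assumed structure, not a required schema.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;---
name: release-notes
description: Draft release notes from pull requests merged since the last tag.
---

# Release notes workflow

1. List the pull requests merged since the most recent git tag.
2. Group the changes into Added, Changed, and Fixed.
3. Write one plain-language line per change and link the pull request.
4. Before finishing, verify that every breaking change is called out.
&lt;/code&gt;&lt;/pre&gt;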

&lt;p&gt;Closely related is AGENTS.md. Where a skill captures a reusable workflow, AGENTS.md captures standing instructions for how an AI agent should operate in a repository or directory. OpenAI documents that Codex reads AGENTS.md files before starting work, with more specific files overriding broader ones. This makes AGENTS.md a practical place to encode repo conventions: which tests to run, how to navigate the codebase, preferred architecture rules, formatting expectations, safety boundaries, and when to stop and ask for human review.&lt;/p&gt;
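
&lt;p&gt;A hypothetical AGENTS.md fragment shows how compact these operating rules can be; the sections and commands below are illustrative rather than mandated by any specification.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# AGENTS.md (repository root)

## Build and test
- Run dotnet build before proposing changes and dotnet test afterwards.

## Conventions
- Follow the existing folder-per-feature layout; do not add top-level projects.
- Match the formatting enforced by .editorconfig.

## Boundaries
- Never edit database migrations without flagging them for human review.
&lt;/code&gt;&lt;/pre&gt;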

&lt;p&gt;For software organizations, the combination is powerful: MCP connects the agent to tools, AGENTS.md gives the agent local operating rules, and SKILL.md provides reusable workflows for recurring tasks. That combination starts to look less like “prompting a chatbot” and more like building a lightweight operational system for software delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI agents: from code suggestions to delegated work
&lt;/h2&gt;

&lt;p&gt;The term “AI agent” gets overused, but in software development it has a concrete meaning: a system that can plan, use tools, inspect state, take actions, check results, and continue iterating toward a goal. That is a step beyond classic code completion. Instead of merely suggesting the next line, an agent can explore a codebase, open the right files, propose a patch, run tests, inspect failures, revise the implementation, and summarize what changed. OpenAI’s agents materials and Codex docs position this as a core pattern, including support for subagents and coordinated workflows.&lt;/p&gt;
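
&lt;p&gt;Schematically, that loop is small enough to sketch. In the C# sketch below, IModel, ITool, and the action shape are hypothetical stand-ins for a real model client and MCP-style tools, not any vendor’s actual API.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System.Collections.Generic;
using System.Threading.Tasks;

// A schematic agent loop; IModel, ITool, and the action shape are
// hypothetical stand-ins, not any vendor's actual API.
public sealed class CodingAgent
{
    private readonly IModel _model;
    private readonly IReadOnlyDictionary&amp;lt;string, ITool&amp;gt; _tools;

    public CodingAgent(IModel model, IReadOnlyDictionary&amp;lt;string, ITool&amp;gt; tools)
        =&amp;gt; (_model, _tools) = (model, tools);

    public async Task&amp;lt;string&amp;gt; RunAsync(string goal, int maxSteps = 20)
    {
        var transcript = new List&amp;lt;string&amp;gt; { $"Goal: {goal}" };
        for (var step = 0; step &amp;lt; maxSteps; step++)
        {
            // Ask the model to plan the next action given everything so far.
            var action = await _model.NextActionAsync(transcript);
            if (action.IsFinished)
                return action.Summary; // e.g. a patch plus an explanation

            // Execute the chosen tool: run tests, read a file, query logs.
            var result = await _tools[action.ToolName].InvokeAsync(action.Arguments);
            transcript.Add($"{action.ToolName}: {result}"); // feed the result back
        }
        return "Stopped: step budget exhausted; needs human review.";
    }
}
&lt;/code&gt;&lt;/pre&gt;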

&lt;p&gt;This matters because engineering work is increasingly task-oriented rather than snippet-oriented. A product manager does not ask for “20 lines of React.” They ask for “add SSO to the admin console,” “debug why checkout fails for one region,” or “prepare the service for a schema migration.” Those tasks require decomposition, context gathering, execution, and verification. AI agents are starting to handle those loops with increasing reliability when given the right tools and constraints.&lt;/p&gt;

&lt;p&gt;The most effective teams are not treating agents as magical replacements for engineers. They are treating them as scoped operators. An agent can own bounded, reviewable work: generating boilerplate, investigating logs, preparing patches, validating docs, or running a prescribed release checklist. Humans still provide architecture, priorities, judgment, and final approval. But as skills mature and MCP ecosystems expand, the boundary of what can be delegated is widening.&lt;/p&gt;

&lt;h2&gt;
  
  
  The new models powering software development
&lt;/h2&gt;

&lt;p&gt;The model layer is also moving quickly. On the OpenAI side, the current official model catalog highlights GPT-5.4 as the flagship for agentic, coding, and professional workflows, alongside GPT-5.4 pro, GPT-5.4 mini, and GPT-5.4 nano. OpenAI’s model materials also continue to position GPT-5 and GPT-4.1 as important options, with GPT-4.1 described in release notes as especially strong at coding and precise instruction following.&lt;/p&gt;

&lt;p&gt;For developers, that lineup suggests a tiered strategy rather than a single-model strategy. Use a top-tier reasoning model such as GPT-5.4 for architecture, debugging, large refactors, and multi-step tool use; use smaller variants such as GPT-5.4 mini or nano for low-latency support work like classification, formatting, smaller code edits, or agent substeps; and use specialized coding-oriented models like GPT-4.1 when instruction precision and practical software tasks matter more than broad frontier reasoning. That is an inference from the model descriptions and positioning, but it matches how many teams now structure agent systems: one strong planner, plus cheaper executors for routine work. &lt;/p&gt;

&lt;p&gt;Anthropic’s current coding story is similarly agent-oriented. Official materials highlight Claude Sonnet 4.6 as a major upgrade across coding, computer use, long-context reasoning, and agent planning, and Claude Opus 4.6 as Anthropic’s latest Opus release for stronger coding and multi-step tasks. Anthropic has also published directly about connecting agents to tools with MCP, which reinforces how tightly model capability and integration capability now fit together. &lt;/p&gt;

&lt;p&gt;Google’s model family is also now clearly in the software-development race. Official Google materials point to Gemini 3.1 Pro as a newer high-end model for complex reasoning, while model and product pages continue to emphasize Gemini 3 Pro and Gemini 2.5 Pro for coding, long-context analysis, and developer workflows. Google has also released a Gemini 2.5 Computer Use model aimed at UI interaction tasks, which is notable for agentic software workflows that need to operate through web or desktop interfaces. &lt;/p&gt;

&lt;p&gt;Mistral is pushing hard on the open and enterprise coding side. Its current public materials highlight Mistral Small 4 for chat, coding, and agentic tasks, and its coding solutions page points to Codestral for code completion and Devstral for agentic coding. Mistral has also announced Devstral 2 and Devstral Small 2, which shows how rapidly the coding-agent segment is becoming specialized rather than relying on one general-purpose model for everything.&lt;/p&gt;

&lt;p&gt;So what are the “new models” worth naming right now for software development? A practical shortlist would include GPT-5.4, GPT-5.4 mini, GPT-5.4 nano, GPT-4.1, Claude Sonnet 4.6, Claude Opus 4.6, Gemini 3.1 Pro, Gemini 2.5 Pro, Gemini 2.5 Computer Use, Mistral Small 4, Codestral, and Devstral 2. Different teams will choose differently, but the pattern is clear: the market is converging on model families optimized for reasoning, coding, low-latency subwork, and computer-using agents rather than a single monolithic assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for engineering teams
&lt;/h2&gt;

&lt;p&gt;The strategic change is simple: the winning setup is no longer just “pick the smartest model.” It is “build the right stack.” That stack usually has four layers. First, a capable model family. Second, an agent runtime that can plan and use tools. Third, MCP connections into the systems where real work happens. Fourth, local organizational memory encoded in AGENTS.md and SKILL.md files. When those layers are in place, AI becomes much more reliable, much easier to evaluate, and much more reusable across projects.&lt;/p&gt;

&lt;p&gt;This also changes how teams should think about adoption. The first wave of AI coding focused on personal productivity: faster snippets, faster explanations, faster drafts. The next wave is operational productivity: better bug triage, repeatable release workflows, structured review processes, environment-aware debugging, and multi-agent parallelization across workstreams. OpenAI’s Codex materials explicitly describe multi-agent workflows and cloud worktrees, while both Anthropic and Google are emphasizing coding plus agent planning plus tool use. &lt;/p&gt;

&lt;h2&gt;
  
  
  The real opportunity: software development as a system of explicit instructions
&lt;/h2&gt;

&lt;p&gt;Perhaps the most important long-term effect is cultural. Skills and agent instruction files force teams to externalize how they work. Many engineering organizations run on tacit knowledge: one senior developer knows how to cut a release, another knows how to debug the build pipeline, another knows what “done” means for documentation. Once that knowledge is captured in SKILL.md and AGENTS.md, it becomes shareable, reviewable, testable, and executable by both humans and agents. That is useful even before the AI enters the picture.&lt;/p&gt;

&lt;p&gt;In that sense, MCP, skills, and AI agents are not separate trends. They are parts of the same transition: from AI that generates text about software to AI that participates inside software workflows. The best engineering teams will not merely ask models for code. They will build environments where models can access the right context, follow the right instructions, use the right tools, and hand back work that is easier to review and trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Software development is entering an agent-native phase. MCP is becoming the connectivity standard. SKILL.md and AGENTS.md are emerging as practical ways to package workflow knowledge. AI agents are taking on larger, more verifiable units of work. And the newest model families — from GPT-5.4 and GPT-4.1 to Claude Sonnet 4.6, Gemini 3.1 Pro, and Devstral 2 — are being designed not just to chat, but to operate inside real engineering systems. The implication is clear: the future of coding will be shaped less by raw model intelligence alone, and more by how well teams combine models, protocols, tools, and structured instructions into one coherent development stack.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>development</category>
      <category>ai</category>
    </item>
    <item>
      <title>Test Automation in .NET Core: Quality and Efficiency in Software Development</title>
      <dc:creator>Muhammad Rizwan</dc:creator>
      <pubDate>Thu, 23 Jan 2025 12:37:49 +0000</pubDate>
      <link>https://forem.com/muhammad_rizwan_32ec93eee/test-automation-in-net-core-quality-and-efficiency-in-software-development-13f4</link>
      <guid>https://forem.com/muhammad_rizwan_32ec93eee/test-automation-in-net-core-quality-and-efficiency-in-software-development-13f4</guid>
      <description>&lt;p&gt;The evolution of software engineering has shifted test automation from a supplementary task to a foundational pillar of the modern development lifecycle. Within the high-performance .NET Core ecosystem, these advancements provide the essential framework for cross-platform consistency, enabling applications to scale aggressively without sacrificing architectural integrity.&lt;/p&gt;

&lt;p&gt;This article explores the frontier of .NET Core testing strategies, detailing the shift from legacy approaches to modern, cloud-native validation. We will navigate the critical path from selecting sophisticated testing frameworks to orchestrating seamless CI/CD integration. By synthesizing industrial insights with emerging best practices, this guide provides a roadmap for engineers—whether they are architecting a new automation suite or optimizing a mature strategy for long-term technical excellence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Test Automation Matters in .NET Core
&lt;/h2&gt;

&lt;p&gt;.NET Core provides a powerful, cross-platform framework for developers to build large enterprise solutions and agile startup applications. As these applications grow in size and complexity, manual testing becomes labor-intensive and prone to human error. In contrast, automated tests run consistently and quickly, making defect detection easier at an early stage of the development process. Early defect detection reduces rework later on, thereby reducing costs.&lt;/p&gt;

&lt;p&gt;Moreover, automated tests give teams a safety net. When developers refactor code or add new features, test suites can quickly confirm that functionality which previously worked still behaves as expected. This is crucial in continuous delivery environments, where rapid deployments require quality checks to be consistently maintained. With .NET Core’s cross-platform reach, automated tests can run on a multitude of operating systems, making sure code changes behave consistently across those varied environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Foundation: Understanding .NET Core’s Testing Ecosystem
&lt;/h2&gt;

&lt;p&gt;One of the most important decisions in implementing a successful test automation strategy is the selection of an appropriate testing framework. For .NET Core, several options exist, such as xUnit, NUnit, and MSTest. Each has its strengths: xUnit is praised for its modern design and alignment with .NET Core conventions, NUnit boasts rich parameterization features, and MSTest integrates smoothly with Microsoft’s ecosystem.&lt;/p&gt;

&lt;p&gt;For projects in .NET Core, many developers prefer xUnit. According to one professional, "Most of our applications are on .NET Core, and I have used xUnit mostly for unit tests since xUnit is complementary to .NET Core." Indeed, xUnit was designed in a way that aligns well with the structure and idioms of .NET Core, making it easy for many teams to get on board with. When selecting a test framework, consider the project’s complexity, the team’s expertise, and integration with your continuous integration and deployment pipeline. If your team already has experience with MSTest or NUnit, either is a solid option; otherwise, xUnit is often a good choice because it is simple and flexible.&lt;/p&gt;
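
&lt;p&gt;For readers new to xUnit, a minimal test looks like the sketch below. The [Fact] attribute and Assert API are standard xUnit; PriceCalculator is a hypothetical class standing in for your own code.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using Xunit;

public class PriceCalculatorTests
{
    [Fact]
    public void CalculateTotal_AppliesTenPercentDiscount_ForEligibleOrders()
    {
        // Arrange: PriceCalculator is a hypothetical class under test.
        var calculator = new PriceCalculator();

        // Act
        var total = calculator.CalculateTotal(subtotal: 200m, discountEligible: true);

        // Assert: 200 minus a 10% discount.
        Assert.Equal(180m, total);
    }
}
&lt;/code&gt;&lt;/pre&gt;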

&lt;h2&gt;
  
  
  Building a Comprehensive Testing Strategy
&lt;/h2&gt;

&lt;p&gt;A full-fledged testing strategy includes several layers, the foundation of which is unit tests. These verify the smallest units of functionality, like methods in controllers, services, helper classes, and domain entities. As one experienced developer shared: "I write unit tests for every component. Write separate tests for controllers, for helper classes, for domain entities/value objects if my application is designed with domain-driven design, domain services, and infrastructure classes." This approach helps ensure each part of your application behaves correctly in isolation before you integrate the parts together.&lt;/p&gt;

&lt;p&gt;A major testability advantage of .NET Core is its built-in Dependency Injection (DI). Because dependencies are injected rather than hard-coded, developers can easily swap real dependencies for mocked or in-memory substitutes at test execution time. Libraries such as Moq make this quite easy. Mocking replaces a real external dependency (for example, a remote API or database call) with a simulated component whose behavior you control. This keeps your tests laser-focused on the logic of the class or method under scrutiny. As one practitioner explains, "Mocking any external classes or services helps a lot for unit tests. Since you just need to write tests for the method/class. With the mocking libraries you can simply mock the dependencies and their behavior."&lt;/p&gt;
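
&lt;p&gt;A short sketch of that pattern with Moq follows. IExchangeRateService and InvoiceService are hypothetical types, while the Setup and Verify calls are the standard Moq API.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using Moq;
using Xunit;

public class InvoiceServiceTests
{
    [Fact]
    public void ConvertTotal_UsesRateFromExchangeService()
    {
        // Arrange: mock the external dependency instead of calling a real API.
        // IExchangeRateService and InvoiceService are hypothetical types.
        var rates = new Mock&amp;lt;IExchangeRateService&amp;gt;();
        rates.Setup(r =&amp;gt; r.GetRate("USD", "EUR")).Returns(0.9m);

        var service = new InvoiceService(rates.Object);

        // Act
        var converted = service.ConvertTotal(100m, "USD", "EUR");

        // Assert: the mocked rate was applied and the dependency called once.
        Assert.Equal(90m, converted);
        rates.Verify(r =&amp;gt; r.GetRate("USD", "EUR"), Times.Once);
    }
}
&lt;/code&gt;&lt;/pre&gt;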

&lt;p&gt;Such mock-driven approaches also reduce flakiness and speed up execution by avoiding real network calls and system interactions. The result is a more stable, deterministic test suite that can be run repeatedly to confirm that your application logic remains consistent over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration Testing and UI Automation
&lt;/h2&gt;

&lt;p&gt;Unit tests are the foundation of any testing strategy, but more layers need to be built on top to be confident that everything works as expected. Integration tests verify interactions among different components, such as controllers, databases, and external APIs, confirming that the boundaries between them are well defined and correctly implemented. These tests range from spinning up a simple in-memory database with something like Microsoft.EntityFrameworkCore.InMemory to using mock services to exercise calls out to an API. Though usually slower and more involved, integration tests have the advantage of finding things a unit test won’t, such as errors in configuration, database migrations, or network calls.&lt;/p&gt;
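
&lt;p&gt;A sketch of such a test with the in-memory provider is shown below. AppDbContext, Order, and OrderRepository are hypothetical types; UseInMemoryDatabase comes from the Microsoft.EntityFrameworkCore.InMemory package.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class OrderRepositoryTests
{
    [Fact]
    public async Task SaveAsync_PersistsOrder()
    {
        // Arrange: an in-memory database stands in for the real one.
        // AppDbContext, Order, and OrderRepository are hypothetical types.
        var options = new DbContextOptionsBuilder&amp;lt;AppDbContext&amp;gt;()
            .UseInMemoryDatabase(databaseName: "orders-test")
            .Options;

        await using var context = new AppDbContext(options);
        var repository = new OrderRepository(context);

        // Act
        await repository.SaveAsync(new Order { Id = 1, Total = 42m });

        // Assert: the saved order is visible through a direct query.
        Assert.Equal(1, await context.Orders.CountAsync());
    }
}
&lt;/code&gt;&lt;/pre&gt;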

&lt;p&gt;For web applications, UI automation has emerged as a critical piece of the testing puzzle. Tools like Selenium WebDriver have long been the standard for browser-based testing. By scripting user interactions, such as filling out forms, clicking buttons, or navigating through pages, you can verify that the application’s front-end behaves correctly under real browser conditions. More recently, Playwright has been gaining popularity as a strong alternative: it natively supports multiple browsers and is better at handling modern, JavaScript-heavy pages, especially single-page apps and sites that rely on client-side rendering.&lt;/p&gt;
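
&lt;p&gt;A minimal Playwright for .NET flow looks like the sketch below; the URL and selectors are placeholders for your own application.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using Microsoft.Playwright;

// A minimal Playwright for .NET flow; the URL and selectors are placeholders.
using var playwright = await Playwright.CreateAsync();
await using var browser = await playwright.Chromium.LaunchAsync();
var page = await browser.NewPageAsync();

await page.GotoAsync("https://localhost:5001/login");
await page.FillAsync("#username", "test-user");
await page.FillAsync("#password", "test-password");
await page.ClickAsync("button[type=submit]");

// Playwright auto-waits for elements, which reduces flaky timing issues.
await page.WaitForSelectorAsync("text=Dashboard");
&lt;/code&gt;&lt;/pre&gt;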

&lt;p&gt;While highly valuable for validating user-facing flows, end-to-end and UI tests are more prone to flakiness and take longer to run. Managing synchronization points, such as waiting for elements to load, can be challenging, and small changes to the UI can break test scripts, requiring frequent maintenance. Despite these drawbacks, the ability to confirm that the entire stack (from the front-end to the database) operates in unison is invaluable. Balancing the depth of your UI tests against the reliability of your unit and integration tests is essential for an efficient overall strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Integration and Deployment
&lt;/h2&gt;

&lt;p&gt;Another benefit of automated tests is how smoothly they integrate with CI/CD pipelines. A common workflow for development teams today is to run automated tests on every commit or pull request. As one team member puts it: "When we raise a Pull Request to the dev branch, all our tests get executed, and if any of the tests fail, it won’t deploy the build." This quick feedback loop prevents problematic code from merging into the main branch in the first place, ensuring that the shared codebase remains stable.&lt;/p&gt;

&lt;p&gt;Whether it’s Azure DevOps, GitHub Actions, Jenkins, or another tool, the general approach is to first run dotnet build to compile the projects, followed by dotnet test to run all the tests. Many of these platforms natively support code coverage reports via libraries such as Coverlet, which measure how much of your code is exercised by tests. Although coverage is not the only indicator of test quality, it can reveal areas that have been left untested and might need extra attention. Many teams enforce a threshold, say 70%, to incentivize developers to make sure the important parts of the application receive tests. In the words of another developer: "We just make sure that the code coverage must be above 70%."&lt;/p&gt;
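
&lt;p&gt;As one example of that setup, a GitHub Actions workflow along the lines of the sketch below builds the solution, runs the tests, and enforces a coverage floor through Coverlet’s MSBuild properties. The action versions and the 70% threshold are illustrative.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: tests
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet build --configuration Release
      # Coverlet's MSBuild properties fail the run below the coverage floor.
      - run: &amp;gt;
          dotnet test --configuration Release
          /p:CollectCoverage=true /p:Threshold=70
&lt;/code&gt;&lt;/pre&gt;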

&lt;p&gt;Also, the introduction of CI/CD pipelines has created a culture of continuous improvement. Teams continuously refine their test suites to remove redundancies, optimize test execution time, and pragmatically balance speed with thoroughness. In time, such focus on test automation greatly reduces the incidence of production bugs and gives teams increased confidence in each deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices and Common Challenges
&lt;/h2&gt;

&lt;p&gt;While the mechanics of test automation become second nature once you’ve mastered your tooling, the art lies in creating a sustainable testing culture. Tests should be clear, concise, and isolated: a given test should, in theory, cover one scenario or path through the code. The more a single test does, the more difficult it is to maintain, the more likely it is to fail, and the less informative it is when something breaks.&lt;/p&gt;

&lt;p&gt;Another important practice is to organize your test codebase effectively. For example, group tests by feature or layer: controllers, services, and domain logic. This makes it easier for new developers to find relevant tests. Adopting clear naming conventions such as ClassName_MethodName_ExpectedOutcome helps explain the purpose of each test. Regular refactoring of test code is just as important as refactoring production code: obsolete or unused tests clutter the suite and degrade its overall usefulness.&lt;/p&gt;

&lt;p&gt;Flaky tests are a legendary annoyance. Causes range from race conditions to network latency, and the result can be developer distrust in the test suite. Common remedies include introducing explicit waits for elements during UI testing, ensuring parallel tests do not share mutable state, and improving the mocking of external dependencies. Performing root-cause analysis on flaky tests can also yield valuable insights into how to improve the overall testing strategy.&lt;/p&gt;

&lt;p&gt;Balancing automated against manual testing is the big question across many teams today. Automated tests catch regressions quickly as new functionality lands, and they are highly efficient at repeatedly verifying existing behavior. A common strategy is to automate the routine, eyeball-style checks and reserve manual effort for exploratory testing, usability feedback, and infrequent one-time scenarios. As one developer notes, "We make sure we cover all scenarios which might occur." While 100% coverage of all possible scenarios is unrealistic, a well-prioritized plan guarantees that critical paths and workflows are protected by automated means.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Path Forward: Evolving Your .NET Core Test Automation
&lt;/h2&gt;

&lt;p&gt;Start small and grow your test automation practice alongside your application. Begin by covering core functionality with unit tests, then progress to mocking external dependencies, and finally add integration and UI checks. Track coverage trends, test runtime, and the frequency of flaky or failing tests to determine where improvement is required.&lt;/p&gt;

&lt;p&gt;If you’re not sure how to get started with integration tests, consider firing up local, in-memory versions of databases and third-party services. This will let you simulate real-world scenarios without the overhead of setting up multiple external environments. Tools like Docker also make it easier to set up short-lived test containers that closely match production, making integration testing both realistic and manageable.&lt;/p&gt;
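
&lt;p&gt;For instance, a disposable database container can be started just for the test run; in the sketch below, the image tag, password, and test filter are placeholders.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Start a throwaway Postgres for integration tests (placeholder credentials).
docker run -d --rm --name it-postgres \
  -e POSTGRES_PASSWORD=test-only \
  -p 5432:5432 postgres:16

dotnet test --filter Category=Integration

# Tear the container down once the tests finish.
docker stop it-postgres
&lt;/code&gt;&lt;/pre&gt;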

&lt;p&gt;Simultaneously, keep an eye on emerging tools and practices. While Selenium and Playwright are strong UI automation frameworks today, newer solutions could appear tomorrow. Explore evolving best practices for domain-driven design if you’re writing tests for value objects and domain entities. Regularly share lessons learned with your teammates, incorporating feedback loops that optimize the entire development lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;.NET Core test automation has evolved from a nice-to-have into an essential part of delivering robust, maintainable software in a fast-paced industry. Far from being a luxury, automated tests act as quality gatekeepers that protect codebases from regressions, enable continuous deployment, and free developers from laborious manual checks. By leveraging .NET Core’s powerful ecosystem, including frameworks such as xUnit, MSTest, and NUnit alongside mocking tools and DI systems, teams can write test suites that give them fast, clear feedback on code quality and correctness.&lt;/p&gt;

&lt;p&gt;But true success with test automation requires careful integration with continuous integration/continuous deployment pipelines, dedication to readable and maintainable test writing, and the ability to adapt as the needs of your application change. Teams must confront pragmatic challenges head-on: taming flaky tests, and deciding which areas are worth automating and which will pay bigger dividends when explored manually.&lt;/p&gt;

&lt;p&gt;Test automation is not just about finding bugs; what it really seeks is a general raising of the bar in development itself: better design decisions, a more collaborative workflow, and a lasting culture of quality. In this respect, .NET Core remains a constantly growing and innovating platform, and with it, so too will grow the opportunities and challenges of test automation. Whether you are a seasoned developer or just starting out, a thought-out automation approach changes how you build, test, and deliver software. By choosing the right frameworks, integrating tightly with your CI/CD environment, and balancing the various forms of testing, you will position your .NET Core projects for long-term success, delivering resilient, high-performing applications that please both users and stakeholders.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
