<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jeff Zuerlein</title>
    <description>The latest articles on Forem by Jeff Zuerlein (@jzuerlein).</description>
    <link>https://forem.com/jzuerlein</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1259646%2Feaefb84c-ff8b-41d5-a91e-6fd335a5433d.png</url>
      <title>Forem: Jeff Zuerlein</title>
      <link>https://forem.com/jzuerlein</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jzuerlein"/>
    <language>en</language>
    <item>
      <title>Using T-SQL To Generate Large Amounts Of Test Data</title>
      <dc:creator>Jeff Zuerlein</dc:creator>
      <pubDate>Tue, 16 Apr 2024 04:10:02 +0000</pubDate>
      <link>https://forem.com/jzuerlein/using-t-sql-to-generate-large-amounts-of-test-data-4228</link>
      <guid>https://forem.com/jzuerlein/using-t-sql-to-generate-large-amounts-of-test-data-4228</guid>
      <description>&lt;p&gt;Being able to generate millions of rows of data in your SQL database, in a few seconds, opens up a world of options.&lt;/p&gt;

&lt;p&gt;It means you can validate your data access strategy, and UI design early in the development process.  The goal is to fail fast, to minimize risk.  Having a set of data that mirrors the size you expect in production means you’ll quickly see what works, and what doesn’t.&lt;/p&gt;

&lt;p&gt;In cases where you need to have data that matches real world data, you could leverage open source datasets of names and locations to provide information that conforms to your software’s business rules.&lt;/p&gt;

&lt;p&gt;ChatGPT can also provide sets of information that may not be accurate, but close enough for load testing.&lt;/p&gt;

&lt;p&gt;I’ve created a video to discuss the benefits of testing an application with large sets of data, and provide a demonstration of T-SQL code that you can use in your own projects.&lt;/p&gt;
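
&lt;p&gt;As a rough sketch of the technique (the table and column names here are hypothetical, not the exact code from the video), you can cross join a small values list with itself to produce a million sequential numbers, then insert one row per number:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Hypothetical table used for load testing
CREATE TABLE dbo.TestCustomer (
    CustomerId INT PRIMARY KEY,
    Name       NVARCHAR(50),
    CreatedAt  DATETIME2
);

-- 10 rows cross joined six times yields 10^6 = 1,000,000 rows
WITH Ten AS (
    SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS t(n)
),
Nums AS (
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM Ten a CROSS JOIN Ten b CROSS JOIN Ten c
         CROSS JOIN Ten d CROSS JOIN Ten e CROSS JOIN Ten f
)
INSERT INTO dbo.TestCustomer (CustomerId, Name, CreatedAt)
SELECT n,
       CONCAT(N'Customer ', n),
       DATEADD(SECOND, n % 86400, '2024-01-01')
FROM Nums;
&lt;/code&gt;&lt;/pre&gt;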

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ivNTsMw36WI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>sql</category>
      <category>testing</category>
    </item>
    <item>
      <title>What’s the difference between Span&lt;T&gt; and Memory&lt;T&gt;?</title>
      <dc:creator>Jeff Zuerlein</dc:creator>
      <pubDate>Mon, 12 Feb 2024 05:51:13 +0000</pubDate>
      <link>https://forem.com/jzuerlein/whats-the-difference-between-span-and-memory-4e</link>
      <guid>https://forem.com/jzuerlein/whats-the-difference-between-span-and-memory-4e</guid>
      <description>&lt;p&gt;The first time I used Span of T and Memory of T, I struggled to understand what the difference was.  Don’t struggle, watch my video instead.  &lt;/p&gt;

&lt;p&gt;My goal is not to tell you how awesome these features are, but to get you over the hump of understanding how they work.  I cover what Span and Memory of T do and why, along with some description of their internals.  Finally, I discuss what the owner consumer model is, and why it's relevant to Memory of T.&lt;/p&gt;
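
&lt;p&gt;The core distinction can be sketched in a few lines of C# (a minimal illustration, assuming the usual System and System.Threading.Tasks namespaces; FillAsync is a made-up helper):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Span&amp;lt;T&amp;gt; is a ref struct: stack-only, so it cannot be stored
// in a class field or used across an await boundary.
Span&amp;lt;char&amp;gt; stackBuffer = stackalloc char[16];
"hello".AsSpan().CopyTo(stackBuffer);

// Memory&amp;lt;T&amp;gt; is an ordinary struct: it can live on the heap,
// sit in fields, and cross await boundaries.
static async Task FillAsync(Memory&amp;lt;byte&amp;gt; buffer)
{
    buffer.Span.Fill(0xFF);   // materialize the Span&amp;lt;T&amp;gt; synchronously
    await Task.Yield();       // the Memory&amp;lt;T&amp;gt; handle survives the await
}
&lt;/code&gt;&lt;/pre&gt;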

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Hb5QPFWm8i4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
      <category>performance</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>Supercharged Dapper Repositories: Boost Performance With Sorted Span&lt;T&gt;</title>
      <dc:creator>Jeff Zuerlein</dc:creator>
      <pubDate>Thu, 18 Jan 2024 05:39:05 +0000</pubDate>
      <link>https://forem.com/jzuerlein/supercharged-dapper-repositories-boost-performance-with-sorted-span-5dd4</link>
      <guid>https://forem.com/jzuerlein/supercharged-dapper-repositories-boost-performance-with-sorted-span-5dd4</guid>
      <description>&lt;p&gt;Dapper handles the mapping of object properties, but if your aggregate has child collections, you have to manually do that.  When those collections get large, there is an opportunity to improve performance!  See what the benchmarks prove about using Linq where extensions, dictionaries, and sorted spans.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/G0zRTJxtO_8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>database</category>
      <category>csharp</category>
      <category>sqlserver</category>
      <category>dapper</category>
    </item>
  </channel>
</rss>
