<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Meg528</title>
    <description>The latest articles on Forem by Meg528 (@meg528).</description>
    <link>https://forem.com/meg528</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1096069%2Fdad12650-7cea-4bd0-a782-2c3548aebcfe.jpeg</url>
      <title>Forem: Meg528</title>
      <link>https://forem.com/meg528</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/meg528"/>
    <language>en</language>
    <item>
      <title>Behind the Scenes: How Database Traffic Control Works</title>
      <dc:creator>Meg528</dc:creator>
      <pubDate>Wed, 01 Apr 2026 19:13:49 +0000</pubDate>
      <link>https://forem.com/planetscale/behind-the-scenes-how-database-traffic-control-works-20pe</link>
      <guid>https://forem.com/planetscale/behind-the-scenes-how-database-traffic-control-works-20pe</guid>
      <description>&lt;p&gt;&lt;em&gt;By Patrick Reynolds&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In March, we released Database Traffic Control™, a feature for mitigating and preventing database overload due to unexpectedly expensive SQL queries. For an overview, &lt;a href="https://planetscale.com/blog/introducing-database-traffic-control" rel="noopener noreferrer"&gt;read the blog post introducing the feature&lt;/a&gt;, and to get started using it, read the &lt;a href="https://planetscale.com/docs/postgres/traffic-control/" rel="noopener noreferrer"&gt;reference documentation&lt;/a&gt;. This post is a deep dive into how the feature works.&lt;/p&gt;

&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;If you already know how Postgres and Postgres extensions work internally, you can skip this section.&lt;/p&gt;

&lt;p&gt;A single Postgres server is made up of many running processes. Each client connection to Postgres gets its own dedicated worker &lt;a href="https://planetscale.com/blog/processes-and-threads" rel="noopener noreferrer"&gt;process&lt;/a&gt;, and all SQL queries from that client connection run, one at a time, in that worker process. When a client sends a SQL query, the worker process parses it, plans it, executes it, and sends any results back to the client. &lt;a href="https://planetscale.com/blog/what-is-a-query-planner" rel="noopener noreferrer"&gt;Planning&lt;/a&gt; is a key step, in which Postgres takes a parsed query and turns it into a step-by-step execution plan that specifies the indexes to use, the order to load rows from multiple tables, and the operators that will be used to filter, aggregate, and join those rows. Most queries can be run using several different plans, so it's the planner's job to estimate the cost of the possible plans and pick the cheapest one.&lt;/p&gt;

&lt;p&gt;Every part of how Postgres handles queries can be modified by extensions. Extensions can add new functions, new data types, new storage systems, and new authentication methods, among other things. (They can also &lt;a href="https://www.vldb.org/pvldb/vol18/p1962-kim.pdf" rel="noopener noreferrer"&gt;add new failure modes&lt;/a&gt;, but that's a topic for another day.) Extensions can also passively observe and report on traffic, like PlanetScale's own &lt;code&gt;pginsights&lt;/code&gt; extension that powers &lt;a href="https://planetscale.com/docs/postgres/monitoring/query-insights" rel="noopener noreferrer"&gt;Query Insights&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Much of what Postgres extensions can do, they do using hooks. A hook is a function that runs before, after, or instead of existing Postgres functionality. Want to observe or replace the planner? There's a hook for that. Want to examine queries as they execute? There are three hooks for that. As of this writing, there are &lt;a href="https://github.com/search?q=repo%3Apostgres%2Fpostgres%20%2F%5E%5CS.*%5Cw_hook%20%3D%20NULL%2F&amp;amp;type=code" rel="noopener noreferrer"&gt;55 hooks&lt;/a&gt; available to anyone writing Postgres extensions.&lt;/p&gt;

&lt;p&gt;PlanetScale's &lt;code&gt;pginsights&lt;/code&gt; extension installs hooks for the &lt;code&gt;ExecutorRun&lt;/code&gt; and &lt;code&gt;ProcessUtility&lt;/code&gt; functions, among others, to run timers and measure resource consumption while SQL statements execute. Since each hook wraps the original Postgres functionality, that means &lt;code&gt;pginsights&lt;/code&gt; sees each query just before it executes and again just after it completes. Any time that has elapsed and any resources the worker process has consumed are directly attributable to that query. The extension does some aggregation, sends aggregate data periodically to a data pipeline, and returns control to Postgres to accept the next query.&lt;/p&gt;
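
&lt;p&gt;The wrap-and-measure pattern can be sketched in a few lines. This is a conceptual illustration in Python, not the extension itself (&lt;code&gt;pginsights&lt;/code&gt; is C code installed through Postgres's hook API), and the function names here are hypothetical:&lt;/p&gt;

```python
import time
from collections import defaultdict

# Conceptual sketch of the hook pattern. pginsights is a C extension built on
# Postgres's real hook API; the names below are illustrative, not that API.
stats = defaultdict(lambda: {"calls": 0, "total_secs": 0.0})

def executor_run(query):
    """Stand-in for the original Postgres ExecutorRun functionality."""
    return f"results for {query}"

def hooked_executor_run(query):
    """The hook wraps the original: it sees the query just before it
    executes and again just after, so elapsed time is attributable to it."""
    start = time.monotonic()
    try:
        return executor_run(query)
    finally:
        s = stats[query]
        s["calls"] += 1
        s["total_secs"] += time.monotonic() - start

hooked_executor_run("SELECT 1")
```

&lt;p&gt;The essential property is the same as the real hook's: the wrapper runs strictly before and after the original function, so everything measured in between belongs to that one query.&lt;/p&gt;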

&lt;h2&gt;Insights, hooks, and blocking queries&lt;/h2&gt;

&lt;p&gt;When we first started planning for Traffic Control, we knew we would use a Postgres extension with a hook on &lt;code&gt;ExecutorRun&lt;/code&gt; to decide whether or not each statement would be allowed to run. Initially, we wrote a new extension for this. We soon realized that there are two ways to choose which queries to block: based on static analysis of the individual query, or based on cumulative measurements of resource usage over time. We split the extension along those lines. Blocking based on static analysis got merged into the project that became &lt;code&gt;pg_strict&lt;/code&gt;. Blocking based on cumulative resource usage became Traffic Control.&lt;/p&gt;

&lt;p&gt;It turns out Traffic Control needed the same hook points and much of the same information that &lt;code&gt;pginsights&lt;/code&gt; already had. So rather than duplicate all that code and impose the extra runtime overhead of another extension, we taught &lt;code&gt;pginsights&lt;/code&gt; how to block queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F662vpyh9ewa8kywo9dr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F662vpyh9ewa8kywo9dr5.png" alt="Traffic Control checks" width="800" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If there are any Traffic Control rules configured, then at the beginning of each query execution, the extension does four things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It identifies all of the rules that match the &lt;a href="https://planetscale.com/docs/postgres/traffic-control/concepts#rules" rel="noopener noreferrer"&gt;tags and other metadata&lt;/a&gt; of the query. Each rule identifies a budget; multiple rules can map to the same budget.&lt;/li&gt;
&lt;li&gt;It checks to see if any of the applicable budgets has reached its concurrency limit.&lt;/li&gt;
&lt;li&gt;It checks if the query's estimated cost is higher than any applicable budget's per-query limit.&lt;/li&gt;
&lt;li&gt;It checks to see if every applicable budget has enough available capacity for the query to begin execution. In the &lt;a href="https://planetscale.com/docs/postgres/traffic-control/concepts#resource-budget-limits" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, these parameters are described as the burst limit and the server share. As we'll see &lt;a href="https://planetscale.com/blog/behind-the-scenes-how-traffic-control-works#leaky-buckets" rel="noopener noreferrer"&gt;below&lt;/a&gt;, those parameters combine over time to describe the behavior of a leaky-bucket rate limiter.&lt;/li&gt;
&lt;/ol&gt;
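
&lt;p&gt;The four steps above can be sketched as a single admission function. The &lt;code&gt;Budget&lt;/code&gt; shape and field names are invented for illustration; the real data model is internal to the extension:&lt;/p&gt;

```python
# Hypothetical sketch of the four admission checks; the Budget/rule shapes and
# field names are invented for illustration. Debt accounting is omitted here.
class Budget:
    def __init__(self, concurrency_limit, per_query_limit, capacity):
        self.concurrency_limit = concurrency_limit  # max concurrent queries
        self.per_query_limit = per_query_limit      # max cost of one query
        self.capacity = capacity                    # bucket size (burst limit)
        self.debt = 0.0                             # accumulated recent cost
        self.running = 0                            # queries executing now

def admit(query_meta, estimated_cost, rules):
    # 1. Identify rules whose conditions all match the query's metadata;
    #    collect their budgets (several rules may share one budget).
    budgets = {id(r["budget"]): r["budget"] for r in rules
               if r["match"].items() <= query_meta.items()}.values()
    for b in budgets:
        if b.running >= b.concurrency_limit:        # 2. concurrency limit
            return False
        if estimated_cost > b.per_query_limit:      # 3. per-query cost limit
            return False
        if b.debt + estimated_cost > b.capacity:    # 4. cumulative capacity
            return False
    return True
```

&lt;p&gt;If any budget fails a check, the real feature warns or blocks according to that budget's configuration; the sketch simply refuses admission.&lt;/p&gt;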

&lt;p&gt;If any budget fails any of these checks, then the query is warned or blocked, based on how the budget is configured.&lt;/p&gt;

&lt;p&gt;Blocking a query just before it begins execution means the server spends no resources on the query, beyond the cost of the planner and the decision to block it. That's an improvement over schedulers like &lt;a href="https://www.man7.org/linux/man-pages/man7/cgroups.7.html" rel="noopener noreferrer"&gt;Linux cgroups&lt;/a&gt;, which let every task begin and simply starve them of resources if higher-priority tasks exist in the system. It's also an improvement over the &lt;a href="https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-STATEMENT-TIMEOUT" rel="noopener noreferrer"&gt;Postgres&lt;/a&gt; &lt;code&gt;statement_timeout&lt;/code&gt; setting, which allows any overly expensive query to consume resources until it times out. Traffic Control blocks expensive, low-priority queries before they begin.&lt;/p&gt;

&lt;h2&gt;Cost prediction&lt;/h2&gt;

&lt;p&gt;I glossed over something important in the last section: cost. The concurrency check is easy, because it just counts worker processes already assigned to the queries associated with a Traffic Control budget. But the other two checks — per-query cost and cumulative cost — require us to know what resources the query will consume before it even begins execution. How do we do that? We trust, but also don't trust, the planner.&lt;/p&gt;

&lt;p&gt;A SQL query planner takes a parsed SQL statement and selects what it hopes is the most efficient series of steps to execute that query. To evaluate all the possible plans, the planner has to estimate the cost of each one. When you run &lt;code&gt;EXPLAIN&lt;/code&gt; on a SQL statement, Postgres's planner shows the cost of each step in the chosen plan, as well as the overall total cost. The cost is &lt;a href="https://www.postgresql.org/docs/current/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS" rel="noopener noreferrer"&gt;measured in dimensionless units and is based on configurable weights&lt;/a&gt; assigned to each step the plan will take. There are a lot of variables that go into the plan cost, most of which you can ignore for the purposes of understanding Traffic Control. Just remember these two things: plan costs are roughly linear (a plan with double the cost should take something like double the time and resources to execute), and the relationship between plan costs and real-world resources is heavily dependent on what query you're running, what server you run it on, and what else is happening on that server at the moment.&lt;/p&gt;

&lt;p&gt;Traffic Control compensates for those dependencies. We assume that there is an unknown constant k that we can multiply the plan cost by, to get the actual wall-clock time it will take to execute that query. But that constant is different for each &lt;a href="https://planetscale.com/blog/query-performance-analysis-with-insights" rel="noopener noreferrer"&gt;query pattern&lt;/a&gt; and for each host. The constant may also change over time as the workload mix on the server changes and as tables grow and change. So it's not exactly a constant!&lt;/p&gt;

&lt;p&gt;Traffic Control implements a hash table on each host, mapping query patterns to two averages: CPU time and planner cost estimates. Both are exponential moving averages, heavily weighting recent queries. Every time a query completes, we update both of those averages. The magical not-quite-constant k is the ratio of the two.&lt;/p&gt;

&lt;p&gt;Each time a query comes in, Traffic Control multiplies the planner's estimated cost by k to guess how much CPU and/or wall-clock time the query will take. Based on that estimate, Traffic Control decides if the query can be allowed to begin. If it is, then at the end of query execution, Traffic Control updates the two averages for that query pattern so the k value will be more recent and more precise for the next query that arrives.&lt;/p&gt;
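
&lt;p&gt;That feedback loop can be sketched as follows. The smoothing factor is an assumption for illustration; the real extension's constants and units are not published here:&lt;/p&gt;

```python
# Sketch of the per-pattern cost model: two exponential moving averages and
# their ratio k. The smoothing factor is an assumption, not the real constant.
ALPHA = 0.3  # weight given to the most recent query

class PatternStats:
    def __init__(self):
        self.avg_cpu = None        # EMA of measured CPU seconds
        self.avg_plan_cost = None  # EMA of planner cost estimates

    def record(self, plan_cost, cpu_secs):
        """Update both averages when a query of this pattern completes."""
        if self.avg_cpu is None:
            self.avg_cpu, self.avg_plan_cost = cpu_secs, plan_cost
        else:
            self.avg_cpu = ALPHA * cpu_secs + (1 - ALPHA) * self.avg_cpu
            self.avg_plan_cost = (ALPHA * plan_cost
                                  + (1 - ALPHA) * self.avg_plan_cost)

    def predict_cpu(self, plan_cost):
        """Multiply the planner's dimensionless estimate by k to predict
        real-world cost for the next query of this pattern."""
        if self.avg_plan_cost is None:
            return None  # no history yet for this pattern
        k = self.avg_cpu / self.avg_plan_cost
        return k * plan_cost
```

&lt;p&gt;Because both averages weight recent queries heavily, k tracks changes in the workload mix and in table sizes without ever being tuned by hand.&lt;/p&gt;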

&lt;h2&gt;Leaky buckets&lt;/h2&gt;

&lt;p&gt;Two of the checks that Traffic Control performs for each query are easy: if the query's estimated cost is too high, block it. If too many queries in the same budget are already running, block it. But the final check — whether there is enough capacity in the budget for the query to proceed — is harder. It's important, though! Many executions of a moderately expensive query can be even more damaging than a single very expensive query, and managing a budget over time is the best way to block queries that are only expensive in aggregate. Traffic Control considers the cumulative cost of queries in each configured budget.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxvzcgzczzcia7y1yrea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxvzcgzczzcia7y1yrea.png" alt="Traffic Control leaky bucket" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each budget is modeled as a reverse leaky bucket. Here's how that works. Each query that executes accumulates debt in the bucket. Any query that would cause the bucket to overflow with debt is blocked. Debt drains out over time, until the bucket is empty. The bucket has &lt;a href="https://planetscale.com/docs/postgres/traffic-control/concepts#resource-budget-limits" rel="noopener noreferrer"&gt;two important parameters&lt;/a&gt;: its size and its drain rate. The size dictates the burst limit, or what total resources queries under a given budget can use in a short amount of time. The drain rate dictates the server share, or what fraction of overall resources queries under a given budget can use in the long term.&lt;/p&gt;

&lt;p&gt;Traditionally, leaky buckets work the other way: they start out full, they fill (but never overflow) with credits at a configured rate, traffic consumes credits, and if a bucket is ever empty, traffic gets blocked. We inverted the model for a simple reason: an empty bucket doesn't need to be stored. Over time, we may need to store many buckets for changing rules and changing query metadata. We can drop buckets with a zero debt level, meaning that we only need to store recently active buckets, instead of every possible bucket. We store as many buckets as will fit in a configurable amount of shared memory, and we evict them implicitly when their debt falls to zero.&lt;/p&gt;

&lt;p&gt;There is no periodic task that drains debt from all buckets. Instead, each bucket is updated only when read. There is also no periodic task to evict buckets with a debt level of zero. Instead, adding a new bucket to the table evicts any that have already emptied, or whichever bucket is expected to become empty soonest.&lt;/p&gt;
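
&lt;p&gt;A reverse leaky bucket with lazy draining can be sketched like this. The structure and names are illustrative; the real buckets live in Postgres shared memory in C:&lt;/p&gt;

```python
import time

# Sketch of a reverse leaky bucket with lazy draining: queries add debt,
# debt drains at a fixed rate, and draining happens only on read, so no
# periodic task is needed. Names and the clock parameter are illustrative.
class DebtBucket:
    def __init__(self, size, drain_rate, now=time.monotonic):
        self.size = size              # burst limit
        self.drain_rate = drain_rate  # server share: debt drained per second
        self.debt = 0.0
        self.now = now
        self.last_read = now()

    def _drain(self):
        t = self.now()
        self.debt = max(0.0, self.debt - self.drain_rate * (t - self.last_read))
        self.last_read = t

    def try_charge(self, cost):
        """Admit a query of the given cost, or refuse if the added debt
        would overflow the bucket."""
        self._drain()
        if self.debt + cost > self.size:
            return False
        self.debt += cost
        return True

    def empty(self):
        """An empty bucket need not be stored and can be evicted."""
        self._drain()
        return self.debt == 0.0
```

&lt;p&gt;Because draining is computed from the elapsed time at each read, an idle bucket's debt reaches zero without any background work, which is what makes implicit eviction possible.&lt;/p&gt;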

&lt;h2&gt;Rule sets&lt;/h2&gt;

&lt;p&gt;One important goal for Traffic Control is that it can efficiently decide when not to block a query. After all, Traffic Control has to make that decision before each query is even allowed to begin execution. So the budget here is measured in microseconds. But we also want developers and database administrators to be able to configure as many rules as it takes to manage traffic to their application. So it's crucial that we can evaluate many rules quickly. Enter rule sets: a data structure that allows evaluating &lt;code&gt;n&lt;/code&gt; rules in &lt;code&gt;O(1)&lt;/code&gt; time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49yhtf64wyt54gfi0xjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49yhtf64wyt54gfi0xjx.png" alt="Traffic Control rule set" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each rule has the form &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt;, and it matches any query that has that same value for that same key. Matching is complicated a bit by the fact that the value can be an IP address with a CIDR mask.&lt;/p&gt;

&lt;p&gt;A rule set maps each &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt; pair to a rule. Now, when a query comes in with metadata like &lt;code&gt;username=postgres, app=commerce, controller=api&lt;/code&gt;, the rule set can quickly identify the rule for each of those pairs. Hence, for this query, there are just three lookups in the rule set, regardless of how many rules are configured.&lt;/p&gt;

&lt;p&gt;Note that a rule set only &lt;em&gt;identifies rules to consider&lt;/em&gt;. Each rule's budget is only checked if all its conditions match the query. A rule set is all about checking as few rules as possible. So, the sequence is: the rule set identifies a list of rules, that list is narrowed down to just the rules that actually match, and then the budgets for all the matching rules get checked to see if the query can proceed.&lt;/p&gt;
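
&lt;p&gt;That sequence can be sketched with a hash map from &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt; pairs to rules. The rule shape here is invented for illustration:&lt;/p&gt;

```python
from collections import defaultdict

# Sketch of a rule set: a hash map from (key, value) pairs to the rules that
# mention them, so finding candidate rules takes one lookup per metadata pair
# no matter how many rules are configured. The rule shape is invented.
class RuleSet:
    def __init__(self):
        self.index = defaultdict(list)  # (key, value) -> rules mentioning it

    def add(self, rule):
        for pair in rule["conditions"].items():
            self.index[pair].append(rule)

    def matching_rules(self, query_meta):
        # Identify candidates (one lookup per pair), then keep only rules
        # whose conditions ALL match; a conjunction rule can be identified
        # by a single pair but still fail the full check.
        candidates = {}
        for pair in query_meta.items():
            for rule in self.index.get(pair, []):
                candidates[id(rule)] = rule
        return [r for r in candidates.values()
                if r["conditions"].items() <= query_meta.items()]
```

&lt;p&gt;With three metadata pairs on a query, &lt;code&gt;matching_rules&lt;/code&gt; performs exactly three index lookups, regardless of how many rules are configured.&lt;/p&gt;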

&lt;p&gt;There are three exceptions to the &lt;code&gt;O(1)&lt;/code&gt; target for identifying rules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rules for the &lt;code&gt;remote_address&lt;/code&gt; key check for a match for each mask length. So if you have rules for ten different mask lengths, the rule set has to do as many as ten lookups to find the rule with the longest matching prefix.&lt;/li&gt;
&lt;li&gt;Any conjunction rule — that is, a rule with multiple &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt; pairs ANDed together — may be identified as a candidate for queries that match any one of the &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt; pairs in the rule. So if you have conjunction rules with overlapping &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt; pairs, the rule set may identify several or all of them as candidates for each query.&lt;/li&gt;
&lt;li&gt;It is possible to add multiple rules for the exact same &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt; pair. If you do that, any query with that exact &lt;code&gt;&amp;lt;key, value&amp;gt;&lt;/code&gt; pair will get checked against all of those rules.&lt;/li&gt;
&lt;/ol&gt;
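
&lt;p&gt;The first exception, per-mask-length lookup for &lt;code&gt;remote_address&lt;/code&gt; rules, can be sketched with Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module. The structure is illustrative, not the extension's internals:&lt;/p&gt;

```python
import ipaddress

# Sketch of the remote_address exception: rules are grouped by mask length,
# and lookup probes each configured length, longest prefix first.
class AddressRules:
    def __init__(self):
        self.by_masklen = {}  # mask length -> {network -> rule}

    def add(self, cidr, rule):
        net = ipaddress.ip_network(cidr)
        self.by_masklen.setdefault(net.prefixlen, {})[net] = rule

    def lookup(self, addr):
        ip = ipaddress.ip_address(addr)
        # As many lookups as there are distinct configured mask lengths.
        for masklen in sorted(self.by_masklen, reverse=True):
            net = ipaddress.ip_network(f"{ip}/{masklen}", strict=False)
            rule = self.by_masklen[masklen].get(net)
            if rule is not None:
                return rule
        return None
```

&lt;p&gt;With rules at ten different mask lengths, &lt;code&gt;lookup&lt;/code&gt; performs up to ten probes and returns the rule with the longest matching prefix.&lt;/p&gt;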

&lt;h2&gt;Applying new rules&lt;/h2&gt;

&lt;p&gt;Traffic Control is meant to be used both proactively and during incident response. For incident response, it's important that rules take effect quickly. And they do! Rules created or modified in the UI generally take effect at all database replicas in just 1-2 seconds. How?&lt;/p&gt;

&lt;p&gt;Rules and budgets are stored as objects in the PlanetScale app. Any change to Traffic Control rules made in the UI or the API gets stored as rows in the &lt;code&gt;planetscale&lt;/code&gt; database. Then it's serialized as JSON in the &lt;code&gt;traffic_control.rules&lt;/code&gt; and &lt;code&gt;traffic_control.budgets&lt;/code&gt; parameters for Postgres. Some Postgres parameters require restarting the server, but those two don't. So they cut the line and get sent immediately to &lt;code&gt;postgresql.conf&lt;/code&gt; files on each database replica. Postgres reads the new config, and each worker process parses it into a rule set as soon as it completes whatever query it's executing. The rule set is in place before the next query begins.&lt;/p&gt;

&lt;p&gt;One big advantage of using Postgres configuration files, rather than sending configuration over SQL connections, is robustness on a busy server. You may want new Traffic Control rules most urgently when Postgres is using 100% of its available CPU, 100% of its worker processes, or both. Changing config files is possible even when opening a new SQL connection and issuing statements wouldn't be.&lt;/p&gt;

&lt;h2&gt;Wrap up&lt;/h2&gt;

&lt;p&gt;Traffic Control uses the hooks and the performance measurements that Query Insights already implemented, then bolts on a system for sorting query traffic into budgets and warning or blocking queries that exceed those budgets. Each query can be warned or blocked if it's individually too expensive, if too many other queries are already running under the same budget, or if recent and concurrent queries under the same budget have consumed too many resources in the aggregate. Traffic Control implements a dynamic model per query pattern that leverages the existing Postgres planner to estimate the real-world cost of a query before it begins to execute. Leaky buckets impose limits on both traffic bursts and the long-term average fraction of server resources assigned to any individual budget.&lt;/p&gt;

&lt;p&gt;Taken as a whole, these elements implement Traffic Control, which gives developers and database administrators powerful new tools to identify, prioritize, and limit SQL traffic.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>planetscale</category>
      <category>database</category>
      <category>sql</category>
    </item>
    <item>
      <title>AI Fatigue Has Entered the Chat: How to Innovate Without Alienating Your Brand</title>
      <dc:creator>Meg528</dc:creator>
      <pubDate>Sun, 03 Nov 2024 17:24:04 +0000</pubDate>
      <link>https://forem.com/meg528/ai-fatigue-has-entered-the-chathow-to-innovate-without-alienating-your-brand-5pm</link>
      <guid>https://forem.com/meg528/ai-fatigue-has-entered-the-chathow-to-innovate-without-alienating-your-brand-5pm</guid>
      <description>&lt;p&gt;It wasn’t &lt;em&gt;that&lt;/em&gt; long ago that AI was something sensationalized mostly by high-budget movies like &lt;em&gt;The Matrix&lt;/em&gt;. In 2024, however, we’re not living in a mind-bending alternate reality, dodging bullets and Agent Smith. Instead, we’re using artificial intelligence to optimize our blogs for search engines, create lifelike videos without any human actors, and write code within seconds to power the next app to hit the marketplace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75pszkmc9os9o5lkfzee.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75pszkmc9os9o5lkfzee.jpg" alt="computer code" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my own day-to-day work, I’m exploring how to use AI to get more done, better — a less-than-frictionless transition since my background is in writing. (When AI blew up, writers feared that they’d be the first on the chopping block, and in some cases, they were.)&lt;/p&gt;

&lt;p&gt;Brands raced to jump on the AI bandwagon — some a little recklessly. A couple of years later, many are starting to feel the blowback: customers who want to talk to a human being, not an AI chatbot; readers who want human-written words, not AI-generated ones; search engines penalizing websites for page after page of low-quality content; and users struggling with the flood of AI-created content in both Google and social media news feeds.&lt;/p&gt;

&lt;p&gt;AI fatigue is here. What is it, what are the implications, and what can we do about it?&lt;/p&gt;

&lt;h2&gt;What is AI Fatigue?&lt;/h2&gt;

&lt;p&gt;The term “AI fatigue” refers to a general hesitation toward, lack of excitement for, or even suspicion or skepticism around using AI-driven technologies.&lt;/p&gt;

&lt;p&gt;I experienced this myself very recently, while using a healthcare provider’s AI chatbot. I had the option to answer a series of questions and potentially get the information I needed, or I could speak with a representative directly. I wasn’t in the most patient mood and immediately opted for the human route. Why? Simple. The last few times I engaged with AI chatbots were a complete bust. (To be clear, I believe that AI chatbots can work stupendously and have happily utilized them in other moments.)&lt;/p&gt;

&lt;p&gt;So, while organizations are racing to adopt AI solutions, customers might be whistling a different tune.&lt;/p&gt;

&lt;h2&gt;What Caused AI Fatigue?&lt;/h2&gt;

&lt;p&gt;The speed and intensity with which technology is progressing make it hard for some of us to keep up.&lt;/p&gt;

&lt;p&gt;Think of older generations trying to figure out Facebook. Now, add AI on top of that. And technology is only gaining momentum. Some estimates [1] say that computers’ speed and power have typically doubled every 1.5 to 2 years since the 1960s.&lt;/p&gt;

&lt;p&gt;The proof is in the pudding: About 90% of the world’s data [2] was generated within the last few years alone.&lt;/p&gt;

&lt;p&gt;AI technology shows no signs of slowing down, which means we have to hustle to keep up. And, put simply, many people are tired of trying to do that. We’re always working so hard to try to understand the next big thing that we barely have an opportunity to slow down and just… be.&lt;/p&gt;

&lt;p&gt;Interestingly, the Gartner Hype Cycle [3] for artificial intelligence reports that the hype around AI has far outweighed what the technology has actually delivered.&lt;/p&gt;

&lt;h2&gt;AI Gone Astray: When Technology Backfires&lt;/h2&gt;

&lt;p&gt;It feels like the widespread response from countless companies has been, “More AI!” And to be fair, in many cases, artificial intelligence has completely revolutionized the way some businesses are run and the experience they provide for their users.&lt;/p&gt;

&lt;p&gt;But not always.&lt;/p&gt;

&lt;p&gt;You might remember when CNET [4] was found to be publishing AI-generated articles in a less-than-transparent manner. The byline of these articles read “CNET Money Staff.” If you clicked on that byline, a popup appeared disclosing that the content was written by AI. To make matters worse, we then learned that more than half of these AI-generated articles contained significant errors and plagiarism.&lt;/p&gt;

&lt;p&gt;When CNET’s parent company, Red Ventures, went to sell it, the blemish on their reputation was a hurdle — although they did eventually sell it [5] for over $100 million. (Some sources say it was closer to $250 million [6].) This was after paying $500 million for it four years earlier.&lt;/p&gt;

&lt;p&gt;This is just one example of what can happen when we get greedy with AI. The ripple effect can be ghastly for both your bottom line and the people working to keep the lights on.&lt;/p&gt;

&lt;h2&gt;Where Do We Go From Here?&lt;/h2&gt;

&lt;p&gt;So, you now know what AI fatigue is. You’ve read some of the horror stories. Should you abandon AI completely? Absolutely not. For every AI failure, we can talk about many successes.&lt;/p&gt;

&lt;p&gt;Plus, this technology isn’t going anywhere. We have two choices: Embrace it, or get left behind.&lt;/p&gt;

&lt;p&gt;But here’s the key: Using it &lt;em&gt;intentionally&lt;/em&gt; is critical.&lt;/p&gt;

&lt;p&gt;What does this look like? Let’s go through some tips and examples.&lt;/p&gt;

&lt;h3&gt;1. Recognize That There’s a Time and a Place&lt;/h3&gt;

&lt;p&gt;The solution to all our woes is not to replace everything with AI. The approach should be much more purposeful.&lt;/p&gt;

&lt;p&gt;I spoke with Apoorva Joshi [7], Senior AI Developer Advocate at MongoDB, who said, “The future isn’t about AI replacing humans; it’s about humans and AI working together. The path to success lies in collaboration, where human creativity and intelligence are enhanced by AI’s ability to drive innovation and help solve complex problems.”&lt;/p&gt;

&lt;p&gt;As one example, when it comes to content production, AI can be an excellent complement, rather than a replacement. MongoDB’s Developer Center [8] is a valuable resource for developers around the globe. While these authors may use AI to formulate ideas, the content is written, reviewed, and fact-checked by humans. Why? Well, put simply, at the end of the day, these authors are responsible for the content. If something goes awry, “AI did it!” is no excuse.&lt;/p&gt;

&lt;p&gt;Plus, humans do it better.&lt;/p&gt;

&lt;h3&gt;2. Prioritize Quality Over Quantity&lt;/h3&gt;

&lt;p&gt;AI coding assistants have completely changed the way we build applications, speeding up the development time and taking away a lot of the heavy lifting. The same can be said for the use of AI in content production.&lt;/p&gt;

&lt;p&gt;Because the barrier to entry is now much lower, what we’ve seen is a &lt;em&gt;huge&lt;/em&gt; surge in the number of apps hitting the market, blogs on search engine results pages, and videos going live on YouTube. This would be a positive change if more of these apps, blogs, and videos were of a better quality. Instead, many of us find ourselves struggling to swim through a flood of junk.&lt;/p&gt;

&lt;p&gt;Take note: Producing more of something that’s low-quality doesn’t make it higher-quality. If you stop caring about creating amazing things and simply focus on creating more things, users will notice, and they will go somewhere else to find something better.&lt;/p&gt;

&lt;h3&gt;3. Keep the End Result in Mind&lt;/h3&gt;

&lt;p&gt;If you use AI in any capacity to build something, it doesn’t change what your ultimate goal should be:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Entertain your users.&lt;/li&gt;
&lt;li&gt;Educate your users.&lt;/li&gt;
&lt;li&gt;Solve a problem.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you can’t answer how your product/service does one or more of these things, your job isn’t done.&lt;/p&gt;

&lt;p&gt;AI can be used to create personalized songs that you can then gift to people you care about. We call that entertainment!&lt;/p&gt;

&lt;p&gt;Reliable AI chatbots empower users by delivering relevant help docs so that they don’t have to wait in long queues — an excellent way to educate users and help them help themselves.&lt;/p&gt;

&lt;p&gt;Vector search [9] allows users to find search results based not just on how well their keywords match but on the &lt;em&gt;meaning&lt;/em&gt; behind them. Better search results, faster? Problem solved.&lt;/p&gt;

&lt;h2&gt;An AI Reset: Moving Forward With Renewed Energy&lt;/h2&gt;

&lt;p&gt;AI fatigue doesn’t have to be permanent, but we do need to shift our approach.&lt;/p&gt;

&lt;p&gt;By this point, we’ve got at least a basic understanding of just how powerful AI is and what it’s capable of. We’ve tested it and applied it in countless ways. Some have been miraculous, and others have been disastrous.&lt;/p&gt;

&lt;p&gt;Next, we iterate!&lt;/p&gt;

&lt;p&gt;Using AI at the right time, under the right circumstances, and always for the betterment of users — consider this your north star, and you’ll never go wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.zippia.com/answers/is-technology-growing-exponentially/" rel="noopener noreferrer"&gt;https://www.zippia.com/answers/is-technology-growing-exponentially/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://leftronic.com/blog/how-fast-is-technology-growing-statistics" rel="noopener noreferrer"&gt;https://leftronic.com/blog/how-fast-is-technology-growing-statistics&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.gartner.com/en/documents/5505695" rel="noopener noreferrer"&gt;https://www.gartner.com/en/documents/5505695&lt;/a&gt;&lt;br&gt;
&lt;a href="https://futurism.com/cnet-for-sale-ai" rel="noopener noreferrer"&gt;https://futurism.com/cnet-for-sale-ai&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.charlotteobserver.com/news/business/article290791529.html" rel="noopener noreferrer"&gt;https://www.charlotteobserver.com/news/business/article290791529.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.axios.com/2024/08/06/cnet-ziff-davis-red-ventures#" rel="noopener noreferrer"&gt;https://www.axios.com/2024/08/06/cnet-ziff-davis-red-ventures#&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/apoorvajoshi95/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/apoorvajoshi95/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://mdb.link/towards-ai-dc" rel="noopener noreferrer"&gt;https://mdb.link/towards-ai-dc&lt;/a&gt;&lt;br&gt;
&lt;a href="https://mdb.link/vector-search-towards-ai" rel="noopener noreferrer"&gt;https://mdb.link/vector-search-towards-ai&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>womenintech</category>
      <category>contentwriting</category>
    </item>
  </channel>
</rss>
