<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Peter Y</title>
    <description>The latest articles on Forem by Peter Y (@flashpeter7).</description>
    <link>https://forem.com/flashpeter7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3920688%2F9531832a-10c8-45e8-a7d6-1829d58f5c42.png</url>
      <title>Forem: Peter Y</title>
      <link>https://forem.com/flashpeter7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/flashpeter7"/>
    <language>en</language>
    <item>
      <title>How We Generate 100+ Product Feeds From 300k SKUs Without Hitting the Database</title>
      <dc:creator>Peter Y</dc:creator>
      <pubDate>Fri, 15 May 2026 11:24:17 +0000</pubDate>
      <link>https://forem.com/flashpeter7/how-we-generate-100-product-feeds-from-300k-skus-without-hitting-the-database-4lem</link>
      <guid>https://forem.com/flashpeter7/how-we-generate-100-product-feeds-from-300k-skus-without-hitting-the-database-4lem</guid>
      <description>&lt;p&gt;Generating product feeds (Google Shopping, Facebook, marketplaces) is a boring problem until your catalog has 300,000 SKUs. Then it becomes a nightmare.&lt;/p&gt;

&lt;p&gt;The naive approach — load each product from PrestaShop, compute its price, check availability, format the output — hits the database with ~80 queries per product. Multiply that by 300k products, add network latency to a clustered database, and you're looking at hours of generation time. Per feed. And we have over a hundred feeds: 10 different feed types × 4 languages × 3 shops.&lt;/p&gt;

&lt;p&gt;We tried the "proper" engineering approach first. It failed. Then I built something dumb and fast that actually works.&lt;/p&gt;

&lt;h2&gt;Why Feeds Are Hard in PrestaShop&lt;/h2&gt;

&lt;p&gt;You can't just dump product data with SQL queries. I mean, you technically can, but you'll regret it.&lt;/p&gt;

&lt;p&gt;PrestaShop computes a lot of things at runtime. Product price depends on specific price rules, group discounts, cart rules, tax rules, country settings, and a dozen admin toggles. Availability depends on stock management mode, pack stock type, combination stock, and warehouse config. Even something simple like "product name" goes through language layers and shop context.&lt;/p&gt;

&lt;p&gt;Reproducing all of that in raw SQL means reverse-engineering PrestaShop's entire business logic layer. You might pull it off, but the result will be fragile, it won't respect admin settings, and it will break on every PrestaShop update.&lt;/p&gt;

&lt;p&gt;So you're stuck loading products through PrestaShop's own objects. And that means PHP, and that means queries. Lots of them.&lt;/p&gt;

&lt;h2&gt;The "Proper" Solution That Failed&lt;/h2&gt;

&lt;p&gt;We hired a specialist to build a proper feed generation pipeline. His architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Message broker for events&lt;/li&gt;
&lt;li&gt;Enrichment service on Symfony&lt;/li&gt;
&lt;li&gt;Separate PrestaShop instance for data hydration&lt;/li&gt;
&lt;li&gt;Event-driven pipeline with object serialization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Five months later: nothing shipped. Not one working feed. The architecture was theoretically sound but practically impossible to finish. We let him go.&lt;/p&gt;

&lt;h2&gt;The Insight&lt;/h2&gt;

&lt;p&gt;Here's what I realized. We already have a process that loads every product through PrestaShop's full business logic: our product update cron.&lt;/p&gt;

&lt;p&gt;In our setup, product changes arrive from an external source via a Redis queue. A cron job runs every 30 seconds, picks up SKUs that changed, and performs a full product update in PrestaShop. After the update, all the expensive stuff is already computed — prices, stock, categories, attributes, descriptions, tax rules. It's all sitting right there in PHP memory.&lt;/p&gt;

&lt;p&gt;What if, at that exact moment, we just... grabbed it?&lt;/p&gt;

&lt;p&gt;And it gets better. At that same point in the code we're already updating the Elasticsearch search index for the storefront — because the data is the same. So we're already building a product object for search. Adding a second write for feed data is almost free.&lt;/p&gt;

&lt;h2&gt;The Solution: Capture Once, Feed Forever&lt;/h2&gt;

&lt;p&gt;During the product update cycle, after all business logic has executed, we collect every field that any feed might ever need into a single associative array. Price, stock, description, categories, attributes, images, EAN, weight, shipping — everything.&lt;/p&gt;

&lt;p&gt;Then we:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;json_encode&lt;/code&gt; it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gzencode&lt;/code&gt; it (compress)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;base64_encode&lt;/code&gt; it (for safe storage)&lt;/li&gt;
&lt;li&gt;Write it to Elasticsearch as one document per SKU&lt;/li&gt;
&lt;/ol&gt;
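&lt;p&gt;As a sketch (the field names and the Elasticsearch write itself are illustrative, not our production code), the capture and read-back steps look like this:&lt;/p&gt;

```php
// Capture step: one compressed snapshot per SKU.
// Field names ('payload', 'updated_at', 'indexed') are illustrative.
function buildFeedDocument(array $product): array
{
    $payload = base64_encode(gzencode(json_encode($product), 6));
    return [
        'payload'    => $payload, // the full universal product object
        'updated_at' => time(),
        'indexed'    => false,
    ];
}

// Read-back during feed generation: reverse the three steps.
function readFeedDocument(array $doc): array
{
    return json_decode(gzdecode(base64_decode($doc['payload'])), true);
}
```

&lt;p&gt;Compression level 6 is a reasonable speed/size trade-off here; product JSON is highly repetitive, so it compresses well.&lt;/p&gt;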

&lt;p&gt;The Elasticsearch index is dead simple: SKU as the ID, compressed payload, a timestamp, and an "indexed" flag. One document per product, ~4KB compressed. With index overhead, the whole thing for 300k products comes to about 4GB on disk.&lt;/p&gt;

&lt;p&gt;No extra database queries. No separate pipeline. The data piggybacks on a process that was already running.&lt;/p&gt;

&lt;h2&gt;Feed Generation: Just Scroll and Write&lt;/h2&gt;

&lt;p&gt;When it's time to generate feeds, the process is trivial:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open an Elasticsearch scroll query over the entire index&lt;/li&gt;
&lt;li&gt;For each document: decompress → json_decode → you have all product data&lt;/li&gt;
&lt;li&gt;Write to whatever format the feed needs (CSV, XML, JSON)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key trick: we write all languages and all shops in a single pass. One scroll through 300k documents, and we're writing to all output files simultaneously. No need to iterate the catalog multiple times.&lt;/p&gt;
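&lt;p&gt;A minimal sketch of that single pass (the document iterable stands in for an Elasticsearch scroll; the writers and file handles are illustrative):&lt;/p&gt;

```php
// One scroll, every output file at once. $documents stands in for an
// Elasticsearch scroll; $writers maps feed name to a row formatter and
// $handles maps feed name to an open file handle.
function generateFeeds(iterable $documents, array $writers, array $handles): void
{
    foreach ($documents as $doc) {
        // Decompress once per product...
        $product = json_decode(gzdecode(base64_decode($doc['payload'])), true);
        // ...then fan out to every output file in the same iteration.
        foreach ($writers as $feed => $formatRow) {
            fwrite($handles[$feed], $formatRow($product) . "\n");
        }
    }
}
```

&lt;p&gt;With one writer per feed type × language × shop, a single pass through 300k documents feeds every output file at once.&lt;/p&gt;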

&lt;h3&gt;Numbers&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;216,000 active SKUs&lt;/li&gt;
&lt;li&gt;One feed (all languages, all shops): &lt;strong&gt;6 minutes&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;All feeds combined: &lt;strong&gt;~35 minutes&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Write speed: ~500 SKUs/second&lt;/li&gt;
&lt;li&gt;Database load during feed generation: &lt;strong&gt;zero&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For comparison: the naive approach (load each product via PrestaShop objects) would need ~80 queries per SKU. That's 24 million queries for one feed. On a clustered database with network latency — we're talking hours per feed.&lt;/p&gt;

&lt;h2&gt;Adding New Feeds Takes Minutes&lt;/h2&gt;

&lt;p&gt;This is the part I'm most happy with.&lt;/p&gt;

&lt;p&gt;The compressed JSON in Elasticsearch contains a superset of all fields any feed could need. When a manager says "we need a new feed for marketplace X," I don't build a new data pipeline. I just:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Look at what fields the marketplace requires&lt;/li&gt;
&lt;li&gt;Check if they're in the universal object (they usually are)&lt;/li&gt;
&lt;li&gt;Write a simple transformer: read field A, format it as column B&lt;/li&gt;
&lt;li&gt;Add it to the feed generation cron&lt;/li&gt;
&lt;/ol&gt;
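&lt;p&gt;A transformer is typically just a field-mapping function. Here's a hypothetical one for a "marketplace X" feed; the universal-object field names are examples, not our real schema:&lt;/p&gt;

```php
// Hypothetical transformer for "marketplace X": read fields from the
// universal snapshot, emit one output row. Field names are examples only.
function marketplaceXRow(array $product): array
{
    return [
        'id'           => $product['sku'],
        'title'        => $product['name'],
        'price'        => number_format($product['price_tax_incl'], 2, '.', ''),
        'availability' => $product['quantity'] > 0 ? 'in stock' : 'out of stock',
    ];
}
```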

&lt;p&gt;A new feed type takes maybe an hour. Most of that is reading the marketplace's spec.&lt;/p&gt;

&lt;p&gt;If a required field isn't in the universal object yet — I add it in the product update step, wait for one full update cycle, and it's available everywhere.&lt;/p&gt;

&lt;h2&gt;Why This Works (And What It's Actually Called)&lt;/h2&gt;

&lt;p&gt;After building this, I looked up whether the approach has a name. Turns out it's a combination of two well-known patterns — I just didn't know that when I built it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Materialized View&lt;/strong&gt; — a precomputed, stored result of a complex query, optimized for reading. That's exactly what our compressed JSON in Elasticsearch is: a materialized view of PrestaShop's business logic output. The classic version lives in a database (PostgreSQL has them built-in, for example). Ours lives in Elasticsearch because the "query" isn't SQL — it's the result of PHP-level computations that can't be expressed in SQL at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-Carried State Transfer&lt;/strong&gt; — a pattern from event-driven architecture where instead of telling consumers "something changed, go fetch the data yourself," you send the full state along with the event. That's exactly what we do: when a product updates, we don't just flag it for later processing. We capture the complete product snapshot right there and store it. Feed generators never need to go back to the source.&lt;/p&gt;

&lt;p&gt;The twist is that both patterns are usually discussed in the context of microservices and distributed systems. Nobody talks about applying them inside a PHP monolith to solve a feed generation problem. But that's what works.&lt;/p&gt;

&lt;p&gt;The insight isn't architectural theory. It's practical: &lt;strong&gt;don't go get data when you can grab it while it's already in your hands.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Trade-offs&lt;/h2&gt;

&lt;p&gt;It's not perfect.&lt;/p&gt;

&lt;p&gt;Feed data is only as fresh as the last product update cycle. If a product updates at 10:00 and feeds generate at 10:30, there's a 30-minute gap. For our B2B use case this is fine. For a flash-sale store it might not be.&lt;/p&gt;

&lt;p&gt;The universal object can get bloated. Ours has maybe 40-50 fields per product. Not terrible, but it needs occasional cleanup.&lt;/p&gt;

&lt;p&gt;You need the product update pipeline to begin with. If your products are edited manually in PrestaShop admin — this approach doesn't apply directly. You'd need to hook into PrestaShop's save events instead.&lt;/p&gt;

&lt;h2&gt;TL;DR&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Don't generate feeds by loading products from the database. At scale, it's impossibly slow.&lt;/li&gt;
&lt;li&gt;Don't build a separate data pipeline. It's months of work and probably won't ship.&lt;/li&gt;
&lt;li&gt;Capture product data during your existing update process, when all business logic has already executed.&lt;/li&gt;
&lt;li&gt;Store it compressed in Elasticsearch. One document per SKU, ~4KB each.&lt;/li&gt;
&lt;li&gt;Generate feeds by scrolling Elasticsearch and writing files. Zero database load, 500 SKUs/second.&lt;/li&gt;
&lt;li&gt;New feed types take an hour to add, not weeks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes the best architecture is no architecture. Just write stuff down when you already have it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Tags: prestashop, ecommerce, elasticsearch, performance&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>database</category>
      <category>performance</category>
      <category>php</category>
    </item>
    <item>
      <title>PrestaShop Behind a Load Balancer: What Breaks and How to Fix It</title>
      <dc:creator>Peter Y</dc:creator>
      <pubDate>Fri, 08 May 2026 20:23:27 +0000</pubDate>
      <link>https://forem.com/flashpeter7/prestashop-behind-a-load-balancer-what-breaks-and-how-to-fix-it-50cp</link>
      <guid>https://forem.com/flashpeter7/prestashop-behind-a-load-balancer-what-breaks-and-how-to-fix-it-50cp</guid>
      <description>

&lt;p&gt;PrestaShop on a single server is fine. PrestaShop behind a load balancer with auto-scaling nodes — that's where the fun begins. The framework was never built for this, and it shows.&lt;/p&gt;

&lt;p&gt;I've been running a B2B store on PrestaShop for several years now. ~300k SKUs, hundreds of orders a day, auto-scaling on GCP. Here's what went wrong and how we dealt with it.&lt;/p&gt;

&lt;h2&gt;800 Queries Per Product Page. Yes, Really.&lt;/h2&gt;

&lt;p&gt;A single product page in PrestaShop fires about 800 SQL queries. A category listing — around 2,000. Not a bug. That's just PrestaShop being PrestaShop.&lt;/p&gt;

&lt;p&gt;On one server you stick Redis in front of it and forget about it. On multiple nodes — not so simple.&lt;/p&gt;

&lt;h2&gt;Cache Desync Between Nodes&lt;/h2&gt;

&lt;p&gt;PrestaShop caches query results locally. Redis, Memcached, whatever — it's per-node. So each node builds its own cache.&lt;/p&gt;

&lt;p&gt;What happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node A caches a product at 10:00&lt;/li&gt;
&lt;li&gt;Node B caches it at 10:02&lt;/li&gt;
&lt;li&gt;Admin updates the price at 10:05&lt;/li&gt;
&lt;li&gt;Both nodes keep serving stale data with different prices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Obvious idea: shared Redis. Problem: network latency on every hit. At 800 queries per page, even 1ms extra per query gives you almost a second of overhead. Your "fast cache" is now slower than just hitting the database.&lt;/p&gt;

&lt;h2&gt;How We Actually Solved It&lt;/h2&gt;

&lt;p&gt;We didn't fight the architecture. We went around it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sticky sessions via Cloudflare.&lt;/strong&gt; Visitor lands on a node, stays on that node for the session. No desync from the user's point of view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local Redis on each node, TTL 10 minutes.&lt;/strong&gt; Fast, simple, no network hop. We talked to management: "product descriptions might be 10 minutes behind." They said fine. For a B2B catalog that doesn't change every minute, this is a non-issue.&lt;/p&gt;
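&lt;p&gt;The read path is plain cache-aside with a hard TTL, no invalidation logic at all. A sketch, assuming a phpredis-style client:&lt;/p&gt;

```php
// Cache-aside with a hard 10-minute TTL. $redis is any client exposing
// phpredis-style get()/setex(); $runQuery executes the SQL on a miss.
function cachedQuery($redis, string $sql, callable $runQuery, int $ttl = 600)
{
    $key = 'sqlcache:' . sha1($sql);
    $hit = $redis->get($key);
    if ($hit !== false) {
        return unserialize($hit); // warm: no database, no network hop
    }
    $rows = $runQuery($sql);
    $redis->setex($key, $ttl, serialize($rows));
    return $rows;
}
```

&lt;p&gt;Worst case, a node serves a value up to ten minutes old — exactly the staleness window management signed off on.&lt;/p&gt;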

&lt;p&gt;&lt;strong&gt;Table blacklist.&lt;/strong&gt; This is the part that actually matters. We built a list of tables that must never be cached — any SQL query touching these tables goes straight to the database.&lt;/p&gt;

&lt;p&gt;The blacklist breaks down like this:&lt;/p&gt;

&lt;p&gt;Carts and sessions — &lt;code&gt;cart&lt;/code&gt;, &lt;code&gt;cart_product&lt;/code&gt;, &lt;code&gt;customer&lt;/code&gt;, &lt;code&gt;guest&lt;/code&gt;, &lt;code&gt;address&lt;/code&gt;. Changes on every click.&lt;/p&gt;

&lt;p&gt;Orders and payments — &lt;code&gt;orders&lt;/code&gt;, &lt;code&gt;order_detail&lt;/code&gt;, &lt;code&gt;order_history&lt;/code&gt;, payment gateway tables. You don't want stale financial data. Ever.&lt;/p&gt;

&lt;p&gt;Stock — &lt;code&gt;stock_available&lt;/code&gt;. Showing wrong availability is worse than a slow page.&lt;/p&gt;

&lt;p&gt;Analytics — &lt;code&gt;connections&lt;/code&gt;, &lt;code&gt;page_viewed&lt;/code&gt;, &lt;code&gt;log&lt;/code&gt;. Write-heavy, would trash the cache constantly anyway.&lt;/p&gt;

&lt;p&gt;Third-party modules — anything that deals with orders, wishlists, promotions, discounts. Every time you install a new module, you check its tables and add them to the blacklist if needed. This never ends.&lt;/p&gt;

&lt;p&gt;Configuration — &lt;code&gt;configuration&lt;/code&gt;, &lt;code&gt;configuration_lang&lt;/code&gt;. Admin changes a setting — it needs to work immediately.&lt;/p&gt;

&lt;p&gt;The rule is dead simple: if the table changes more often than once per 10 minutes, it goes on the blacklist.&lt;/p&gt;
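&lt;p&gt;The gate itself is a few lines per query. A sketch with an abbreviated table list (the real one is much longer):&lt;/p&gt;

```php
// Cache gate: queries touching volatile tables bypass Redis entirely.
// Table list abbreviated; $prefix is the PrestaShop table prefix.
function isCacheable(string $sql, string $prefix = 'ps_'): bool
{
    static $blacklist = [
        'cart', 'cart_product', 'customer', 'orders', 'order_detail',
        'stock_available', 'configuration', 'connections',
    ];
    foreach ($blacklist as $table) {
        if (preg_match('/\b' . preg_quote($prefix . $table, '/') . '\b/i', $sql)) {
            return false; // volatile table: go straight to the database
        }
    }
    return true;
}
```

&lt;p&gt;Matching on the raw SQL string is crude, but it fails safe: an over-eager match just means one more database query.&lt;/p&gt;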

&lt;h3&gt;What It Gave Us&lt;/h3&gt;

&lt;p&gt;Second hit on a warm cache:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product page: 800 → 200–300 queries (about 65% less)&lt;/li&gt;
&lt;li&gt;Category listing: 2,000 → 600 queries (about 70% less)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No distributed cache. No invalidation logic. No latency overhead. Just sticky sessions, local Redis, and a blacklist.&lt;/p&gt;

&lt;h2&gt;Deploying Code When There's No Single Server&lt;/h2&gt;

&lt;p&gt;On one box it's &lt;code&gt;git pull &amp;amp;&amp;amp; clear cache&lt;/code&gt;. With N nodes the questions pile up. How does code get everywhere? What about shared files? What about PrestaShop's class cache?&lt;/p&gt;

&lt;h3&gt;Shared Files on NFS&lt;/h3&gt;

&lt;p&gt;Some stuff has to be shared — images, documents, uploads. An admin goes into backoffice, uploads a banner — that image lands on one node. Others don't see it. So we mount certain directories from NFS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product images and module media (banners, sliders, brand logos)&lt;/li&gt;
&lt;li&gt;Documents, downloads, invoices&lt;/li&gt;
&lt;li&gt;Upload directory&lt;/li&gt;
&lt;li&gt;Logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else — PHP, vendor, compiled assets — lives on each node's local disk. You do not want to serve PHP from NFS. Trust me.&lt;/p&gt;

&lt;h3&gt;The Deploy Machine&lt;/h3&gt;

&lt;p&gt;We have a dedicated deploy box. It shares the NFS mount with web nodes. Deploy looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SSH into deploy machine&lt;/li&gt;
&lt;li&gt;&lt;code&gt;git pull&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm run build&lt;/code&gt; (compiles Bootstrap and friends)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rsync&lt;/code&gt; to all web nodes, excluding NFS directories&lt;/li&gt;
&lt;li&gt;Cache dirs get wiped during rsync — nodes rebuild class maps on next request&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The NFS directories are symlinked on each node — &lt;code&gt;/img&lt;/code&gt; points to NFS, everything else is local. Rsync skips the symlinked paths.&lt;/p&gt;

&lt;p&gt;What this gives you: code hits all nodes within seconds, PHP serves from local disk (fast), shared files are always in sync, and cache resets automatically. Good enough.&lt;/p&gt;

&lt;h2&gt;Modules: The Worst Part&lt;/h2&gt;

&lt;p&gt;This is where PrestaShop really punishes you for having infrastructure.&lt;/p&gt;

&lt;p&gt;When you click "Install" on a module, it can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drop files into &lt;code&gt;/override/&lt;/code&gt;, overriding core classes&lt;/li&gt;
&lt;li&gt;Run arbitrary SQL — create tables, alter schema, insert rows&lt;/li&gt;
&lt;li&gt;Copy assets all over the filesystem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No migrations. No rollback. It's a side-effect machine.&lt;/p&gt;

&lt;p&gt;On multiple nodes this is a nightmare. You can't just commit the module and deploy, because its install hooks still have to run. And you can't just install it from the admin panel on prod: the installer runs on whichever node handles the request, so its file changes land on that node only.&lt;/p&gt;

&lt;h3&gt;Module Surgery&lt;/h3&gt;

&lt;p&gt;Every marketplace module goes through prep before it gets anywhere near production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strip the overrides.&lt;/strong&gt; Modules come with an &lt;code&gt;/override/&lt;/code&gt; folder. We empty it. A developer manually reviews what the module wanted to override and merges it into the project's own &lt;code&gt;/override/&lt;/code&gt; directory by hand. Resolves conflicts, makes it a normal commit. Reviewable, reversible.&lt;/p&gt;

&lt;p&gt;Why? Because if two modules try to override the same core method, you get a silent race condition. On a single server you might not notice. On multiple nodes with different cache states — good luck debugging that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review the install() method.&lt;/strong&gt; Read what SQL it runs. Know what it'll do to your database before it does it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy the code first.&lt;/strong&gt; Commit the cleaned module, &lt;code&gt;git pull&lt;/code&gt; on deploy machine, rsync to all nodes. At this point every node has the module files but the module isn't installed yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then hit Install in the admin panel.&lt;/strong&gt; Since we stripped the overrides, the installer only does its database work — creates tables, registers hooks. Doesn't matter which node handles the request, the DB is shared and the code is already everywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rsync once more.&lt;/strong&gt; PrestaShop rebuilds its class index after install. One more rsync pushes that to all nodes.&lt;/p&gt;

&lt;p&gt;Tedious? Yes. But it works every time. And on production I'll take boring and reliable over clever and fragile.&lt;/p&gt;

&lt;h2&gt;Why Not Just Ditch PrestaShop?&lt;/h2&gt;

&lt;p&gt;Fair question after all of the above.&lt;/p&gt;

&lt;p&gt;Because a €100 module from the marketplace replaces months of custom development. Mollie integration is free. Need gift cards? Wishlists? Complex discount rules? Someone already built it, battle-tested it on real stores, and sells it for the price of a dinner.&lt;/p&gt;

&lt;p&gt;Try getting that on a custom headless build. You'll burn through your budget before you ship anything.&lt;/p&gt;

&lt;p&gt;PrestaShop's value was never its architecture — the architecture is showing its age. The value is the ecosystem. For a small or mid-size business, solving commerce problems with off-the-shelf modules at these prices is hard to beat.&lt;/p&gt;

&lt;p&gt;Yes, you pay for it with operational complexity when you scale. Everything in this article is that tax. But you're handling it with a small team and a sane budget, while companies on "enterprise" platforms throw 10x the money at consultants and ship slower.&lt;/p&gt;

&lt;p&gt;That's the trade-off. If your team can handle the rough edges — and now you have a playbook — PrestaShop still makes a lot of sense.&lt;/p&gt;

&lt;h2&gt;TL;DR&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;PrestaShop on multiple nodes breaks in three places: cache, deployment, and modules.&lt;/li&gt;
&lt;li&gt;Cache: sticky sessions + local Redis + table blacklist. 800 queries → 300. No distributed cache needed.&lt;/li&gt;
&lt;li&gt;Deploy: dedicated deploy machine, rsync to nodes, NFS for shared files only. PHP always on local disk.&lt;/li&gt;
&lt;li&gt;Modules: strip overrides, review SQL, deploy code first, install via admin second. Every time. No shortcuts.&lt;/li&gt;
&lt;li&gt;PrestaShop's ecosystem is worth the operational pain — if you know how to manage it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Struggling with a similar setup? Tell me.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Tags: prestashop, ecommerce, devops, scaling&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>devops</category>
      <category>performance</category>
      <category>php</category>
    </item>
  </channel>
</rss>
