<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ian Sosunov</title>
    <description>The latest articles on Forem by Ian Sosunov (@cxrtisxl).</description>
    <link>https://forem.com/cxrtisxl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2035466%2F57bcabed-74f9-4c82-b6fc-01be3803f8b1.JPG</url>
      <title>Forem: Ian Sosunov</title>
      <link>https://forem.com/cxrtisxl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cxrtisxl"/>
    <language>en</language>
    <item>
      <title>I made a ticket-centric AI swarm.</title>
      <dc:creator>Ian Sosunov</dc:creator>
      <pubDate>Sun, 22 Feb 2026 15:44:59 +0000</pubDate>
      <link>https://forem.com/cxrtisxl/i-made-a-ticket-centric-ai-swarm-32am</link>
      <guid>https://forem.com/cxrtisxl/i-made-a-ticket-centric-ai-swarm-32am</guid>
      <description>&lt;p&gt;Everyone was playing with OpenClaw recently. Me too. But soon I realized that OpenClaw is a black box with more than 400K lines of code. Then I struggled to set up agents to communicate with each other - it was unpredictable and unmanageable. I realized that a custom agentic runtime is what I really need. A system where agents will be able to communicate with each other. Brute-forcing the solution, I first came up with an HTTP Router connected to nanobot agents. It allowed them to use API calls to send messages to one another. Well, it was shut down pretty quickly when I realized that without proper context management, this system just explodes. Agents have been chatting constantly, piling their context on top of each other. They were unable to filter or shrink it, and, of course, they were unable to cooperate to solve any task, even a slightly complex one.&lt;/p&gt;

&lt;p&gt;In the shower after a long day, I was thinking about what I should really do to solve this issue. I needed something lightweight, powerful, and customizable. Remembering my own experience with managing teams, the idea hit me. Tickets! When I needed something specific from someone on a team, I just created a ticket! It might be an issue on GitHub or a sub-conversation in Slack. There, I always described exactly what I needed and from whom.&lt;/p&gt;

&lt;p&gt;Getting back to the laptop, I started drafting what became &lt;a href="https://h1v3.io" rel="noopener noreferrer"&gt;h1v3&lt;/a&gt; (Hive). A custom runtime written (vibecoded) in Golang. One binary, many Agents - each running as a goroutine. Same as OpenClaw - dedicated workdir, memory, tools, and skills - but smarter. Living as goroutines, my agents became interconnected via the Registry - a central message broker.&lt;/p&gt;

&lt;p&gt;Each new conversation, whether with a user or another Agent, becomes a new Ticket. Tickets are nested, and each encapsulates everything needed to solve a specific piece of the job.&lt;/p&gt;

&lt;p&gt;That was the solution! Now Agents have only what they really need to act. Everything in a single binary file, in a single Docker container.&lt;/p&gt;

&lt;p&gt;Going further, I built h1v3 Monitor - an app that shows what is actually happening under the hood. Agents, tickets, their context, and each tool call - everything becomes transparent and easy to debug.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e5p3emgyg28eveyzhyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e5p3emgyg28eveyzhyz.png" alt="h1v3 Monitor" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/cxrtisxl/h1v3" rel="noopener noreferrer"&gt;h1v3 is opensourced&lt;/a&gt; and it’s still just taking off. If you ever wanted to run a swarm of agents - join as a contributor or a tester, join your thoughts on the approach, and let's make the future agentic!&lt;/p&gt;

&lt;p&gt;This article was written by a human. &lt;a href="https://x.com/cxrtisxl" rel="noopener noreferrer"&gt;@cxrtisxl&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>agents</category>
    </item>
    <item>
      <title>Latency Wars: The Architecture Of A Real-Time Trading Game</title>
      <dc:creator>Ian Sosunov</dc:creator>
      <pubDate>Mon, 08 Sep 2025 14:53:55 +0000</pubDate>
      <link>https://forem.com/cxrtisxl/latency-wars-the-architecture-of-a-real-time-trading-game-4amd</link>
      <guid>https://forem.com/cxrtisxl/latency-wars-the-architecture-of-a-real-time-trading-game-4amd</guid>
      <description>&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;Last year, I challenged myself to develop a game. This article outlines my vision for the architecture of a real-time trading game, where timing is crucial.&lt;/p&gt;

&lt;p&gt;Wolf Street is a game about paper trading. The MVP was a simple 30-second binary option, with a grand plan to upgrade it to “real” paper trading after the launch.&lt;/p&gt;

&lt;p&gt;For those unfamiliar with these financial derivatives, you can think of an option as a bet. The trader, or “player,” is simply trying to predict the price movement within a given time interval.&lt;/p&gt;

&lt;p&gt;Here, I would like to convince you that you should not play binary options with real money. Most online platforms will just scam you.&lt;/p&gt;

&lt;p&gt;Our idea was entirely different — we wanted to create a safe and honest space for trading without risking any real money. That’s how it should work: sponsors create branded trading tournaments, players participate in them with virtual tokens (which can’t be purchased with real money but can be acquired by playing the game), climb the leaderboard, and win real prizes.&lt;/p&gt;

&lt;p&gt;Here’s how the game looked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty8tp7gfqabqa10n0joh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty8tp7gfqabqa10n0joh.png" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, the implementation of this idea couldn’t be simple — to ensure the fairness of the game, we had to work with the BTC price in real-time, and the delays between players’ actions and their execution had to be minimal, since we were dealing with a binary option.&lt;/p&gt;

&lt;p&gt;Let’s ask the right questions and build the architecture together!&lt;/p&gt;

&lt;h3&gt;
  
  
  Designing the Architecture
&lt;/h3&gt;

&lt;p&gt;The market data will be streamed from &lt;a href="http://polygon.io" rel="noopener noreferrer"&gt;polygon.io&lt;/a&gt;. All trades should be handled by the Game Engine, so in the simplest form, the architecture looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feku4054luzi1mv2som92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feku4054luzi1mv2som92.png" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It might work for a POC, but there are a lot of issues if we’re talking about a production-quality game.&lt;/p&gt;

&lt;p&gt;First of all, the &lt;strong&gt;Game Engine&lt;/strong&gt; is responsible for everything: processing market data feeds, serving this data to clients, validating users’ actions, and handling trades. What could go wrong? Everything. Too many client connections, &lt;strong&gt;Polygon&lt;/strong&gt; errors, game errors, and so on. The worst thing here is that any of the errors could potentially break everything else. Also, adding new game features will turn the codebase into a nightmare. Let’s split the &lt;strong&gt;Game Engine&lt;/strong&gt; into several services, each serving its own purpose. Additionally, let’s add a database, as our current service doesn’t save any game progress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20sufwxkggwwigei895p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20sufwxkggwwigei895p.png" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the price data originates from &lt;strong&gt;Polygon&lt;/strong&gt;. &lt;strong&gt;Market Feed&lt;/strong&gt; processes it, builds candles, and sends them to the &lt;strong&gt;Trade Engine&lt;/strong&gt; and &lt;strong&gt;Clients&lt;/strong&gt; via WebSocket. The &lt;strong&gt;Core&lt;/strong&gt; service handles all non-trading-related activities, such as account management. We store user data and all the trades in a &lt;strong&gt;Postgres&lt;/strong&gt; database.&lt;/p&gt;

&lt;p&gt;When a player makes a trade, a request is sent to the &lt;strong&gt;Trade Engine&lt;/strong&gt;. Since &lt;strong&gt;Market Feed&lt;/strong&gt; streams the current asset price to it, the &lt;strong&gt;Trade Engine&lt;/strong&gt; can handle the trade properly.&lt;/p&gt;

&lt;p&gt;Yet, we are far from the complete system. Let’s examine the current flow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that I have omitted some requests not directly related to the topic of handling a deal, such as those made during the service setup stage or those related to retrieving user account data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4je0xmsxpiri16ikgymi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4je0xmsxpiri16ikgymi.png" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See the issue? How should the client be notified that the trade has been closed and the balance has been updated? Long polling (periodic HTTP requests) is a potential solution here, but is it the best one? We already have live price updates provided by &lt;strong&gt;Market Feed&lt;/strong&gt;, and now we want the same kind of updates for user balance and trades.&lt;/p&gt;

&lt;p&gt;Additionally, it’s crucial to consider high loads. What will happen if thousands or tens of thousands of clients connect to &lt;strong&gt;Market Feed&lt;/strong&gt;? Will its performance degrade? &lt;strong&gt;Market Feed&lt;/strong&gt; has an extremely important mission: to feed the system with live price data - not only the clients, but also the heart of the game, the &lt;strong&gt;Trade Engine&lt;/strong&gt;. Managing &lt;strong&gt;Clients’&lt;/strong&gt; WebSocket connections is definitely outside the scope of this microservice, and any issues with handling them might cause bugs in price data processing.&lt;/p&gt;

&lt;p&gt;Another thing to consider: what happens if we update and redeploy the &lt;strong&gt;Market Feed&lt;/strong&gt; service while the game is running? It will result in the loss of historical price data. Do you remember the screenshot from the game? We have 2 minutes of price history on the chart. Restarting &lt;strong&gt;Market Feed&lt;/strong&gt; will break the chart UI for 2 minutes while fresh data accumulates to repopulate the history.&lt;/p&gt;

&lt;p&gt;Let’s solve all of this by adding Redis to cache the &lt;strong&gt;Market Feed&lt;/strong&gt; price history, and by adding a new microservice to handle WebSocket connections and serve all real-time updates to &lt;strong&gt;Clients&lt;/strong&gt;. Also, I’d add an API microservice simply to create a unified API endpoint for all HTTP REST requests. We could also use it to proxy WS connections, but in the actual project, we decided to separate the WS and HTTP services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlf40eh6yo5puexecv3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlf40eh6yo5puexecv3y.png" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that we now split our services into two groups: internal (in blue) and external, which includes &lt;strong&gt;Polygon&lt;/strong&gt; and everything that the end user can access.&lt;/p&gt;

&lt;p&gt;Let’s break down what’s happening here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Market Feed&lt;/strong&gt;
Processes the price stream from &lt;strong&gt;Polygon&lt;/strong&gt;, saves the data to &lt;strong&gt;Redis&lt;/strong&gt;, and serves it via WebSocket to &lt;strong&gt;Trade Engine&lt;/strong&gt; and &lt;strong&gt;WS Notifier&lt;/strong&gt;. That’s how these services always have the latest price.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trade Engine&lt;/strong&gt;
Processes trade requests from the &lt;strong&gt;API&lt;/strong&gt;. Gets live price data via WebSocket from &lt;strong&gt;Market Feed&lt;/strong&gt;. Stores trade data in &lt;strong&gt;Postgres&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Core&lt;/strong&gt;
Processes requests from the &lt;strong&gt;API&lt;/strong&gt; related to the user’s account. Stores user data in &lt;strong&gt;Postgres&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Postgres&lt;/strong&gt;
Where user and trade data are stored. It uses the Postgres NOTIFY mechanism to feed data updates to the &lt;strong&gt;WS Notifier&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt;
Stores price history from &lt;strong&gt;Market Feed&lt;/strong&gt;, with direct read access from the &lt;strong&gt;WS Notifier&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;External&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API&lt;/strong&gt;
A unified entry point for all HTTP REST API requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WS Notifier&lt;/strong&gt;
Responsible for serving &lt;strong&gt;Clients&lt;/strong&gt; live updates on price and user data (balance). It provides price history from &lt;strong&gt;Redis&lt;/strong&gt; on a &lt;strong&gt;Client&lt;/strong&gt;’s first subscription and proxies live price updates from &lt;strong&gt;Market Feed&lt;/strong&gt;. Additionally, the service is subscribed to Postgres for trade and user data updates and delivers them to the appropriate &lt;strong&gt;Clients&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let’s take a look at the updated interaction diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwua6v427g9bob3nrbejl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwua6v427g9bob3nrbejl.png" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Facing Latencies
&lt;/h3&gt;

&lt;p&gt;It looks like everything is just perfect now, but while trying to play my own game, I encountered a problem that ruined the entire experience! In options trading, timing is critical. Especially when we’re talking about 30-second options — the price is incredibly volatile on low timeframes.&lt;/p&gt;

&lt;p&gt;So, playing the game in Southeast Asia with my game server in Frankfurt EU, I got a second or even two seconds of latency between clicking a button to open the trade and its actual execution. It was incredibly frustrating to lose deals solely because of this, and it happened frequently. You click “UP” expecting the price to go up, observe the spinner for 2 seconds or so while the price graph spikes, and then it shows you that you opened your deal at a totally different, much higher price than you wanted. A little dip and you lose. If you had opened the deal at the original price, you would have won. However, the game’s technical limitations simply did not allow it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqby2xrens3r6wcvbf26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqby2xrens3r6wcvbf26.png" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Making it Geo-Distributed
&lt;/h3&gt;

&lt;p&gt;Let’s build a system that will allow players to open deals almost instantly. We need to fix the request marked orange on the previous interaction diagram. To achieve it, we need to bring game servers closer to the players. First, we need to decide which services should be geo-distributed. Obviously, &lt;strong&gt;Trade Engine&lt;/strong&gt;. We decided to leave &lt;strong&gt;Core&lt;/strong&gt; in just one replica in the EU, as it was primarily used during game loading and didn’t contribute significantly to latencies.&lt;/p&gt;

&lt;p&gt;If we move the &lt;strong&gt;Trade Engine&lt;/strong&gt;, we also need to move the &lt;strong&gt;API&lt;/strong&gt;. There’s just zero sense in bringing the &lt;strong&gt;Trade Engine&lt;/strong&gt; closer to the end user if it is accessible only through an &lt;strong&gt;API&lt;/strong&gt; that is not in the same location. Otherwise, it will cause even more latency, with requests ping-ponging all over the world.&lt;/p&gt;

&lt;p&gt;Now we have &lt;strong&gt;Trade Engine&lt;/strong&gt; and &lt;strong&gt;API&lt;/strong&gt; located closer to the end user. All that remains is to geo-distribute &lt;strong&gt;WS Notifier&lt;/strong&gt;. Although it simply delivers data from the EU, we chose this for several reasons. The geo-distributed &lt;strong&gt;WS Notifier&lt;/strong&gt; ensures that the client gets the price closest to the one the &lt;strong&gt;Trade Engine&lt;/strong&gt; sees. It also allows for better horizontal scaling based on regional loads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trade Engine&lt;/strong&gt; and &lt;strong&gt;WS Notifier&lt;/strong&gt; both get price data from &lt;strong&gt;Market Feed&lt;/strong&gt;, which is located in the EU. Now they both have the same base delay from the EU to their region. When a client connects to &lt;strong&gt;WS Notifier&lt;/strong&gt;, price data divergence from the regional &lt;strong&gt;Trade Engine&lt;/strong&gt; is caused only by client-to-regional-&lt;strong&gt;WS Notifier&lt;/strong&gt; latency, not client-to-EU-&lt;strong&gt;WS Notifier&lt;/strong&gt; latency.&lt;/p&gt;

&lt;p&gt;Now take a look at the previous schema again. There is a “save trade data” step in the &lt;strong&gt;Trade Engine&lt;/strong&gt; before returning trade data to the &lt;strong&gt;Client&lt;/strong&gt;. This step requires the &lt;strong&gt;Trade Engine&lt;/strong&gt; to make a request to &lt;strong&gt;Postgres&lt;/strong&gt; in the EU, which is a very expensive action in terms of latency.&lt;/p&gt;

&lt;p&gt;We need one more optimization — become optimistic! By this, I mean we can consider the deal open even before all the checks have passed and before the data is stored in the DB. That allows us to return the deal’s timestamp and price almost instantly, at the cost of only the request delay between the &lt;strong&gt;Client&lt;/strong&gt; and the &lt;strong&gt;Trade Engine&lt;/strong&gt;. You could argue that it’s not safe, since the checks or the database write might fail, but in practice this happens only if someone tries to cheat the game, and we shouldn’t prioritize a smooth UI for these individuals. In that case, the deal will be marked as active on the client, but it will fail the checks and won’t be executed on the backend. For those who play honestly, the frontend takes care of all validations, such as having only one active deal at a time, so all of their deals will be valid.&lt;/p&gt;

&lt;p&gt;What about closing the deals? As you may recall, in our current system, the &lt;strong&gt;Trade Engine&lt;/strong&gt; closes the deal and updates the data in &lt;strong&gt;Postgres&lt;/strong&gt;, which notifies the &lt;strong&gt;WS Notifier&lt;/strong&gt;, which then sends the update to the appropriate &lt;strong&gt;Client&lt;/strong&gt;. Since &lt;strong&gt;Postgres&lt;/strong&gt; might be located far from the &lt;strong&gt;Client&lt;/strong&gt;, this may take some time, but it’s actually not a problem at all. The problem was opening the deal at the right time; after that, the &lt;strong&gt;Trade Engine&lt;/strong&gt; closes it in 30 seconds, and the client can wait 1–2 seconds (or so) more to get the result.&lt;/p&gt;

&lt;p&gt;We run the 30-second timer on a &lt;strong&gt;Client&lt;/strong&gt; adjusted for the deal opening time received from the &lt;strong&gt;Trade Engine&lt;/strong&gt;. After it expires, we simply show a spinner, demonstrating that the deal was closed but we’re waiting for the result. Potentially, it could be optimized — the &lt;strong&gt;Trade Engine&lt;/strong&gt; might send a notification to the &lt;strong&gt;WS Notifier&lt;/strong&gt; once the deal is closed, before updating the state in &lt;strong&gt;Postgres&lt;/strong&gt;. However, in the real game, it wasn’t a significant issue at all.&lt;/p&gt;

&lt;p&gt;Let’s update the architecture schema and the interaction diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtlr0lq9n5lw467zkejq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtlr0lq9n5lw467zkejq.png" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kj044vj8i7grg9h04ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kj044vj8i7grg9h04ui.png" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I added colors to the arrows to illustrate client communication latency. Blue arrows represent low latency within a single region, while red arrows indicate higher latency due to requests crossing regional boundaries.&lt;/p&gt;

&lt;p&gt;We addressed the issue, allowing deals to open almost instantly!&lt;/p&gt;

&lt;p&gt;In this configuration, the game was published and performed extremely well even under high loads during tournaments.&lt;/p&gt;

&lt;p&gt;Feel free to give it a try in Telegram: &lt;a href="http://t.me/WolfStreetGameBot" rel="noopener noreferrer"&gt;t.me/WolfStreetGameBot&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;/p&gt;

&lt;p&gt;I hope this article has offered valuable insights to help you approach project architecture with greater confidence and clarity. For more in-depth articles on tech and business, follow me on Medium and X.&lt;/p&gt;

&lt;p&gt;X: &lt;a href="http://x.com/cxrtisxl" rel="noopener noreferrer"&gt;@cxrtisxl&lt;/a&gt;&lt;br&gt;&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/ian-sosunov/" rel="noopener noreferrer"&gt;Ian Sosunov&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Medium: &lt;a href="https://medium.com/u/3d75c7c255" rel="noopener noreferrer"&gt;Ian Sosunov&lt;/a&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>architecture</category>
      <category>gamedev</category>
      <category>latency</category>
    </item>
    <item>
      <title>TLSNotary ― Flow Overview</title>
      <dc:creator>Ian Sosunov</dc:creator>
      <pubDate>Mon, 23 Jun 2025 20:28:17 +0000</pubDate>
      <link>https://forem.com/cxrtisxl/tlsnotary-flow-overview-17o0</link>
      <guid>https://forem.com/cxrtisxl/tlsnotary-flow-overview-17o0</guid>
      <description>&lt;p&gt;Working on &lt;a href="https://x.com/zkBring" rel="noopener noreferrer"&gt;Bring ID&lt;/a&gt; I've dived into the &lt;a href="https://x.com/tlsnotary" rel="noopener noreferrer"&gt;TLSNotary&lt;/a&gt; protocol. This tiny article is a compilation of what could be found in &lt;a href="https://tlsnotary.github.io/tlsn/tlsn_core/index.html" rel="noopener noreferrer"&gt;Rust Crate docs&lt;/a&gt; made to help you faster understand the core concepts and the flow of what happens after the MPC-TLS part. It is assumed that you have read the general &lt;a href="https://tlsnotary.org/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; and understand what Prover, Notary and Verifier are.&lt;/p&gt;

&lt;h3&gt;
  
  
  Glossary
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transcript&lt;/strong&gt;
The plaintext of all application data communicated between the Prover and the Server.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Attestation&lt;/strong&gt;&lt;br&gt;
A cryptographically &lt;u&gt;signed document issued by a Notary&lt;/u&gt; who witnessed a TLS connection. It contains various fields which can be used to verify statements about the connection and the associated application data.&lt;/p&gt;

&lt;p&gt;Attestations are comprised of two parts: a Header and a Body.&lt;/p&gt;

&lt;p&gt;The header is the data structure which is signed by a Notary. It contains a unique identifier, the protocol version, and a Merkle root of the body fields.&lt;/p&gt;

&lt;p&gt;The body contains the fields of the attestation. These fields include data which can be used to verify aspects of a TLS connection, such as the server’s identity, and facts about the transcript.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Extension&lt;/strong&gt;&lt;br&gt;
An attestation may be extended using &lt;strong&gt;Extension&lt;/strong&gt; fields included in the body. Extensions may be used to implement application specific functionality.&lt;/p&gt;

&lt;p&gt;A &lt;u&gt;Prover may append extensions to their attestation request&lt;/u&gt;, provided that the Notary supports them. A Notary may also be configured to validate any extensions requested by a Prover using custom application logic. Additionally, a &lt;u&gt;Notary may include their own extensions&lt;/u&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Presentation&lt;/strong&gt;&lt;br&gt;
A proof of an attestation from a Notary along with additional selectively disclosed information about the TLS connection such as the server’s identity and the application data communicated with the server.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;A presentation is self-contained&lt;/u&gt; and can be verified by a Verifier without needing access to external data. The Verifier need only check that the key used to sign the attestation, referred to as a VerifyingKey, is from a Notary they trust.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Flow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The MPC-TLS protocol produces commitments to the entire &lt;strong&gt;Transcript&lt;/strong&gt; of application data.&lt;/li&gt;
&lt;li&gt;The Prover has the opportunity to slice and dice the commitments into smaller sections which can be selectively disclosed. Additionally, the Prover may want to use different commitment schemes depending on the context they expect to disclose.&lt;/li&gt;
&lt;li&gt;The Prover makes an attestation Request, through which it can configure some details of the &lt;strong&gt;Attestation&lt;/strong&gt;, such as which cryptographic algorithms are used. The Prover may also request &lt;strong&gt;Extensions&lt;/strong&gt; to be added to the &lt;strong&gt;Attestation&lt;/strong&gt;.
Upon being issued an &lt;strong&gt;Attestation&lt;/strong&gt;, the Prover also holds a corresponding Secrets object, which contains all private information.&lt;/li&gt;
&lt;li&gt;Upon receiving a request, the Notary can issue an &lt;strong&gt;Attestation&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The Prover uses the &lt;strong&gt;Attestation&lt;/strong&gt; and the corresponding Secrets to construct a verifiable &lt;strong&gt;Presentation&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The Verifier checks the verifying key and verifies the &lt;strong&gt;Presentation&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;— Ian (&lt;a href="https://x.com/cxrtisxl" rel="noopener noreferrer"&gt;@cxrtisxl&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>cryptography</category>
      <category>blockchain</category>
      <category>privacy</category>
      <category>tlsn</category>
    </item>
  </channel>
</rss>
