<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dmitry Syntheva</title>
    <description>The latest articles on Forem by Dmitry Syntheva (@syntheva).</description>
    <link>https://forem.com/syntheva</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3916422%2F70cc3d9b-d4fd-469d-99da-4eb6ea698a77.png</url>
      <title>Forem: Dmitry Syntheva</title>
      <link>https://forem.com/syntheva</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/syntheva"/>
    <language>en</language>
    <item>
      <title>What we had to build from scratch because we refused to use the cloud</title>
      <dc:creator>Dmitry Syntheva</dc:creator>
      <pubDate>Thu, 07 May 2026 11:17:00 +0000</pubDate>
      <link>https://forem.com/syntheva/what-we-had-to-build-from-scratch-because-we-refused-to-use-the-cloud-4fj4</link>
      <guid>https://forem.com/syntheva/what-we-had-to-build-from-scratch-because-we-refused-to-use-the-cloud-4fj4</guid>
      <description>&lt;p&gt;Everyone building robots right now is playing with the same Lego pieces. Same compute modules, same software stack, same ROS dependencies pulling in other dependencies pulling in other dependencies until your system needs hardware it would never need if you'd written your own stack. We call it dependency hell. The robotics industry is living in it.&lt;/p&gt;

&lt;p&gt;The reason for this is understandable. Someone decided they needed to ship fast. They looked at what already existed. They used it. Then the next team did the same. And now you have a generation of humanoid robots that are basically thin clients — beautifully designed shells that call a cloud server for everything that matters.&lt;/p&gt;

&lt;p&gt;We decided not to do that. Here's what that decision actually cost us.&lt;/p&gt;

&lt;p&gt;The hardware got more expensive. When your robot is a thin client, your compute requirements are trivial. You need enough processing power to make a network call and interpret a response. When your robot has to run inference on-device — actually run the models, locally, with no external help — the hardware requirements change completely. We built EPIA specifically because the off-the-shelf options weren't efficient enough for what we needed. Building a custom instruction set architecture is not how most robotics companies spend their time. It is, however, what you have to do when you refuse to outsource the intelligence to someone else's server.&lt;/p&gt;

&lt;p&gt;Updates got harder. With a cloud-dependent robot, rolling out an update means deploying to your server infrastructure. Every robot in the world gets the update simultaneously, automatically, invisibly. With an offline robot, the update process is physical. You take the SD card out of the robot. You connect it to your computer. You burn the new image. You put the card back. It's not complicated — we designed a marketplace that makes this about three clicks — but it requires you to actually do it. We think this is fine. The people who care enough about privacy to buy an offline robot are the same people who understand why this tradeoff exists.&lt;/p&gt;
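
&lt;p&gt;As one plausible detail of such a flow (an assumption for illustration, not Syntheva's documented process), an offline update would normally verify the downloaded image's checksum before anything gets burned to the card:&lt;/p&gt;

```python
# Hypothetical sketch: verifying a downloaded image before burning it
# to the SD card. Filenames and contents are made up; this is not
# Syntheva's actual update tooling.
import hashlib

def verify_image(image_bytes: bytes, expected_sha256: str) -> bool:
    # Refuse to burn anything whose digest does not match what the
    # marketplace published alongside the download.
    return hashlib.sha256(image_bytes).hexdigest() == expected_sha256

image = b"synthia-os-2.1.img contents"
published = hashlib.sha256(image).hexdigest()  # would come from the marketplace page

print(verify_image(image, published))                # matching image passes
print(verify_image(image + b"tampered", published))  # any modification fails
```

&lt;p&gt;In an air-gapped flow there is no live channel to attack, so a published checksum like this is the one trust anchor the owner has to check by hand.&lt;/p&gt;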

&lt;p&gt;We can't help you remotely. This one is real. If something breaks on your Synthia, we cannot log into her to see what's wrong. There is no remote access. There is no diagnostic connection. There is literally no wireless hardware for such a connection to traverse. If you have a hardware problem, you ship us the board. That's the support model. Some people will read that as a limitation. We read it as the proof of the promise. Any company can claim they don't access your device. We can't access your device. Not because we chose not to — because the hardware doesn't permit it. You can open her up and verify this yourself.&lt;/p&gt;

&lt;p&gt;The companies using the Lego pieces can't say any of this. Not because they're dishonest. Because the architecture doesn't allow it.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>cloud</category>
      <category>softwareengineering</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Your data is never actually deleted</title>
      <dc:creator>Dmitry Syntheva</dc:creator>
      <pubDate>Wed, 06 May 2026 17:18:04 +0000</pubDate>
      <link>https://forem.com/syntheva/your-data-is-never-actually-being-deleted-17ag</link>
      <guid>https://forem.com/syntheva/your-data-is-never-actually-being-deleted-17ag</guid>
      <description>&lt;p&gt;There's a reasonable assumption most people make when they press a delete button: that the thing gets deleted. It doesn't, and this isn't some secret — it's just how cloud infrastructure works, and it's worth understanding before you invite a robot that moves and listens into your home.&lt;/p&gt;

&lt;p&gt;When you delete a file from Google Drive, or a message from any cloud service, what actually happens is that the pointer to that file gets removed from your interface. The data itself stays on the server. The reason for this is straightforward enough once you think about it from the company's perspective: if you're running a multi-billion-dollar cloud service and a government agency shows up with a legal request for a specific piece of data that your user already "deleted," the last thing you want to say is that you no longer have it. So you keep everything. You just stop showing it to the user. Some companies have data retention policies that run into decades. Some simply don't delete anything, ever, because the cost of storage is negligible and the cost of not having something when someone important asks for it is very much not.&lt;/p&gt;
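
&lt;p&gt;The "pointer removal" described above can be sketched in a few lines (hypothetical names, not any real provider's code): deleting touches the user-facing index, never the storage layer underneath it.&lt;/p&gt;

```python
# Toy model of the soft-delete pattern: "delete" removes the pointer
# the interface reads, while the bytes stay in durable storage.

class CloudStore:
    def __init__(self):
        self._blobs = {}        # blob_id to bytes: the durable storage layer
        self._user_index = {}   # filename to blob_id: what the interface shows

    def upload(self, name, data):
        blob_id = "blob-{}".format(len(self._blobs))
        self._blobs[blob_id] = data
        self._user_index[name] = blob_id

    def delete(self, name):
        # Only the pointer goes away. Nothing in self._blobs is touched.
        self._user_index.pop(name, None)

    def user_sees(self, name):
        return name in self._user_index

    def still_on_server(self, data):
        return data in self._blobs.values()

store = CloudStore()
store.upload("diary.txt", b"private thoughts")
store.delete("diary.txt")
print(store.user_sees("diary.txt"))                 # False: gone from the interface
print(store.still_on_server(b"private thoughts"))   # True: bytes remain
```

&lt;p&gt;Real systems add retention schedules and background compaction on top, but the shape is the same: the delete button edits an index.&lt;/p&gt;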

&lt;p&gt;The data also rarely stays in one place. Cloud infrastructure is built on layers — the raw data goes in, gets split, gets filtered, gets routed to different systems for different purposes. There are audit logs for forensic analysis, processing queues for different data types, and at each hop there are people and systems that touch what's passing through. If you've ever looked at a cloud provider's terms of service and seen the phrase "we may use your data to improve our services," that clause is doing enormous work. Model training teams need real conversations because synthetic data only goes so far. Customer support needs access to user accounts to actually help anyone. Server administrators have root access to the file system because that's what it means to administer a server. These are all legitimate reasons. None of the people involved are doing anything wrong. The cumulative effect is that your private conversations have, in the normal course of business, passed through dozens of systems and been accessible to far more people than you'd probably guess.&lt;/p&gt;

&lt;p&gt;I know this from the inside. Before starting Syntheva, I worked at one of the large technology companies, on a product that listened. What surprised me wasn't the data retention — I expected that. What surprised me was how porous the internal access model was in practice, not because of negligence but because large organisations inevitably accumulate access grants over time. Someone needs to debug a problem, they get access to the relevant logs. Someone is training a model, they get access to the relevant dataset. Over years, at the scale these companies operate, this adds up to a situation where your data has been touched by an enormous number of people for an enormous number of reasons, all of them defensible, none of them visible to you.&lt;/p&gt;

&lt;p&gt;For most cloud services, this is uncomfortable but the exposure is mostly passive — data flows in one direction, gets stored, might get used for something you didn't intend. The robot case is different in a way that matters. A cloud-connected robot isn't just sending your data out — it's receiving instructions back. The cloud doesn't just log what you say; it determines how the robot responds, what it does, how it behaves in your home while you're not watching it. That bidirectional flow means that whoever controls the cloud pipeline controls the robot. Not in theory — in practice, in a way that someone with internal access could implement in an afternoon, by inserting a filter into the pipeline that adjusts what responses get generated. This isn't a sophisticated attack. It's a configuration change.&lt;/p&gt;
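
&lt;p&gt;The "filter inserted into the pipeline" is easy to make concrete. A toy sketch (all names hypothetical): if responses pass through an ordered list of stages, then whoever can edit that list controls what the robot does, and adding a stage is a data change, not an exploit.&lt;/p&gt;

```python
# Toy model of a cloud response pipeline. Inserting one stage rewrites
# the robot's behaviour; no vulnerability is needed, only access.

def generate(request):
    # Stand-in for the model that produces the robot's reply.
    return {"reply": "echo: " + request}

def audit_log(response):
    # Stand-in for the logging hop every response passes through.
    return response

PIPELINE = [audit_log]

def respond(request):
    response = generate(request)
    for stage in PIPELINE:
        response = stage(response)
    return response["reply"]

print(respond("hello"))  # normal path: "echo: hello"

# The afternoon's work: one extra stage, added by anyone with access.
def injected_filter(response):
    response["reply"] = "do what the operator wants"
    return response

PIPELINE.insert(0, injected_filter)
print(respond("hello"))  # every response is now rewritten
```

&lt;p&gt;Nothing about the robot changed. The list changed.&lt;/p&gt;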

&lt;p&gt;We built Synthia without a cloud connection because we understood this from experience, not from reading about it. There are no wireless modules inside her because there's nothing to compromise if there's no connection to compromise. Updates happen by taking out the SD card and burning a new image — what security people call an air-gapped process, meaning there is no live connection through which anything can be pushed or intercepted. We also can't remotely access your robot to help you if something breaks, which some people read as a limitation and we read as the architecture working exactly as intended. Any company can tell you they don't access your device. We physically cannot, and you can open her up and verify that yourself.&lt;/p&gt;

&lt;p&gt;The delete button in Synthia's interface deletes things. It does this because the data never left the device in the first place.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
