<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ReductStore</title>
    <description>The latest articles on Forem by ReductStore (@reductstore).</description>
    <link>https://forem.com/reductstore</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5829%2Fa0981feb-b10f-44b2-93aa-286aeaf3866d.jpg</url>
      <title>Forem: ReductStore</title>
      <link>https://forem.com/reductstore</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/reductstore"/>
    <language>en</language>
    <item>
      <title>Air-Gapped Drone Data Operations with Delayed Sync and Auditability</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/air-gapped-drone-data-operations-with-delayed-sync-and-auditability-55ne</link>
      <guid>https://forem.com/reductstore/air-gapped-drone-data-operations-with-delayed-sync-and-auditability-55ne</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9aq3u4rhjg760xmpz7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9aq3u4rhjg760xmpz7u.png" alt="Architecture for Air-Gapped Drone Data" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drones in air-gapped environments produce a &lt;strong&gt;lot&lt;/strong&gt; of data (camera images, telemetry, logs, model outputs). Storing this data reliably on each drone and syncing it to a ground station later can be hard. &lt;strong&gt;ReductStore&lt;/strong&gt; makes this easier: it's a lightweight time-series object store that works offline and replicates data when a connection is available.&lt;/p&gt;

&lt;p&gt;This guide explains a simple setup where each drone stores data locally with labels, replicates records to a ground station based on what it detects, and keeps a clear audit trail of what was captured and replicated.&lt;/p&gt;

&lt;p&gt;What we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#drone-to-ground-architecture" rel="noopener noreferrer"&gt;&lt;strong&gt;Drone-to-Ground Architecture&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-the-drone-node" rel="noopener noreferrer"&gt;&lt;strong&gt;Setting Up the Drone Node&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#storing-drone-data-with-labels" rel="noopener noreferrer"&gt;&lt;strong&gt;Storing Drone Data with Labels&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-selective-replication" rel="noopener noreferrer"&gt;&lt;strong&gt;Setting Up Selective Replication&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#querying-for-audit-reports" rel="noopener noreferrer"&gt;&lt;strong&gt;Querying for Audit Reports&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#why-this-setup-works-well-for-drones" rel="noopener noreferrer"&gt;&lt;strong&gt;Why This Setup Works Well for Drones&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Drone-to-Ground Architecture&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#drone-to-ground-architecture" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The architecture has three main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Each drone runs a small ReductStore server&lt;/strong&gt; to save images and telemetry locally on disk (this lets the drone operate fully offline).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A ground station runs a ReductStore instance&lt;/strong&gt; that receives replicated data for analysis and archiving.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReductStore replication tasks&lt;/strong&gt; copy data from drone to ground based on labels and conditions (e.g., only records flagged as anomalies, plus context around them).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7h80aubzv31qrce5xw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7h80aubzv31qrce5xw5.png" alt="Drone Workflow" width="800" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each drone pushes its data to the ground station whenever it is connected. If the network drops, replication resumes when the drone reconnects. This approach provides offline capability, lets you decide which data to replicate, and keeps a clear record of what happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up the Drone Node&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-the-drone-node" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Start by running ReductStore on the drone's companion computer. Here is a minimal &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  reductstore:
    image: reduct/store:latest
    ports:
      - &lt;span class="s2"&gt;"8383:8383"&lt;/span&gt;
    environment:
      RS_API_TOKEN: &amp;lt;DRONE_TOKEN&amp;gt;
      RS_BUCKET_1_NAME: mission-data
      RS_BUCKET_1_QUOTA_TYPE: FIFO
      RS_BUCKET_1_QUOTA_SIZE: 10000000000 &lt;span class="c"&gt;# 10 GB&lt;/span&gt;
    volumes:
      - ./data:/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a ReductStore server with a &lt;code&gt;mission-data&lt;/code&gt; bucket that uses FIFO retention. Old data is deleted only when the 10 GB limit is reached, so the drone always keeps as much history as possible.&lt;/p&gt;

&lt;p&gt;The FIFO quota is volume-based, not time-based: records are deleted only when the bucket reaches its size limit, not after a fixed time period. This matters for drones that may sit idle between missions.&lt;/p&gt;
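&lt;p&gt;To pick a sensible quota, it helps to estimate how much mission history fits in it. Here is a rough back-of-the-envelope sketch; the frame size and data rates are illustrative assumptions, not measurements:&lt;/p&gt;

```python
# Rough estimate of how much history a 10 GB FIFO bucket holds.
# Frame size and data rates below are illustrative assumptions.
QUOTA_BYTES = 10_000_000_000        # matches RS_BUCKET_1_QUOTA_SIZE above

frame_bytes = 500_000               # ~500 KB per JPEG frame (assumed)
frames_per_second = 5               # camera write rate (assumed)
telemetry_bytes_per_second = 2_000  # CSV telemetry batches (assumed)

bytes_per_second = frame_bytes * frames_per_second + telemetry_bytes_per_second
hours_of_history = QUOTA_BYTES / bytes_per_second / 3600

print(f"about {hours_of_history:.1f} hours of history before FIFO eviction")
```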

&lt;p&gt;If you prefer Snap instead of Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;reductstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That starts a ReductStore server on port &lt;code&gt;8383&lt;/code&gt; by default. You can then create the bucket using the &lt;strong&gt;&lt;a href="https://github.com/reductstore/reduct-cli" rel="noopener noreferrer"&gt;Reduct CLI&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add drone &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:8383 &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;DRONE_TOKEN&amp;gt;"&lt;/span&gt;
reduct-cli bucket create drone/mission-data &lt;span class="nt"&gt;--quota-type&lt;/span&gt; FIFO &lt;span class="nt"&gt;--quota-size&lt;/span&gt; 10GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Storing Drone Data with Labels&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#storing-drone-data-with-labels" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Use labels to tag every record with mission context. This is what makes selective replication and auditing possible later. Here is an example using the Python SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8383&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;DRONE_TOKEN&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mission-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Read a camera frame
&lt;/span&gt;        &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;frame.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;checksum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sha256&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1_000_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# microseconds
&lt;/span&gt;
        &lt;span class="c1"&gt;# Write with mission labels
&lt;/span&gt;        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mission_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m-2026-02-24-01&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;platform_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;uav-07&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anomaly&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;confidence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.95&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;checksum&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;checksum&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image/jpeg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Write telemetry as a CSV batch
&lt;/span&gt;        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts,lat,lon,alt,speed&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1708771200000000,47.3769,8.5417,450.2,12.5&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1708771201000000,47.3770,8.5418,451.0,12.8&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;telemetry&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;csv_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mission_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m-2026-02-24-01&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;platform_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;uav-07&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anomaly&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;checksum&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sha256&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;csv_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;anomaly&lt;/code&gt; label is important: it lets the replication task decide what to sync based on what the drone actually sees. For example, if the drone detects something unusual (an object, a warning, a low confidence score), it sets &lt;code&gt;anomaly=true&lt;/code&gt;. The replication task can then automatically sync that record — plus the context around it.&lt;/p&gt;
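&lt;p&gt;The anomaly decision itself comes from your onboard logic. As a minimal sketch, here is one way to turn a detection result into the labels written above; the detection dict shape and the 0.5 confidence threshold are assumptions for illustration:&lt;/p&gt;

```python
# Turn an onboard detection result into the "anomaly" label.
# The detection dict shape and the 0.5 threshold are illustrative assumptions.
def anomaly_labels(detection: dict) -> dict:
    unexpected = detection["class"] != "expected"
    low_confidence = 0.5 > detection["confidence"]
    return {
        "anomaly": "true" if unexpected or low_confidence else "false",
        "confidence": str(detection["confidence"]),
    }

labels = anomaly_labels({"class": "unknown_object", "confidence": 0.91})
print(labels)
```

&lt;p&gt;The resulting dict can be passed directly as the &lt;code&gt;labels&lt;/code&gt; argument of &lt;code&gt;bucket.write()&lt;/code&gt;.&lt;/p&gt;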

&lt;p&gt;The &lt;code&gt;checksum&lt;/code&gt; label gives you a simple way to verify data integrity during audits.&lt;/p&gt;
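&lt;p&gt;Verification is then just a matter of recomputing the hash over the payload and comparing it with the stored label. A minimal sketch:&lt;/p&gt;

```python
import hashlib

# Recompute the SHA-256 of a record payload and compare it
# with the "checksum" label stored at write time.
def verify_checksum(payload: bytes, labels: dict) -> bool:
    return hashlib.sha256(payload).hexdigest() == labels.get("checksum")

payload = b"example frame bytes"
labels = {"checksum": hashlib.sha256(payload).hexdigest()}
print(verify_checksum(payload, labels))
```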

&lt;h2&gt;
  
  
  Setting Up Selective Replication&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-selective-replication" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once the drone connects to a trusted network, replication sends only the relevant records to the ground station. The simplest approach is to replicate based on a label, for example only records where the drone detected an anomaly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add drone &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:8383 &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;DRONE_TOKEN&amp;gt;"&lt;/span&gt;

reduct-cli replica create drone/mission-to-ground &lt;span class="se"&gt;\&lt;/span&gt;
    mission-data &lt;span class="se"&gt;\&lt;/span&gt;
    https://&amp;lt;GROUND_TOKEN&amp;gt;@&amp;lt;ground-address&amp;gt;/drone-data &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--when&lt;/span&gt; &lt;span class="s1"&gt;'{"&amp;amp;anomaly": {"$eq": "true"}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a replication task that copies only records where &lt;code&gt;anomaly=true&lt;/code&gt; from the drone's &lt;code&gt;mission-data&lt;/code&gt; bucket to the ground station.&lt;/p&gt;

&lt;h3&gt;
  
  
  Replicating with context (before and after)&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#replicating-with-context-before-and-after" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In many cases, you don't just want the anomaly record itself — you also want to see what happened &lt;strong&gt;before&lt;/strong&gt; it. ReductStore supports this with the &lt;code&gt;#ctx_before&lt;/code&gt; and &lt;code&gt;#ctx_after&lt;/code&gt; directives. For example, to replicate each anomaly record plus 30 seconds of data before it and 10 seconds after:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;anomaly"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$eq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"#ctx_before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"30s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"#ctx_after"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10s"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is powerful for drone operations: imagine the drone's onboard model detects an unexpected object. ReductStore will replicate that record &lt;strong&gt;and&lt;/strong&gt; the 30 seconds of camera frames leading up to the detection, so the ground team can review what happened.&lt;/p&gt;
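&lt;p&gt;In terms of the microsecond timestamps used by the SDK examples above, the two directives describe a time window around each matching record. A plain-Python sketch of the equivalent range:&lt;/p&gt;

```python
# The time window that "#ctx_before": "30s" and "#ctx_after": "10s"
# describe around one anomaly record, in microsecond timestamps.
MICROS = 1_000_000

def context_window(anomaly_ts_us: int, before_s: int = 30, after_s: int = 10):
    return (anomaly_ts_us - before_s * MICROS, anomaly_ts_us + after_s * MICROS)

start, stop = context_window(1_708_771_200_000_000)
print(start, stop)
```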

&lt;p&gt;You can provision this directly in Docker using environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  reductstore:
    image: reduct/store:latest
    ports:
      - &lt;span class="s2"&gt;"8383:8383"&lt;/span&gt;
    environment:
      RS_API_TOKEN: &amp;lt;DRONE_TOKEN&amp;gt;
      RS_BUCKET_1_NAME: mission-data
      RS_BUCKET_1_QUOTA_TYPE: FIFO
      RS_BUCKET_1_QUOTA_SIZE: 10000000000
      RS_REPLICATION_1_NAME: mission-to-ground
      RS_REPLICATION_1_SRC_BUCKET: mission-data
      RS_REPLICATION_1_DST_BUCKET: drone-data
      RS_REPLICATION_1_DST_HOST: https://&amp;lt;ground-address&amp;gt;
      RS_REPLICATION_1_DST_TOKEN: &amp;lt;GROUND_TOKEN&amp;gt;
      RS_REPLICATION_1_WHEN: |
        &lt;span class="o"&gt;{&lt;/span&gt;
          &lt;span class="s2"&gt;"&amp;amp;anomaly"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;eq"&lt;/span&gt;: &lt;span class="s2"&gt;"true"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;,
          &lt;span class="s2"&gt;"#ctx_before"&lt;/span&gt;: &lt;span class="s2"&gt;"30s"&lt;/span&gt;,
          &lt;span class="s2"&gt;"#ctx_after"&lt;/span&gt;: &lt;span class="s2"&gt;"10s"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    volumes:
      - ./data:/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, the drone can operate fully offline. Replication runs automatically when a connection is available and waits when it's not. It's also possible to pause replication tasks if needed. And because context is included, the ground team always has enough data to understand what triggered the event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Querying for Audit Reports&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#querying-for-audit-reports" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;After a mission, you can query the ground station to check what was captured and replicated. Here is a simple example that lists all records from a specific mission:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import asyncio
from reduct import Client


async def main():
    async with Client("https://&amp;lt;ground-address&amp;gt;", api_token="&amp;lt;GROUND_TOKEN&amp;gt;") as client:
        bucket = await client.get_bucket("drone-data")

        # Query all camera records from a specific mission
        async for record in bucket.query(
            "camera",
            when={"&amp;amp;mission_id": {"$eq": "m-2026-02-24-01"}},
        ):
            print(
                f"ts={record.timestamp}, "
                f"anomaly={record.labels.get('anomaly')}, "
                f"checksum={record.labels.get('checksum')}, "
                f"size={record.size}"
            )


asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you a clear log of every record in that mission: timestamp, anomaly flag, checksum, and size. You can use this to verify that all expected data arrived on the ground side.&lt;/p&gt;

&lt;p&gt;To go further, compare the checksums on the drone with the ground side to confirm nothing was altered during transfer. You can also check the error logs of the replication task to see if any records failed to replicate.&lt;/p&gt;
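&lt;p&gt;That comparison can be a simple diff of timestamp-to-checksum maps collected from each side (built from query results like the one above). A sketch:&lt;/p&gt;

```python
# Compare per-record checksums collected from the drone and the ground station.
# Each map is {timestamp_us: checksum}, e.g. built from bucket.query() results.
def audit_diff(drone: dict, ground: dict):
    missing = sorted(ts for ts in drone if ts not in ground)
    altered = sorted(ts for ts in ground if ts in drone and ground[ts] != drone[ts])
    return missing, altered

drone = {1: "aaa", 2: "bbb", 3: "ccc"}
ground = {1: "aaa", 2: "xxx"}
print(audit_diff(drone, ground))
```

&lt;p&gt;An empty result on both lists means every record arrived and none were altered in transfer.&lt;/p&gt;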

&lt;h2&gt;
  
  
  Why This Setup Works Well for Drones&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#why-this-setup-works-well-for-drones" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Drones have specific constraints that general-purpose databases don't handle well. Here is what makes this setup practical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full offline operation.&lt;/strong&gt; Drones store everything locally and don't need a network connection during the mission. Data is safe on disk until sync happens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic sync when connected.&lt;/strong&gt; When the drone lands or connects to a trusted network, replication picks up where it left off. No manual file transfers, no rsync scripts, no USB sticks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart replication with context.&lt;/strong&gt; You don't have to sync everything. The replication task filters by labels and can include the records around each event using &lt;code&gt;#ctx_before&lt;/code&gt; and &lt;code&gt;#ctx_after&lt;/code&gt;. The ground team gets exactly what it needs to understand what happened.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disk never fills up unexpectedly.&lt;/strong&gt; FIFO retention removes the oldest records only when the bucket reaches its size quota. The drone always keeps as much history as possible without running out of space mid-mission.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy auditing.&lt;/strong&gt; Every record has a timestamp, labels, and a checksum. After a mission, you can query the ground station and verify exactly what was captured and what was synced.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store any file type.&lt;/strong&gt; Camera frames, telemetry CSV, logs, MCAP files, model outputs. Everything goes into the same system with the same interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#next-steps" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If you want to go deeper, check out these articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dev.to/reductstore/distributed-storage-in-mobile-robotics-1oe0"&gt;Distributed Storage in Mobile Robotics&lt;/a&gt;&lt;/strong&gt; for a similar setup with mobile robots and S3 cloud backend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dev.to/reductstore/how-to-store-and-manage-robotic-data-3ojp"&gt;How to Store and Manage Robotics Data&lt;/a&gt;&lt;/strong&gt; for a broader look at ReductStore features for robotics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides/data-replication" rel="noopener noreferrer"&gt;Data Replication Guide&lt;/a&gt;&lt;/strong&gt; for the full documentation on replication tasks, filters, and modes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.reduct.store/docs/conditional-query" rel="noopener noreferrer"&gt;Conditional Query Reference&lt;/a&gt;&lt;/strong&gt; for all available conditional query operators you can use in replication filters and queries&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I hope you found this article helpful! If you have any questions or feedback, don't hesitate to reach out on our &lt;a href="https://community.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>aerospace</category>
      <category>robotics</category>
      <category>database</category>
    </item>
    <item>
      <title>Comparing Data Management Tools for Robotics</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Thu, 04 Dec 2025 09:26:57 +0000</pubDate>
      <link>https://forem.com/reductstore/comparing-data-management-tools-for-robotics-5a61</link>
      <guid>https://forem.com/reductstore/comparing-data-management-tools-for-robotics-5a61</guid>
      <description>&lt;p&gt;Modern robots collect a lot of data from sensors, cameras, logs, and system outputs. Managing this data well is important for debugging, performance tracking, and training machine learning models.&lt;/p&gt;

&lt;p&gt;Over the past few years, we've been building a storage system from scratch. As part of that work, we spoke with many robotics teams across different industries to understand their challenges with data management.&lt;/p&gt;

&lt;p&gt;Here's what we heard often:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only a subset of what robots generate is actually useful&lt;/li&gt;
&lt;li&gt;Network connections are not always stable or fast&lt;/li&gt;
&lt;li&gt;On-device storage is limited (hard drive swaps are not practical)&lt;/li&gt;
&lt;li&gt;Teams rely on manual workflows with scripts and raw files&lt;/li&gt;
&lt;li&gt;It's hard to find and extract the right data later&lt;/li&gt;
&lt;li&gt;ROS bag files get large quickly and are difficult to manage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we compare four tools built to handle robotics data: &lt;strong&gt;ReductStore&lt;/strong&gt;, &lt;strong&gt;Foxglove&lt;/strong&gt;, &lt;strong&gt;Rerun&lt;/strong&gt;, and &lt;strong&gt;Heex&lt;/strong&gt;. We look at how they work, what they're good at, and which use cases they support.&lt;/p&gt;

&lt;p&gt;If you're working with robots and need to organize, stream, or store data more effectively, this overview should help.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Criteria for Comparison
&lt;/h2&gt;

&lt;p&gt;When picking a data tool for robotics, focus on these areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Types&lt;/strong&gt;
Robotics is a large field with many sensor types. The tool should support the data you work with, such as:

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Telemetry:&lt;/em&gt; Lightweight (GPS, IMU, joints), ideal for monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Downsampled Data:&lt;/em&gt; Lower-rate images or lidar for incident review without high storage cost.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Full-Resolution:&lt;/em&gt; Raw sensor outputs for deep debugging or training. This is storage-intensive but essential for some applications.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Integration&lt;/strong&gt;
The tool should work with what you already use, like ROS, Grafana, MQTT, cloud platforms (S3, Azure, Google Cloud), and your development environment to avoid extra glue code and simplify workflows.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Performance and Scalability&lt;/strong&gt;
Data must move quickly (both locally and to the cloud). Large files or slow queries can block robots or delay analysis.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Ease of Use and APIs&lt;/strong&gt;
A simple UI and solid API support make it easier to automate, scale, and adapt the tool to different use cases.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tool Overviews
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ReductStore
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03tsk44ak9or6allpkk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03tsk44ak9or6allpkk1.png" alt="ReductStore Dashboard" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReductStore&lt;/strong&gt; is a storage and streaming system designed for robotics data. It works both on the robot and in central storage (on-premise/self-hosted or in the cloud) with the same interface and SDKs (in Python, C++, Go, JavaScript/TypeScript, or Rust). That means your code stays the same whether you're reading local or remote data (or building a browser-based dashboard).&lt;/p&gt;

&lt;p&gt;To move data to the cloud, ReductStore uses &lt;strong&gt;conditional replication&lt;/strong&gt;. You can define rules to upload only certain records: by label, rule, or event. For example, replicate all incident data, or just 1 out of 10 records for routine monitoring.&lt;/p&gt;
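&lt;p&gt;For intuition, a "replicate incidents, downsample the rest" rule can be sketched as a predicate over each record. This is a plain-Python illustration; the &lt;code&gt;incident&lt;/code&gt; label and the 1-in-10 ratio are example choices, not part of ReductStore's API:&lt;/p&gt;

```python
def should_replicate(index: int, labels: dict, every_n: int = 10) -> bool:
    """Decide whether a record gets replicated to the cloud.

    Records labeled as incidents always pass; routine records are
    downsampled to 1 out of every_n.
    """
    if labels.get("incident") == "true":
        return True
    return index % every_n == 0

# Routine stream: only records 0 and 10 out of 20 are replicated.
routine = [should_replicate(i, {}) for i in range(20)]

# An incident record is replicated regardless of its position.
incident = should_replicate(3, {"incident": "true"})
```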

&lt;p&gt;ReductStore handles storage limits on edge devices with &lt;strong&gt;FIFO retention&lt;/strong&gt;. Old data is deleted only when the device is full. Each bucket can have different rules, so you can keep more images and less telemetry, for example.&lt;/p&gt;
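&lt;p&gt;The eviction behavior can be illustrated with a small simulation (a sketch of FIFO retention, not ReductStore's actual storage engine; the quota and record sizes are arbitrary):&lt;/p&gt;

```python
from collections import deque

QUOTA = 100  # bucket quota in arbitrary size units

def write_record(bucket: deque, size: int, quota: int = QUOTA) -> None:
    """Append a record, then evict the oldest records while over quota."""
    bucket.append(size)
    while sum(bucket) > quota:
        bucket.popleft()  # FIFO: the oldest record goes first

bucket = deque()
for _ in range(12):
    write_record(bucket, 10)  # 12 * 10 = 120 units exceeds the quota

# Only the 10 newest records (100 units) remain after eviction.
```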

&lt;p&gt;With an &lt;strong&gt;S3 backend&lt;/strong&gt;, ReductStore batches small records together before uploading. This cuts down the number of requests and lowers cloud storage costs. For observability, you can connect &lt;strong&gt;Grafana&lt;/strong&gt; to ReductStore to create dashboards with system metrics and sensor data. For MCAP files, ReductStore supports shareable query links that open directly in &lt;strong&gt;Foxglove v1/v2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It also lets you &lt;strong&gt;filter or merge records server-side&lt;/strong&gt;. For example, you can pull all temperature readings above a threshold over a time range without downloading full datasets.&lt;/p&gt;
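&lt;p&gt;Conceptually, the server evaluates a condition against each record's labels and returns only the matches, so the filter runs where the data lives. A rough stand-in in plain Python (the &lt;code&gt;temperature&lt;/code&gt; label and the threshold are made up for the example, and this is not the SDK's query syntax):&lt;/p&gt;

```python
records = [
    {"ts": 1, "labels": {"temperature": "21.5"}},
    {"ts": 2, "labels": {"temperature": "26.0"}},
    {"ts": 3, "labels": {"temperature": "30.2"}},
]

def above_threshold(record: dict, label: str, threshold: float) -> bool:
    """Mimic a server-side label filter: keep only matching records."""
    try:
        return float(record["labels"][label]) > threshold
    except (KeyError, ValueError):
        return False  # records without the label never match

# Only the timestamps of records above 25.0 come back over the wire.
hot = [r["ts"] for r in records if above_threshold(r, "temperature", 25.0)]
# hot == [2, 3]
```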

&lt;p&gt;Want more technical detail? Check out &lt;a href="https://www.reduct.store/blog/database-for-robotics" rel="noopener noreferrer"&gt;&lt;strong&gt;The Missing Database for Robotics Is Out&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Foxglove and MCAP
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkvdrb2pukv2h4k9jwpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkvdrb2pukv2h4k9jwpu.png" alt="Foxglove Dashboard" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; is a browser-based visualization and observability tool for robotics. It supports &lt;strong&gt;ROS 1, ROS 2&lt;/strong&gt;, and &lt;strong&gt;MCAP logs&lt;/strong&gt;, and handles data types like telemetry, camera feeds, lidar, and depth maps.&lt;/p&gt;

&lt;p&gt;It uses &lt;strong&gt;MCAP&lt;/strong&gt;, an open-source log format built for robotics, to store high-resolution data efficiently. You can explore MCAP files interactively in &lt;strong&gt;Foxglove Studio&lt;/strong&gt; or stream them programmatically.&lt;/p&gt;

&lt;p&gt;Foxglove provides an &lt;strong&gt;agent&lt;/strong&gt; that detects new MCAP files on the robot and uploads them to the cloud automatically. This requires robots to record short rosbag segments (typically a few minutes each) which are closed and rotated continuously.&lt;/p&gt;

&lt;p&gt;It integrates natively with &lt;strong&gt;ROS topics, services, and actions&lt;/strong&gt;, and offers &lt;strong&gt;WebSocket and REST APIs&lt;/strong&gt;. It also connects to major cloud providers like &lt;strong&gt;AWS, Azure,&lt;/strong&gt; and &lt;strong&gt;Google Cloud&lt;/strong&gt; for scalable storage.&lt;/p&gt;

&lt;p&gt;The interface is built for time-series and sensor data, with interactive 2D/3D views, plots, and drag-and-drop panels for quick setup and review.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rerun
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2whlpk1u114zfrutvs3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2whlpk1u114zfrutvs3f.png" alt="Rerun Dashboard" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is an open-source visualization solution for time-series and multimodal data. It supports data types like images, point clouds, lidar, depth maps, tensors, and other sensor streams.&lt;/p&gt;

&lt;p&gt;Its main strength is combining flexible logging with a fast, built-in 3D viewer designed for robotics and extended reality (XR) applications. For large datasets, Rerun provides a &lt;strong&gt;column-oriented API&lt;/strong&gt; to speed up ingestion and reduce memory usage. It also uses efficient internal structures to minimize allocations and optimize performance on edge devices.&lt;/p&gt;

&lt;p&gt;Rerun doesn't offer native ROS integration yet, but it can be used in ROS projects by adding custom logging to nodes.&lt;/p&gt;

&lt;p&gt;You can embed Rerun in &lt;strong&gt;Jupyter notebooks&lt;/strong&gt; or web pages, and use loggers for &lt;strong&gt;Python, Rust, and C++&lt;/strong&gt; to stream data into the viewer.&lt;/p&gt;

&lt;p&gt;The UI is built for &lt;strong&gt;real-time 3D exploration&lt;/strong&gt;, with overlays and live tracking that make it easy to inspect different data types in the same visual space.&lt;/p&gt;

&lt;h3&gt;
  
  
  Heex
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc01tbs5lqwlhxkl70zz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc01tbs5lqwlhxkl70zz.png" alt="Heex Dashboard" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heex&lt;/strong&gt; is a data capture and review platform for autonomous systems that focuses on collecting only key moments, such as errors or specific events, instead of logging everything. This reduces bandwidth and storage needs while keeping important context.&lt;/p&gt;

&lt;p&gt;Robots using Heex record data continuously in short ROSbag segments. A small agent on the robot watches for triggers and uploads only selected segments to the cloud based on rules.&lt;/p&gt;

&lt;p&gt;A core feature is &lt;strong&gt;RDA (Resource and Data Automation)&lt;/strong&gt; for ROS 2, which automates what to record and when. Rules can be changed remotely without restarting the robot.&lt;/p&gt;

&lt;p&gt;Data is stored in &lt;strong&gt;ROSbag&lt;/strong&gt; format and can be reviewed directly in the &lt;strong&gt;Heex dashboard&lt;/strong&gt;, which includes a built-in open-source version of &lt;strong&gt;Foxglove&lt;/strong&gt;. This setup makes it easy to manage data across fleets and locations.&lt;/p&gt;

&lt;p&gt;Heex supports both &lt;strong&gt;ROS 1 and ROS 2&lt;/strong&gt;, and integrates with other systems through &lt;strong&gt;SDKs, APIs, and a CLI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The interface includes customizable dashboards to monitor sensor data, errors, and system status. Timelines and streams are easy to navigate for quick analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparative Analysis Table
&lt;/h2&gt;

&lt;p&gt;To help visualize the differences between the tools, here is a comparison table summarizing their main characteristics:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Tool&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Core Focus&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Data Types&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Storage Strategy&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Visualization&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;ROS Integration&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Unique Features&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Time-series storage and streaming for robotics&lt;/td&gt;
&lt;td&gt;Telemetry, camera images, lidar, logs&lt;/td&gt;
&lt;td&gt;Local + cloud with same API (supports S3, FIFO retention, conditional replication)&lt;/td&gt;
&lt;td&gt;Grafana, Foxglove (via MCAP links)&lt;/td&gt;
&lt;td&gt;Integrated with ROS via extensions&lt;/td&gt;
&lt;td&gt;Filter/merge on server, batch uploads, topic-level control, efficient on edge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Foxglove&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Visualization and observability for robotics logs&lt;/td&gt;
&lt;td&gt;MCAP logs (telemetry, lidar, camera, depth)&lt;/td&gt;
&lt;td&gt;ROSbag short segments, auto-upload with agent&lt;/td&gt;
&lt;td&gt;Foxglove Studio (2D/3D, timeline, plots)&lt;/td&gt;
&lt;td&gt;Native ROS 1 &amp;amp; 2&lt;/td&gt;
&lt;td&gt;Drag-and-drop views, real-time stream inspection, cloud integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Rerun&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Real-time 3D visualization of multimodal time-series data&lt;/td&gt;
&lt;td&gt;Images, lidar, point clouds, tensors, metrics&lt;/td&gt;
&lt;td&gt;User-defined logging; logs streamed into viewer or embedded in notebooks&lt;/td&gt;
&lt;td&gt;Built-in viewer (3D overlays, tracking)&lt;/td&gt;
&lt;td&gt;Not native (custom logging)&lt;/td&gt;
&lt;td&gt;Column-oriented API, fast ingestion, selective logging, notebook/web integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Heex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Event-driven data capture for fleets of robots&lt;/td&gt;
&lt;td&gt;ROSbag (telemetry, images, lidar, metrics)&lt;/td&gt;
&lt;td&gt;Continuous recording, uploads filtered by event-based rules via onboard agent&lt;/td&gt;
&lt;td&gt;Built-in Foxglove in dashboard&lt;/td&gt;
&lt;td&gt;Native ROS 1 &amp;amp; 2&lt;/td&gt;
&lt;td&gt;RDA (automated capture rules), remote config, scalable fleet-wide dashboards&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Each tool addresses a different part of the robotics data workflow. &lt;strong&gt;ReductStore&lt;/strong&gt; is ideal for distributed storage across many robots, with selective replication to the cloud and flexible integration with tools like Grafana and Foxglove. &lt;strong&gt;Foxglove&lt;/strong&gt; excels at visualizing MCAP logs and ROS topics. &lt;strong&gt;Rerun&lt;/strong&gt; offers flexible, real-time 3D inspection for custom applications. &lt;strong&gt;Heex&lt;/strong&gt; focuses on capturing just the important moments for efficient fleet analysis.&lt;/p&gt;

&lt;p&gt;Choosing the right tool depends on what kind of data you collect, how you process it, and where you need it to go. In many cases, combining tools can give you the best of all worlds.&lt;/p&gt;




&lt;p&gt;Thanks for reading. I hope this article helps you choose the right tools for your robotics data.&lt;br&gt;
If you have questions or comments, feel free to visit the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>ros</category>
    </item>
    <item>
      <title>Distributed Storage in Mobile Robotics</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Mon, 17 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/distributed-storage-in-mobile-robotics-1oe0</link>
      <guid>https://forem.com/reductstore/distributed-storage-in-mobile-robotics-1oe0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1numadno34nlnfk2m0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1numadno34nlnfk2m0g.png" alt="Distributed Storage in Mobile Robotics" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mobile robots produce a &lt;strong&gt;lot&lt;/strong&gt; of data (camera images, IMU readings, logs, etc). Storing this data reliably on each robot and syncing it to the cloud can be hard. &lt;strong&gt;ReductStore&lt;/strong&gt; makes this easier: it's a lightweight, time-series object store built for robotics and industrial IoT. It stores binary blobs (images, logs, CSV sensor data, MCAP, JSON) with timestamps and labels so you can quickly find and query them later.&lt;/p&gt;

&lt;p&gt;This introductory guide explains a simple setup where each robot stores data locally and automatically syncs it to a cloud ReductStore instance backed by Amazon S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge-to-Cloud Architecture&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#edge-to-cloud-architecture" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The architecture has three main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Each robot runs a small ReductStore server&lt;/strong&gt; to save images and IMU data locally on disk (this lets the robot operate offline).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A cloud ReductStore instance runs on a server (e.g., EC2)&lt;/strong&gt; and uses S3 for long-term storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReductStore replication tasks&lt;/strong&gt; copy data from robot to cloud based on labels, events, or rules (e.g., 1 record every minute).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each robot pushes its data to the cloud whenever it is connected to the network. This approach provides the robots with offline capability, allows you to decide which data to replicate, and easily scales to support many robots.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Replication Works&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#how-replication-works" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore uses an &lt;strong&gt;append-only&lt;/strong&gt; replication model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The robot stores new data locally.&lt;/li&gt;
&lt;li&gt;ReductStore automatically detects new records.&lt;/li&gt;
&lt;li&gt;It sends them to the cloud in batches (or streams large files).&lt;/li&gt;
&lt;li&gt;If the network disconnects, replication continues when the robot reconnects.&lt;/li&gt;
&lt;/ul&gt;
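&lt;p&gt;The steps above amount to a cursor that only advances past records the remote side accepted, so an interrupted sync simply resumes where it stopped. A toy simulation (not the actual replication engine; the fake &lt;code&gt;upload&lt;/code&gt; callback stands in for the network):&lt;/p&gt;

```python
def sync(log: list, cursor: int, upload) -> int:
    """Push unsent records; return the new cursor position.

    The cursor moves past a record only after the upload succeeds,
    so a dropped connection resumes from the same spot later.
    """
    while cursor != len(log):
        if not upload(log[cursor]):
            break  # network down: retry on the next connection
        cursor += 1
    return cursor

log = ["rec1", "rec2", "rec3", "rec4"]
attempts = iter([True, True, False, True, True])  # link drops mid-sync
cursor = sync(log, 0, lambda rec: next(attempts))       # sends rec1, rec2
cursor = sync(log, cursor, lambda rec: next(attempts))  # resumes at rec3
```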

&lt;p&gt;You can replicate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;everything&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;or only specific sensors&lt;/li&gt;
&lt;li&gt;or only records with certain labels&lt;/li&gt;
&lt;li&gt;or based on rules (e.g., 1 record every S seconds or every N records)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This can be configured per robot using environment variables (provisioning), the web console, or the CLI (as shown in this guide).&lt;/p&gt;
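&lt;p&gt;For the environment-variable route, replication tasks can be provisioned alongside buckets in the same &lt;code&gt;docker-compose.yml&lt;/code&gt; style. A sketch of what that could look like — the variable names and placeholders here are illustrative, so verify the exact names against the configuration documentation before relying on them:&lt;/p&gt;

```yaml
environment:
  # Illustrative provisioning pattern; check exact names in the docs
  RS_REPLICATION_1_NAME: robot1-to-cloud
  RS_REPLICATION_1_SRC_BUCKET: robot1-data
  RS_REPLICATION_1_REMOTE_BUCKET: robot1-data
  RS_REPLICATION_1_REMOTE_HOST: https://&amp;lt;cloud-address&amp;gt;
  RS_REPLICATION_1_REMOTE_TOKEN: &amp;lt;CLOUD_API_TOKEN&amp;gt;
```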

&lt;h2&gt;
  
  
  Cloud ReductStore With S3 Backend&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#cloud-reductstore-with-s3-backend" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore supports storing all records directly in S3. It keeps a local cache for fast access and batches many small blobs into larger blocks to save on S3 costs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By batching data into S3 objects, you can save &lt;strong&gt;significantly&lt;/strong&gt; on storage costs compared to storing many small files individually.&lt;/p&gt;
&lt;/blockquote&gt;
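&lt;p&gt;As a rough illustration of why batching matters, compare the PUT request counts with and without it (the figures are made up; actual savings depend on record sizes, block sizes, and S3 pricing):&lt;/p&gt;

```python
records_per_day = 1_000_000   # e.g. many small sensor blobs
records_per_block = 1_000     # records batched into one S3 object

# One PUT per record vs. one PUT per batched block
puts_unbatched = records_per_day
puts_batched = records_per_day // records_per_block

reduction = puts_unbatched / puts_batched
# 1,000,000 PUTs/day shrink to 1,000 PUTs/day: a 1000x reduction
```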

&lt;p&gt;Here is an example &lt;code&gt;docker-compose.yml&lt;/code&gt; to run a ReductStore server that uses S3 as the remote backend and provisions buckets for robots:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;reductstore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reduct/store:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reductstore&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8383:8383"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# AWS credentials and S3 bucket configuration&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_BACKEND_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_BUCKET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_S3_BUCKET_NAME&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_S3_REGION&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_ACCESS_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_AWS_ACCESS_KEY_ID&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_SECRET_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_AWS_SECRET_ACCESS_KEY&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_CACHE_PATH&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data/cache&lt;/span&gt;
      &lt;span class="c1"&gt;# Bucket provisioning&lt;/span&gt;
      &lt;span class="na"&gt;RS_BUCKET_ROBOT_1_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;robot1-data&lt;/span&gt;
      &lt;span class="na"&gt;RS_BUCKET_ROBOT_2_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;robot2-data&lt;/span&gt;
      &lt;span class="c1"&gt;# .. additional buckets as needed&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./cache:/data/cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a ReductStore server that writes to S3 automatically. There are many more configuration options available in the &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/configuration" rel="noopener noreferrer"&gt;configuration documentation&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Replication&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#setting-up-replication" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;First spin up a local ReductStore on each robot. Here with Snap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;reductstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That starts a ReductStore server on port &lt;code&gt;8383&lt;/code&gt; by default. Then you can use the &lt;strong&gt;&lt;a href="https://github.com/reductstore/reduct-cli" rel="noopener noreferrer"&gt;Reduct CLI&lt;/a&gt;&lt;/strong&gt; to set up replication from the robot to the cloud instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Point the CLI to the robot's local ReductStore&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add &lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:8383 &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;ROBOT_API_TOKEN&amp;gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Create a bucket for that robot&lt;/span&gt;
reduct-cli bucket create &lt;span class="nb"&gt;local&lt;/span&gt;/robot1-data

&lt;span class="c"&gt;# Create a replication task to the cloud&lt;/span&gt;
reduct-cli replica create &lt;span class="nb"&gt;local&lt;/span&gt;/robot1-to-cloud &lt;span class="se"&gt;\&lt;/span&gt;
    robot1-data &lt;span class="se"&gt;\&lt;/span&gt;
    https://&amp;lt;CLOUD_API_TOKEN&amp;gt;@&amp;lt;cloud-address&amp;gt;/robot1-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a replication task called &lt;code&gt;robot1-to-cloud&lt;/code&gt; that copies all data from the robot's local &lt;code&gt;robot1-data&lt;/code&gt; bucket to the cloud instance. You can customize replication further by adding filters or rules. See the &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides/data-replication" rel="noopener noreferrer"&gt;replication guide&lt;/a&gt;&lt;/strong&gt; for more details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storing Sensor Data&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#storing-sensor-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are many ways to store data. When it comes to high-frequency sensor data like IMU readings, a common approach is to store them in 1-second files. Images can be stored as binary blobs (e.g., JPEG or PNG files). Here is an example of storing IMU data as CSV files and images as binary blobs using the Python SDK (this stores 10,000 samples and one camera image for a given timestamp as an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8383&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;ROBOT_API_TOKEN&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;robot1-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Current timestamp to index the data by time in ReductStore
&lt;/span&gt;        &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1_000_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# microseconds
&lt;/span&gt;
        &lt;span class="c1"&gt;# Generate 10'000 IMU samples
&lt;/span&gt;        &lt;span class="n"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10_000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# microseconds
&lt;/span&gt;                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;8.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;10.0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Convert to CSV (store 1 seconds of data per file)
&lt;/span&gt;        &lt;span class="n"&gt;csv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts,linear_acceleration_x,linear_acceleration_y,linear_acceleration_z&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;rows&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Write the IMU batch
&lt;/span&gt;        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;entry_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;imu_logs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sensor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;imu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rows&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1000&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# MIME type
&lt;/span&gt;        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Write one camera image
&lt;/span&gt;        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera_image.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;entry_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;images&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sensor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image/png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;If you are considering storing all IMU data as individual records in a time series database (TSDB) like Timescale or InfluxDB, keep in mind that high-frequency sensors (e.g., 1000 Hz) can lead to performance and cost issues. Batching samples into files (e.g., one second of data per CSV file) is a more efficient storage and querying method.&lt;/p&gt;
&lt;/blockquote&gt;
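&lt;p&gt;A minimal standard-library sketch of that batching idea (the &lt;code&gt;batch_to_csv&lt;/code&gt; helper and the sample values are ours, not part of the ReductStore SDK; the column names mirror the IMU fields used in the snippets above):&lt;/p&gt;

```python
import csv
import io


def batch_to_csv(samples):
    """Serialize one second of IMU samples into a single CSV payload."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf,
        fieldnames=[
            "ts",
            "linear_acceleration_x",
            "linear_acceleration_y",
            "linear_acceleration_z",
        ],
    )
    writer.writeheader()
    writer.writerows(samples)
    return buf.getvalue().encode()


# 1000 Hz for one second -> one stored record instead of 1000 inserts
samples = [
    {
        "ts": 1_700_000_000_000_000_000 + i * 1_000_000,  # ns, 1 ms apart
        "linear_acceleration_x": 0.1,
        "linear_acceleration_y": 0.3,
        "linear_acceleration_z": -9.8,
    }
    for i in range(1000)
]
payload = batch_to_csv(samples)
```

&lt;p&gt;The resulting &lt;code&gt;payload&lt;/code&gt; is the kind of one-second CSV batch that a single &lt;code&gt;bucket.write(...)&lt;/code&gt; call would store.&lt;/p&gt;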

&lt;h2&gt;
  
  
  Querying Sensor Data Using ReductSelect&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#querying-sensor-data-using-reductselect" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If your IMU data is stored as CSV, the &lt;strong&gt;ReductSelect extension&lt;/strong&gt; lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extract only certain columns&lt;/li&gt;
&lt;li&gt;filter rows based on conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: filter CSV rows where &lt;code&gt;acc_x &amp;gt; 1.9&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "#ext": {
        "select": {
            "csv": {"has_headers": True},
            "columns": [
                {"name": "ts", "as_label": "ts_ns"},
                {"name": "linear_acceleration_x", "as_label": "acc_x"},
                {"name": "linear_acceleration_y"},
                {"name": "linear_acceleration_z"},
            ],
        },
        "when": {"@acc_x": {"$gt": 1.9}},
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Python example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="n"&gt;when&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;# the JSON condition from above
&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://&amp;lt;cloud-address&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;TOKEN&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;robot1-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;imu_logs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returns only the rows where &lt;code&gt;linear_acceleration_x &amp;gt; 1.9&lt;/code&gt;, along with the timestamp.&lt;/p&gt;
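&lt;p&gt;Each matching record comes back as CSV containing only the selected columns, so it can be parsed client-side with the standard library. A sketch (&lt;code&gt;parse_record&lt;/code&gt; and the sample payload are illustrative, assuming the response keeps its header row):&lt;/p&gt;

```python
import csv
import io


def parse_record(payload: bytes):
    """Parse a CSV payload returned by a ReductSelect query into dicts."""
    return list(csv.DictReader(io.StringIO(payload.decode())))


# Illustrative payload shaped like the selected columns above
payload = (
    b"ts,linear_acceleration_x,linear_acceleration_y,linear_acceleration_z\n"
    b"1633024800000,2.4,0.3,-9.8\n"
)
rows = parse_record(payload)
```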

&lt;h2&gt;
  
  
  Why This Setup Works Well for Robotics&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#why-this-setup-works-well-for-robotics" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are several advantages to using a specialized storage solution like ReductStore for mobile robotics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Robots can store data locally&lt;/strong&gt; and operate offline without network connectivity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic replication when connected&lt;/strong&gt; to avoid manual uploads and simplify data management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selective replication&lt;/strong&gt; lets you control what data is sent to the cloud (i.e. decide on your reduction strategy) to save bandwidth and storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Labels and timestamps&lt;/strong&gt; make it easy to organize and query sensor data later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store files of any type&lt;/strong&gt; (images, CSV, logs, MCAP) in a single system without needing separate storage solutions for each data type.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#next-steps" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore also integrates into observability stacks such as the Canonical Observability Stack (COS) for robotics. You can visualize sensor data, logs, and metrics in Grafana dashboards alongside your other robot telemetry. You can find more details in our blog post &lt;strong&gt;&lt;a href="https://dev.to/reductstore/the-missing-database-for-robotics-is-out-4p4i"&gt;The Missing Database for Robotics Is Out&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;I hope you found this article helpful! If you have any questions or feedback, don't hesitate to reach out on our &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>database</category>
      <category>ros</category>
      <category>robotics</category>
    </item>
    <item>
      <title>The Missing Database for Robotics Is Out</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Wed, 22 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/the-missing-database-for-robotics-is-out-4p4i</link>
      <guid>https://forem.com/reductstore/the-missing-database-for-robotics-is-out-4p4i</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5p4xxqhkx9pq86jm95d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5p4xxqhkx9pq86jm95d.png" alt="Img example" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Robotics teams today wrestle with data that grows faster than their infrastructure. Every robot generates streams of images, sensor readings, logs, and events in different formats. These data piles are fragmented, expensive to move, and slow to analyze. Teams often rely on generic cloud tools that are not built for robotics: they charge per gigabyte when robotics data volumes call for pricing per terabyte, hide the raw data behind proprietary APIs, and make it hard for robots (and developers) to access or use their own data.&lt;/p&gt;

&lt;p&gt;ReductStore introduces a new category: a database purpose built for robotics data pipelines. It is open, efficient, and developer friendly. It lets teams store, query, and manage any time series of unstructured data directly from robots to the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes It a New Category&lt;a href="https://www.reduct.store/blog/database-for-robotics#what-makes-it-a-new-category" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore treats robotics data with the respect it deserves. It captures everything in its raw form and stores it with a time index and labels for flexible querying and management. It ingests and streams any type of data (images, sensor frames, logs, MCAP files, CSVs, JSON, etc.) without forcing developers to convert or reformat it.&lt;/p&gt;

&lt;p&gt;It works on robots and in the cloud using the same interface and SDKs (Python, C++, Rust, JavaScript, Go). This means developers can build data pipelines that run the same way on robots or in the cloud without needing to change code or learn new tools.&lt;/p&gt;

&lt;p&gt;Developers can run ReductStore on an edge device for local data capture and replicate to a cloud instance (with S3 backend) for cloud analytics or archiving.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It is the first and only database designed specifically for unstructured, time-series robotics data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Data Handling and Querying&lt;a href="https://www.reduct.store/blog/database-for-robotics#data-handling-and-querying" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Developers can work directly with data using simple queries and SDKs. The focus is speed and flexibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. MCAP topic filtering&lt;a href="https://www.reduct.store/blog/database-for-robotics#1-mcap-topic-filtering" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can filter topics directly from multiple MCAP files stored in ReductStore without needing to download and reprocess everything locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="c1"&gt;# Extract only the IMU topic from MCAP files
&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ros&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extract&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;topic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/imu/data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://test.reduct.store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-robotics-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mcap-entry&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;header&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1_000_000_000&lt;/span&gt;
                &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;header&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;nanosec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to extract only the relevant topics from multiple bags. In this example, we extract only the IMU topic as a stream of JSON records, which would look like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ts&lt;/th&gt;
&lt;th&gt;linear_acceleration_x&lt;/th&gt;
&lt;th&gt;linear_acceleration_y&lt;/th&gt;
&lt;th&gt;linear_acceleration_z&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1633024800000&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;0.3&lt;/td&gt;
&lt;td&gt;-9.8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1633024801000&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;-9.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
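&lt;p&gt;The &lt;code&gt;sec&lt;/code&gt;/&lt;code&gt;nanosec&lt;/code&gt; arithmetic in the snippet above is easy to get wrong, so it is worth factoring into a small helper that keeps the unit explicit (the helper name is ours):&lt;/p&gt;

```python
def stamp_to_ns(sec: int, nanosec: int) -> int:
    """Convert a ROS builtin_interfaces/Time stamp to integer nanoseconds."""
    return sec * 1_000_000_000 + nanosec


# A stamp of sec=1633024800, nanosec=500 becomes 1633024800000000500 ns
ts = stamp_to_ns(1633024800, 500)
```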

&lt;h3&gt;
  
  
  2. CSV/JSON field extraction and filtering&lt;a href="https://www.reduct.store/blog/database-for-robotics#2-csvjson-field-extraction-and-filtering" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can extract specific JSON fields or CSV columns when querying data. This lets you select only the information you need, for example, filtering and visualizing certain fields from streams of JSON or CSV sensor readings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="c1"&gt;# Select specific CSV columns and filter rows
&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;select&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;has_headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="c1"&gt;# Use "json": {}, for JSON data
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;columns&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;as_label&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;acc_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;when&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$gt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$abs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@acc_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://test.reduct.store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-robotics-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Loop over filtered CSV entries
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_sensor_readings&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BytesIO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tabular result will only include the selected columns and rows that match the filter &lt;code&gt;abs(linear_acceleration_x) &amp;gt; 10&lt;/code&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ts&lt;/th&gt;
&lt;th&gt;linear_acceleration_x&lt;/th&gt;
&lt;th&gt;linear_acceleration_y&lt;/th&gt;
&lt;th&gt;linear_acceleration_z&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1633024800000&lt;/td&gt;
&lt;td&gt;12.5&lt;/td&gt;
&lt;td&gt;0.3&lt;/td&gt;
&lt;td&gt;-9.8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1633024801000&lt;/td&gt;
&lt;td&gt;-15.2&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;-9.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
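&lt;p&gt;For intuition, the server-side condition is the same test you would write in plain Python. A client-side sketch with made-up readings:&lt;/p&gt;

```python
def passes(acc_x: float) -> bool:
    """Client-side equivalent of {"$gt": [{"$abs": ["@acc_x"]}, 10]}."""
    return abs(acc_x) > 10


readings = [12.5, -15.2, 0.3, -9.8]
kept = [v for v in readings if passes(v)]  # [12.5, -15.2]
```

&lt;p&gt;The difference is that ReductStore evaluates the condition server-side, so records that fail it are never transferred.&lt;/p&gt;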

&lt;h3&gt;
  
  
  3. Query any type of data&lt;a href="https://www.reduct.store/blog/database-for-robotics#3-query-any-type-of-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore automatically batches small records and streams large ones for efficient storage and access. You can query any type of data, from lightweight telemetry to high-resolution images or point clouds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PIL&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="c1"&gt;# Every 5 seconds, limit to 5 records
&lt;/span&gt;&lt;span class="n"&gt;when&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$each_t&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$limit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://test.reduct.store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-robotics-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera_frames&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BytesIO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example above retrieves camera frames at 5-second intervals. You can then process or visualize these images as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs41awykv9yru7r8ybsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs41awykv9yru7r8ybsw.png" alt="Query Images Example" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Browse petabytes of data&lt;a href="https://www.reduct.store/blog/database-for-robotics#4-browse-petabytes-of-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore is designed to handle massive volumes of data. Its indexing and storage architecture allows you to efficiently browse data at scale without downloading everything locally.&lt;/p&gt;

&lt;p&gt;For example, you can quickly navigate records and preview your data directly in the ReductStore &lt;a href="https://www.reduct.store/docs/glossary#web-console" rel="noopener noreferrer"&gt;&lt;strong&gt;web console&lt;/strong&gt;&lt;/a&gt;, even when working with petabytes of robotics data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gbh4ko0yqdal2udvoxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gbh4ko0yqdal2udvoxp.png" alt="Browse Large Datasets" width="800" height="650"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;You can build custom applications on top of ReductStore using its SDKs for Python, C++, Rust, JavaScript, and Go. This makes it easy to build data pipelines, create dashboards that work in the browser, or integrate with existing tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Integration and Cost Savings&lt;a href="https://www.reduct.store/blog/database-for-robotics#cloud-integration-and-cost-savings" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore connects robots and the cloud in a simple and flexible way. It works with S3-compatible storage and includes a robust replication system to transfer data from robots to the cloud (even when the network is unstable or intermittent), making it perfect for field robots that often go offline.&lt;/p&gt;

&lt;p&gt;Replication tasks can be configured to replicate only specific data based on labels or other criteria (for example, only replicate data when the confidence score is below a threshold, or &lt;strong&gt;replicate everything from a 10-minute window around a specific event&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;In the cloud, ReductStore batches multiple records into single data blocks, minimizing both the number of stored blobs and the number of API calls to S3. This design reduces storage and retrieval costs by leveraging S3's pricing model.&lt;/p&gt;
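
&lt;p&gt;A quick back-of-envelope calculation shows why this matters. All numbers below are illustrative, and the per-request price is a placeholder; check the current S3 pricing for your region:&lt;/p&gt;

```python
# Back-of-envelope: how batching small records into larger blocks
# cuts the number of S3 PUT requests. All numbers are illustrative.
records_per_day = 1_000_000      # small telemetry records per day
records_per_block = 1_000        # records batched into one data block

puts_without_batching = records_per_day
puts_with_batching = records_per_day // records_per_block

put_price_per_1000 = 0.005       # USD, placeholder S3 PUT price
cost_without = puts_without_batching / 1000 * put_price_per_1000
cost_with = puts_with_batching / 1000 * put_price_per_1000

print(f"PUT requests per day: {puts_without_batching} vs {puts_with_batching}")
print(f"Daily PUT cost: {cost_without:.2f} vs {cost_with:.6f} USD")
```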

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdj8tgvb7zoiln9zoimgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdj8tgvb7zoiln9zoimgo.png" alt="Diagram Cloud Integration" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This approach can deliver major savings when working with large volumes of robotics data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Observability Stack Integration&lt;a href="https://www.reduct.store/blog/database-for-robotics#observability-stack-integration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore works with the tools robotics engineers already trust.&lt;/p&gt;

&lt;h3&gt;
  
  
  Foxglove Studio&lt;a href="https://www.reduct.store/blog/database-for-robotics#foxglove-studio" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Foxglove is a powerful tool for visualizing robotics data and debugging robots, with first-class support for the MCAP format.&lt;/p&gt;

&lt;p&gt;To share data from ReductStore to Foxglove, you can use the ReductStore web console (or the SDKs) to generate a &lt;a href="https://www.reduct.store/docs/glossary#query-link" rel="noopener noreferrer"&gt;&lt;strong&gt;query link&lt;/strong&gt;&lt;/a&gt; that Foxglove can open directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr17w58ytt4cjd4069l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr17w58ytt4cjd4069l1.png" alt="ReductStore Query Link" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then paste the query link into Foxglove Studio to visualize the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F058x6bhrz4stf0fut1tj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F058x6bhrz4stf0fut1tj.png" alt="Foxglove Studio" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Grafana&lt;a href="https://www.reduct.store/blog/database-for-robotics#grafana" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Grafana is a popular open-source tool for creating dashboards and visualizing time-series data. You can connect Grafana to ReductStore using the ReductStore data source plugin, which allows you to query and visualize data stored in ReductStore.&lt;/p&gt;

&lt;p&gt;You can query data using labels, for example localization coordinates, detected objects, or confidence scores:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjoqmnbn97pjsocud3zv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjoqmnbn97pjsocud3zv.png" alt="Grafana Query Labels" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can query based on content, such as JSON files with sensor readings or other structured data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmr8ivq30fhabtjrsbtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmr8ivq30fhabtjrsbtv.png" alt="Grafana Query Content" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Canonical Observability Stack (COS)&lt;a href="https://www.reduct.store/blog/database-for-robotics#canonical-observability-stack-cos" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Canonical's COS (Canonical Observability Stack) for robotics is an end-to-end observability framework built on open-source tools such as Prometheus, Loki, Grafana, and Foxglove.&lt;/p&gt;

&lt;p&gt;The missing piece in this stack has always been a purpose-built system for storing and managing robotics data efficiently from robot to cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg09arjzl4jeks920hmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg09arjzl4jeks920hmf.png" alt="Diagram Observability Stack Integration" width="800" height="742"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ReductStore closes that gap. It provides a data storage and streaming solution optimized for both edge and cloud environments, along with an agent that captures data directly from ROS and streams it into the observability pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda5hez3zprc4mrw8vnov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda5hez3zprc4mrw8vnov.png" alt="COS with ReductStore" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts&lt;a href="https://www.reduct.store/blog/database-for-robotics#closing-thoughts" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Robotics teams no longer need to choose between control and convenience. ReductStore gives full ownership of data from robot to cloud. It removes vendor lock-in, cuts costs, and keeps everything observable and connected. It is the new foundation for robotics data infrastructure (the missing database for robotics).&lt;/p&gt;

&lt;p&gt;If you are interested in comparing ReductStore with other databases (such as MongoDB or InfluxDB), you can read our &lt;a href="https://www.reduct.store/whitepaper" rel="noopener noreferrer"&gt;&lt;strong&gt;white paper&lt;/strong&gt;&lt;/a&gt;, which goes deeper into the architecture and design choices.&lt;/p&gt;




&lt;p&gt;I hope you found this article helpful! If you have any questions or feedback, don't hesitate to reach out on our &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>ros</category>
      <category>robotics</category>
    </item>
    <item>
      <title>ReductStore v1.17.0 Released with Query Links and S3 Storage Backend Support</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/reductstore-v1170-released-with-query-links-and-s3-storage-backend-support-447j</link>
      <guid>https://forem.com/reductstore/reductstore-v1170-released-with-query-links-and-s3-storage-backend-support-447j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbm0m00zkjfcvyzs7dt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbm0m00zkjfcvyzs7dt6.png" alt="ReductStore v1.17.0 Released" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.17.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.17.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a high-performance storage and streaming solution designed for storing and managing large volumes of historical data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new in 1.17.0?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#whats-new-in-1170" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This release includes several new features and enhancements, most notably query links for simplified data access and support for S3-compatible storage backends.&lt;/p&gt;

&lt;p&gt;These new features enhance the usability and flexibility of ReductStore for various use cases in cloud and on-premises environments, and make it easier to share and access data stored in the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔗 Query Links for Data Access&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#-query-links-for-data-access" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore now supports &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/glossary#query-link" rel="noopener noreferrer"&gt;query links&lt;/a&gt;&lt;/strong&gt;, enabling users to generate temporary, public URLs for specific data records — without requiring authentication. This makes it easier to share datasets with &lt;strong&gt;external collaborators&lt;/strong&gt;, embed links into dashboards, or integrate with &lt;strong&gt;third-party systems&lt;/strong&gt; that need read-only access to specific data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l5ism0xyqfqobolt3px.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l5ism0xyqfqobolt3px.webp" alt="Generate Query Links in ReductStore Web Console" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can create query links directly from the &lt;strong&gt;Web Console&lt;/strong&gt; (or any of the SDKs):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;Data Browser&lt;/strong&gt; page and select a record you want to share.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;“Share record”&lt;/strong&gt; icon in the action panel.&lt;/li&gt;
&lt;li&gt;Configure an &lt;strong&gt;expiration time&lt;/strong&gt; to automatically revoke access after a defined period.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once generated, anyone with the link can access the selected record via a simple HTTP(S) request — no access token required. The link only has access to the specific query for which it was created, along with the creator's permissions. This provides a secure and convenient way to expose selected data for collaboration and analysis.&lt;/p&gt;
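
&lt;p&gt;Fetching a shared record is then a plain HTTP GET. A minimal sketch in Python using only the standard library (the link in the comment is a placeholder; generate a real one in the Web Console):&lt;/p&gt;

```python
import urllib.request

def fetch_record(query_link):
    """Download a shared record from a ReductStore query link.

    No token is needed: the link itself carries temporary, read-only
    access to the query it was created for.
    """
    with urllib.request.urlopen(query_link) as resp:
        return resp.read()

# Example with a placeholder link (generate a real one in the Web Console):
# blob = fetch_record("https://test.reduct.store/...")
```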

&lt;h3&gt;
  
  
  ☁️ S3-Compatible Storage Backend&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#%EF%B8%8F-s3-compatible-storage-backend" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore now supports &lt;strong&gt;S3-compatible storage backends&lt;/strong&gt;, allowing you to use &lt;strong&gt;object storage&lt;/strong&gt; instead of a local file system for your underlying data. This update brings greater flexibility and scalability for managing large datasets in the cloud.&lt;/p&gt;

&lt;p&gt;Previously, ReductStore supported only local disk storage, and users had to mount S3 buckets as local disks via FUSE drivers. With this release, ReductStore can now natively integrate with S3-compatible backends — no additional software or mounting is required.&lt;/p&gt;

&lt;p&gt;This feature is designed with performance and &lt;strong&gt;cost optimization&lt;/strong&gt; in mind. ReductStore uses a local disk cache layer to speed up read and write operations, while batching multiple records into a single data block to reduce storage and retrieval costs. This approach works especially well with cost-efficient AWS S3 storage classes such as &lt;strong&gt;S3 Standard-IA&lt;/strong&gt; or &lt;strong&gt;S3 Glacier&lt;/strong&gt;.&lt;/p&gt;
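
&lt;p&gt;Conceptually, the cache layer follows a standard read-through pattern. The sketch below illustrates the idea only, with an in-memory dictionary standing in for the on-disk cache; it is not ReductStore's actual implementation:&lt;/p&gt;

```python
class ReadThroughCache:
    """Simplified read-through cache: serve a block from local storage if
    present, otherwise fetch it from the remote backend and keep a copy.
    (Illustration only; not ReductStore's actual implementation.)"""

    def __init__(self, remote):
        self.remote = remote   # any object with a get(block_id) method
        self.local = {}        # stands in for the on-disk cache

    def get(self, block_id):
        if block_id not in self.local:
            self.local[block_id] = self.remote.get(block_id)  # remote GET
        return self.local[block_id]   # later reads skip the network


class S3Stub:
    """Stand-in for the remote S3 backend (illustration only)."""
    def get(self, block_id):
        return f"data for {block_id}".encode()


cache = ReadThroughCache(S3Stub())
first = cache.get("block-0001")   # fetched from the backend
second = cache.get("block-0001")  # served from the local cache
assert first == second
```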

&lt;p&gt;To run ReductStore with an S3-compatible backend, use the following environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 8383:8383 \
 -e RS_REMOTE_BACKEND_TYPE=s3 \
 -e RS_REMOTE_BUCKET=&amp;lt;YOUR_S3_BUCKET_NAME&amp;gt; \
 -e RS_REMOTE_REGION=&amp;lt;YOUR_S3_REGION&amp;gt; \
 -e RS_REMOTE_ACCESS_KEY=&amp;lt;YOUR_S3_ACCESS_KEY_ID&amp;gt; \
 -e RS_REMOTE_SECRET_KEY=&amp;lt;YOUR_S3_SECRET_ACCESS_KEY&amp;gt; \
 -e RS_REMOTE_CACHE_PATH=/data/cache \
 -e RS_LICENSE_PATH=&amp;lt;PATH_TO_YOUR_LICENSE_FILE&amp;gt; \
 -v ${PWD}/data:/data/cache \
 reduct/store:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read more about configuring the S3-compatible storage backend in the &lt;a href="https://www.reduct.store/docs/configuration#remote-backend-settings" rel="noopener noreferrer"&gt;&lt;strong&gt;documentation&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;


&lt;p&gt;This feature requires a commercial license. Please see the &lt;strong&gt;&lt;a href="https://www.reduct.store/pricing" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;&lt;/strong&gt; for more details.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#whats-next" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We’re continuing to develop new features to make ReductStore even more powerful and user-friendly. Here’s a preview of what’s coming in the next releases:&lt;/p&gt;

&lt;h3&gt;
  
  
  📦 Multiple Entries in a Single Request&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#-multiple-entries-in-a-single-request" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Currently, each write or query request must target a &lt;strong&gt;single entry&lt;/strong&gt;. This can be limiting when dealing with &lt;strong&gt;multiple entries&lt;/strong&gt; or dynamic lists of entries in your applications.&lt;/p&gt;

&lt;p&gt;In upcoming versions, ReductStore will support &lt;strong&gt;batch operations&lt;/strong&gt; across multiple entries within a single API request. This improvement will simplify integrations and reduce overhead for large-scale data ingestion and querying workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔒 Read-Only Mode for ReductStore&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#-read-only-mode-for-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Like most databases, ReductStore currently requires &lt;strong&gt;exclusive access&lt;/strong&gt; to its data directory while running. As a result, running multiple instances on the same dataset—for load balancing or high availability—is not yet possible.&lt;/p&gt;

&lt;p&gt;To address this, we’re introducing a &lt;strong&gt;read-only mode&lt;/strong&gt; that will allow one writer instance and multiple reader instances to access the same dataset concurrently. This approach will enable &lt;strong&gt;scalable read operations&lt;/strong&gt; and &lt;strong&gt;improved availability&lt;/strong&gt; without adding the complexity of clustering or replication mechanisms.&lt;/p&gt;




&lt;p&gt;I hope you find those new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
    </item>
    <item>
      <title>Building a Resilient ReductStore Deployment with NGINX</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Sat, 13 Sep 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/building-a-resilient-reductstore-deployment-with-nginx-59jb</link>
      <guid>https://forem.com/reductstore/building-a-resilient-reductstore-deployment-with-nginx-59jb</guid>
      <description>&lt;p&gt;If you’re collecting high-rate sensor or video data at the edge and need zero-downtime ingestion and fault-tolerant querying, an &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides/disaster-recovery#active-active-setup" rel="noopener noreferrer"&gt;active–active ReductStore setup&lt;/a&gt;&lt;/strong&gt; fronted by NGINX is a clean, practical pattern.&lt;/p&gt;

&lt;p&gt;This tutorial walks you through the &lt;strong&gt;&lt;a href="https://github.com/reductstore/nginx-resilient-setup" rel="noopener noreferrer"&gt;reference implementation&lt;/a&gt;&lt;/strong&gt;, explains the architecture, and shows production-grade NGINX snippets you can adapt.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We’ll Build&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#what-well-build" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We’ll set up a &lt;strong&gt;ReductStore cluster&lt;/strong&gt; with NGINX as a reverse proxy, separating the &lt;strong&gt;ingress&lt;/strong&gt; and &lt;strong&gt;egress&lt;/strong&gt; layers. This architecture allows for independent scaling of write and read workloads, ensuring high availability and performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno4ylirej4b8tpfrhctg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno4ylirej4b8tpfrhctg.png" alt="NGINX Resilient Deployment" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Ingress layer&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#ingress-layer" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;ingress layer&lt;/strong&gt; handles all writes and replicates data to the egress layer. Its nodes can have limited storage capacity, since they only need to accept writes and replicate data to the &lt;strong&gt;egress&lt;/strong&gt; nodes. They can use fast storage such as NVMe SSDs or even RAM disks, depending on your data volume.&lt;/p&gt;

&lt;h3&gt;
  
  
  Egress layer&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#egress-layer" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;egress layer&lt;/strong&gt; handles all reads and serves data to clients. Its nodes are optimized for read performance and can use larger, slower storage like HDDs or cloud object storage. Each egress node holds a complete copy of the dataset, allowing for high availability and load balancing.&lt;/p&gt;

&lt;h3&gt;
  
  
  NGINX Load Balancer&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#nginx-load-balancer" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;NGINX&lt;/strong&gt; load balancer sits in front of both layers, exposing two stable endpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;http://&amp;lt;host&amp;gt;/ingress&lt;/code&gt; → load balances writes across ingress nodes&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;http://&amp;lt;host&amp;gt;/egress&lt;/code&gt; → load balances reads across egress nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation allows you to scale each layer independently and ensures that writes and reads are handled optimally.&lt;/p&gt;

&lt;p&gt;It is also important to note that NGINX must maintain &lt;strong&gt;session affinity&lt;/strong&gt; (stickiness) for both ingress and egress requests to ensure that queries remain consistent and throughput is maximized.&lt;/p&gt;
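
&lt;p&gt;Conceptually, session affinity means deterministically mapping each client address to one upstream node. A minimal sketch of the idea in Python, with illustrative node names (NGINX's &lt;code&gt;ip_hash&lt;/code&gt; uses its own hash over the address, so this shows the principle, not its algorithm):&lt;/p&gt;

```python
import hashlib

UPSTREAMS = ["egress-1:8383", "egress-2:8383"]  # illustrative node names

def pick_upstream(client_ip):
    """Map a client IP to a fixed upstream node (sticky routing)."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return UPSTREAMS[digest[0] % len(UPSTREAMS)]

# The same client always lands on the same node, so a running query
# is never split across servers mid-stream.
assert pick_upstream("10.0.0.7") == pick_upstream("10.0.0.7")
```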

&lt;h2&gt;
  
  
  Quick Start&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#quick-start" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Clone the example and bring it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/reductstore/nginx-resilient-setup
&lt;span class="nb"&gt;cd &lt;/span&gt;nginx-resilient-setupdocker 
compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start two ingress nodes and two egress nodes behind NGINX, with replication configured from the ingress layer to the egress layer. Check the Docker Compose file for details on how the nodes are set up.&lt;/p&gt;

&lt;p&gt;Now we need to write some data and verify that we can read it back. &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Install the &lt;code&gt;reduct-cli&lt;/code&gt; tool&lt;/strong&gt;&lt;/a&gt; if you haven't already, then run the following commands to set up aliases for the ingress and egress endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add ingress &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:80/ingress &lt;span class="nt"&gt;--token&lt;/span&gt; secret
reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add egress &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:80/egress &lt;span class="nt"&gt;--token&lt;/span&gt; secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then copy some data from our &lt;a href="https://play.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;Demo Server&lt;/strong&gt;&lt;/a&gt; to the ingress layer and read it back from the egress layer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add demo server alias to the CLI&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add play &lt;span class="nt"&gt;-L&lt;/span&gt; https://play.reduct.store &lt;span class="nt"&gt;--token&lt;/span&gt; reductstore
&lt;span class="c"&gt;# Copy data from the demo server to ingress&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;cp &lt;/span&gt;play/datasets ingress/bucket-1 &lt;span class="nt"&gt;--limit&lt;/span&gt; 1000
&lt;span class="c"&gt;# Read/export via egress&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;cp &lt;/span&gt;egress/bucket-1 ./export_folder &lt;span class="nt"&gt;--limit&lt;/span&gt; 1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  NGINX Configuration&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#nginx-configuration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Below is a distilled config you can adapt for open-source NGINX:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upstreams
# Separate pools for ingress (writes) and egress (reads)
upstream reduct_ingress {
    ip_hash;   # stickiness for writes
    server ingress-1:8383 max_fails=3 fail_timeout=10s;
    server ingress-2:8383 max_fails=3 fail_timeout=10s;
    keepalive 64;
}

upstream reduct_egress {
    ip_hash;   # stickiness for queries
    server egress-1:8383 max_fails=3 fail_timeout=10s;
    server egress-2:8383 max_fails=3 fail_timeout=10s;
    keepalive 64;
}

server {
    listen 80;
    server_name _;

    client_max_body_size 512m;
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;

    location /ingress/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://reduct_ingress/;
    }

    location /egress/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://reduct_egress/;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the config above, we define two upstream blocks: &lt;code&gt;reduct_ingress&lt;/code&gt; for handling write requests and &lt;code&gt;reduct_egress&lt;/code&gt; for handling read requests. Each block uses &lt;code&gt;ip_hash&lt;/code&gt; to ensure session affinity, which is crucial for maintaining consistent writes and reads.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReductStore Configuration Notes&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#reductstore-configuration-notes" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The configuration between nodes of each layer is identical. To reach the desired architecture, you need to provision buckets and replication tasks for ingress nodes and buckets only for egress nodes. See the configuration files in the example repo for details.&lt;/p&gt;
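
&lt;p&gt;As a sketch, an ingress node could be provisioned declaratively through environment variables, with a bucket plus a replication task pointing at an egress node. The variable names and values below are illustrative only; check the ReductStore provisioning documentation for the exact syntax:&lt;br&gt;
&lt;/p&gt;

```
# Hypothetical ingress-node provisioning sketch (verify names in the docs)
RS_BUCKET_1_NAME=bucket-1
RS_BUCKET_1_QUOTA_TYPE=FIFO
RS_BUCKET_1_QUOTA_SIZE=100GB

RS_REPLICATION_1_NAME=to-egress
RS_REPLICATION_1_SRC_BUCKET=bucket-1
RS_REPLICATION_1_DST_BUCKET=bucket-1
RS_REPLICATION_1_DST_HOST=http://egress-1:8383
```

&lt;p&gt;An egress node would carry only the bucket lines, with no replication task.&lt;/p&gt;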

&lt;h2&gt;
  
  
  Failure Drills&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#failure-drills" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;When the setup is running, you can simulate failures to see how it behaves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Kill an ingress node&lt;/strong&gt; → writes continue via other ingress nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kill an egress node&lt;/strong&gt; → reads continue via other egress nodes; replication resyncs when it’s back.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simulate total ingress outage&lt;/strong&gt; → analysis continues on egress; for true ingestion continuity, pair with a pilot-light instance in another location.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Runbook&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#runbook" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s a high-level runbook for deploying this architecture in production:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provision ingress + egress ReductStore nodes&lt;/li&gt;
&lt;li&gt;Create buckets and replication tasks&lt;/li&gt;
&lt;li&gt;Expose &lt;code&gt;/ingress&lt;/code&gt; and &lt;code&gt;/egress&lt;/code&gt; via NGINX with &lt;code&gt;ip_hash&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test with demo dataset&lt;/li&gt;
&lt;li&gt;Validate reads from egress&lt;/li&gt;
&lt;li&gt;Run failure drills&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  References&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#references" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/reductstore/nginx-resilient-setup" rel="noopener noreferrer"&gt;NGINX Resilient Setup Example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/docs/guides/disaster-recovery" rel="noopener noreferrer"&gt;Disaster Recovery Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I hope you find this article interesting and useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>tutorials</category>
      <category>nginx</category>
    </item>
    <item>
      <title>ReductStore v1.16.0 Released With New Extensions and Context Replication</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Sat, 30 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/reductstore-v1160-released-with-new-extensions-and-context-replication-562c</link>
      <guid>https://forem.com/reductstore/reductstore-v1160-released-with-new-extensions-and-context-replication-562c</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F034g7npet9bq7xya5pln.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F034g7npet9bq7xya5pln.webp" alt="ReductStore v1.16.0 Released" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.16.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.16.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a high-performance storage and streaming solution designed for storing and managing large volumes of historical data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new in 1.16.0?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#whats-new-in-1160" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The v1.16.0 release introduces two new extensions designed to enhance data workflows for robotics and columnar data, along with support for replicating context records during queries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Querying and Replicating Data with Context&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#querying-and-replicating-data-with-context" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We’ve extended the conditional query syntax with &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/conditional-query/directives" rel="noopener noreferrer"&gt;directives&lt;/a&gt;&lt;/strong&gt; that allow users to modify global query behavior. The first directives introduced are &lt;code&gt;#ctx_before&lt;/code&gt; and &lt;code&gt;#ctx_after&lt;/code&gt;, which enable the inclusion of context records that occur before or after each matching record in a query.&lt;/p&gt;

&lt;p&gt;This feature is particularly useful when analyzing specific events or conditions in your data, as it helps provide a clearer picture of the surrounding context. For instance, you can use these directives to include records from a few seconds before or after an anomaly or incident, aiding in root cause analysis or pattern recognition.&lt;/p&gt;

&lt;p&gt;Here’s an example of how to use the &lt;code&gt;#ctx_before&lt;/code&gt; directive in a query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"#ctx_before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;anomaly_score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$gt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query returns all records with an anomaly score greater than 0.8, along with the context records that occurred within 5 seconds before each matching entry.&lt;/p&gt;
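
&lt;p&gt;Conceptually, &lt;code&gt;#ctx_before&lt;/code&gt; widens every match into a small time window. A self-contained Python sketch of that idea (illustrative only; the server performs this natively):&lt;br&gt;
&lt;/p&gt;

```python
# Sketch of what "#ctx_before": "5s" does conceptually: for every record
# matching the condition, also return records up to 5 s earlier.

def with_context_before(records, matches, before_s=5.0):
    # records: list of (timestamp, payload) tuples, sorted by timestamp
    # matches: timestamps of records that satisfied the query condition
    keep = set()
    for m in matches:
        for ts, _ in records:
            # keep records inside the window [m - before_s, m]
            if ts >= m - before_s and m >= ts:
                keep.add(ts)
    return [r for r in records if r[0] in keep]

records = [(0.0, "a"), (3.0, "b"), (6.0, "c"), (20.0, "d")]
matches = [6.0]  # e.g. the records whose anomaly_score exceeded 0.8
print(with_context_before(records, matches))
# [(3.0, 'b'), (6.0, 'c')]
```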

&lt;h3&gt;
  
  
  New ReductSelect Extension&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#new-reductselect-extension" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore is fundamentally a blob storage system and does not allow direct manipulation of stored data. However, with its extension mechanism, we can introduce new capabilities while keeping the core system simple.&lt;/p&gt;

&lt;p&gt;The new &lt;a href="https://www.reduct.store/docs/extensions/official/select-ext" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductSelect&lt;/strong&gt;&lt;/a&gt; extension enables users to query and transform data stored in CSV or JSON formats, making it easier to build flexible and efficient data processing workflows.&lt;/p&gt;

&lt;p&gt;For example, the following query uses ReductSelect to extract specific columns from CSV data and filter rows using the same conditional syntax available in ReductStore's native query language:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ext"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"select"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"csv"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"has_headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"columns"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"temperature"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"as_labels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"temp"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"humidity"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"when"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;temperature"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$gt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query selects the &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;humidity&lt;/code&gt; columns from a CSV file, renames &lt;code&gt;temperature&lt;/code&gt; to &lt;code&gt;temp&lt;/code&gt;, and filters rows where the temperature is greater than 30°C.&lt;/p&gt;

&lt;p&gt;These simple transformations enable you to ingest structured data very quickly and retrieve only subsets of it for further processing and analysis.&lt;/p&gt;
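
&lt;p&gt;To make the behavior concrete, here is the same transformation written out in plain Python (illustrative only; ReductSelect applies it server-side, so only the matching subset leaves the store):&lt;br&gt;
&lt;/p&gt;

```python
import csv
import io

# Plain-Python equivalent of the ReductSelect query above (illustrative):
# keep the temperature and humidity columns, expose temperature under the
# label name "temp", and drop rows unless temperature exceeds 30.

raw = io.StringIO(
    "temperature,humidity,pressure\n"
    "25.0,40,1013\n"
    "32.5,38,1009\n"
    "31.0,45,1011\n"
)

rows = []
for row in csv.DictReader(raw):
    if float(row["temperature"]) > 30.0:
        rows.append({"temp": row["temperature"], "humidity": row["humidity"]})

print(rows)
# [{'temp': '32.5', 'humidity': '38'}, {'temp': '31.0', 'humidity': '45'}]
```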

&lt;h3&gt;
  
  
  New ReductROS Extension&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#new-reductros-extension" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Another exciting addition is the &lt;a href="https://www.reduct.store/docs/extensions/official/ros-ext" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductROS&lt;/strong&gt;&lt;/a&gt; extension, which provides tools for extracting and transforming data stored in ReductStore into formats compatible with the Robot Operating System (ROS).&lt;/p&gt;

&lt;p&gt;With this extension, you can extract data from MCAP files containing ROS 2 messages and convert it into JSON format, making it easier to analyze and visualize. It also supports transforming raw binary data—such as images—into more accessible formats like JPEG or base64 strings.&lt;/p&gt;

&lt;p&gt;For example, the following query extracts data from a ROS 2 topic and encodes the image payload as a JPEG:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ext": {
    "ros": {
      "extract": {
        "topic": "/camera/image",
        "encode": { "data": "jpeg" }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ReductROS is still in active development, and we plan to expand its capabilities with support for additional ROS message types and more flexible extraction options in future releases. Stay tuned for updates!&lt;/p&gt;

&lt;h2&gt;
  
  
  What next?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#what-next" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We are constantly working on improving ReductStore to provide the best experience for our users. For the next release, we are planning the following features and improvements:&lt;/p&gt;

&lt;h3&gt;
  
  
  Shareable Query Links&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#shareable-query-links" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We are developing a feature that allows users to generate and share links to specific queries in ReductStore.&lt;/p&gt;

&lt;p&gt;This will simplify collaboration by enabling team members to access query results without needing direct access to the ReductStore instance. It will also allow users to download results directly via a link and support integration with external tools and platforms such as &lt;strong&gt;&lt;a href="https://foxglove.dev/" rel="noopener noreferrer"&gt;Foxglove&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration with Grafana&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#integration-with-grafana" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We are also working on a &lt;strong&gt;&lt;a href="https://github.com/reductstore/reduct-grafana" rel="noopener noreferrer"&gt;Grafana plugin&lt;/a&gt;&lt;/strong&gt; that enables users to visualize and analyze data stored in ReductStore directly within Grafana dashboards.&lt;/p&gt;

&lt;p&gt;This integration will provide a seamless experience with Grafana’s powerful visualization tools, allowing you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build custom dashboards using data from ReductStore.&lt;/li&gt;
&lt;li&gt;Monitor your data streams and historical records in real time.&lt;/li&gt;
&lt;li&gt;Visualize labels and data output in JSON or CSV formats.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stay tuned for the first release—coming soon!&lt;/p&gt;




&lt;p&gt;I hope you find those new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
    </item>
    <item>
      <title>Comparing Robotics Visualization Tools: RViz, Foxglove, Rerun</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/comparing-robotics-visualization-tools-rviz-foxglove-rerun-458n</link>
      <guid>https://forem.com/reductstore/comparing-robotics-visualization-tools-rviz-foxglove-rerun-458n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjhuzia3684xf0b3uz09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjhuzia3684xf0b3uz09.png" alt="Intro image" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In robotics development, effective visualization and analysis tools are essential for monitoring, debugging, and interpreting complex sensor data. Platforms like RViz, Foxglove, and Rerun play a key role at the visualization layer of the observability stack. They help developers interact with both live and recorded data. These tools rely on timely, well-structured access to the underlying data streams. That's where &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt; comes in. It handles the data logging, storage, and processing, with a focus on capturing high-volume time-series data efficiently. ReductStore aims to integrate with tools like RViz, Foxglove, and Rerun, supporting a complete observability pipeline: from raw data ingestion to actionable insights.&lt;/p&gt;

&lt;p&gt;Each visualization platform has its unique role in the development workflow. &lt;a href="https://wiki.ros.org/rviz" rel="noopener noreferrer"&gt;&lt;strong&gt;RViz (ROS Visualization) is the classic 3D visualization tool built for the ROS ecosystem&lt;/strong&gt;&lt;/a&gt;, widely used for real-time robot monitoring and debugging. &lt;a href="https://foxglove.dev/about" rel="noopener noreferrer"&gt;&lt;strong&gt;Foxglove is a modern data visualization and inspection platform for robotics and physical AI systems&lt;/strong&gt;&lt;/a&gt;, aiming to simplify how teams collect, visualize, analyze, and manage large volumes of diverse sensor data. &lt;a href="https://rerun.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Rerun is a lightweight, native desktop application focused on fast and efficient visualization of robotics data&lt;/strong&gt;&lt;/a&gt;, enabling developers to quickly explore and debug both live and recorded sensor streams with minimal setup.&lt;/p&gt;

&lt;p&gt;This article compares RViz, Foxglove, and Rerun across key criteria: pricing, cross-platform support, remote collaboration, user interface, extensibility, ROS integration, performance with large datasets, and visualization and analysis features. The goal is to help robotics developers choose the right tool for their specific needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pricing&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#pricing" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; and &lt;strong&gt;RViz 2&lt;/strong&gt; are part of the ROS ecosystem and released under the BSD 3-Clause License. This permissive open-source license allows free use, modification, and redistribution (including for commercial purposes), as long as the original copyright and license notices are preserved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; offers a free tier that includes core features for up to 3 users, 10 devices, and 10 GB of cloud storage. For larger teams or needs (e.g., extra users, storage, private extensions, enterprise integrations), paid subscriptions are available. Pricing is based on the number of users and storage volume, as well as usage and support level. There is also a free academic plan for qualified institutions, which includes more users and storage. Foxglove itself is proprietary software, though it is built on open protocols like MCAP and integrates with open-source ROS tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is fully open-source under both the MIT and Apache 2.0 licenses. There are no current paid plans for the open-source core. The project follows an open-core model: the core visualizer and SDK are free, while a commercial platform is in early access for teams needing cloud-based storage, collaboration tools, advanced analytics, and scalable CI/CD workflows. This commercial layer is designed to build on top of the open-source foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Platform &amp;amp; Collaboration&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#platform--collaboration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; and &lt;strong&gt;RViz 2&lt;/strong&gt; are primarily developed for Linux, where they offer the most stable and reliable performance. RViz 2 also supports Windows and macOS as part of ROS 2, but these versions are less mature and less commonly used. They often require manual setup or compilation, though support continues to improve with newer ROS 2 releases.&lt;/p&gt;

&lt;p&gt;Both RViz versions are local desktop applications and are not designed for remote or multi-user use out of the box. Workarounds like SSH with X11 forwarding, VNC, or running RViz locally while connecting remotely to a ROS system are possible, but they are often fragile, require manual configuration, and may suffer from performance or latency issues depending on the network and hardware.&lt;/p&gt;

&lt;p&gt;To address these limitations, early tools like &lt;code&gt;ROS3D.js&lt;/code&gt; offered browser-based ROS 1 visualization, but they are now mostly unmaintained and incompatible with ROS 2. Modern web visualization is typically done with tools like Foxglove, Webviz, or custom WebSocket-based interfaces. Some cloud robotics platforms also offer remote ROS visualization, though they typically require extra integration work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; runs on Windows, macOS, and Linux, available both as a native desktop app and in a web browser. This gives users the flexibility to work locally or remotely without installing software. The browser version supports multi-user collaboration, allowing teams to share layouts and stream live data securely in real time from any internet-connected device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is a lightweight native desktop application for Windows, macOS, and Linux. It requires minimal setup and enables developers to quickly visualize and debug live or recorded sensor data without needing a browser or complex configuration. Although Rerun does not support multi-user or collaborative features, teams often share log files for offline review. This approach is usually more practical than using remote desktop tools. Rerun also integrates well into development workflows, such as Python environments, which typically require installing Rerun's SDKs and dependencies.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: All three tools support sharing of recorded data, such as rosbag files for RViz &amp;amp; RViz 2 (&lt;code&gt;.bag&lt;/code&gt; for ROS 1 and &lt;code&gt;.db3&lt;/code&gt;, &lt;code&gt;.mcap&lt;/code&gt; for ROS 2), &lt;code&gt;.mcap&lt;/code&gt; files for Foxglove, and &lt;code&gt;.rrd&lt;/code&gt; for Rerun. To support these workflows at scale, you can use &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt; solutions to manage continuous recording, indexing, and long-term storage of these file types across teams and infrastructure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;User Interface&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#user-interface" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; and &lt;strong&gt;RViz 2&lt;/strong&gt; have a powerful but somewhat dated interface that focuses more on functionality than modern design. The learning curve can be steep, especially for beginners, due to the complex layout and the need to manually configure displays, topics, coordinate frames, and tools. The interface is built around multiple panels and dialogs that require careful configuration. It lacks the visual polish and streamlined workflows of newer visualization tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; features a modern, user-friendly interface with flexible dashboards and responsive controls. It is designed to be accessible to users at all experience levels, making it easier to explore, analyze, and share robotics data. The interface relies heavily on graphical elements instead of commands or configuration files, which lowers the entry barrier for users unfamiliar with ROS or robotics tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; offers a clean and straightforward interface focused on efficient data visualization. It balances ease of use with core functionality, providing easy-to-navigate views without overwhelming users. The interface requires minimal setup and supports intuitive exploration of data streams and logs. However, it currently has fewer customization options than RViz or Foxglove.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Extensibility&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#extensibility" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; (both ROS 1 and ROS 2) supports extensibility through C++ plugins, allowing users to develop and integrate custom visualizations, tools, and panels. This plugin architecture makes RViz highly adaptable across robotics domains such as perception, navigation, and manipulation. Many ROS packages include their own RViz plugins by default. However, developing and using plugins requires tight integration with the specific ROS environment. Plugins made for RViz in ROS 1 are not directly compatible with RViz 2; they often require modification or a complete rewrite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; offers extensibility through an Extensions SDK, which allows developers to build React-based visualizations using TypeScript. Extensions can be shared via an online registry and do not require recompilation. Foxglove also provides APIs and libraries in C++, Python, and Rust, primarily for working with the MCAP file format, enabling integration with ROS (both versions), WebSocket streams, and recorded sensor data. Foxglove's ecosystem also supports integration with popular robotics and simulation tools such as NVIDIA Isaac Sim, Velodyne LiDAR, and Jupyter Notebooks, either directly or via external bridges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; focuses on extensibility through SDKs and APIs, especially for Python and other programming environments. It does not support plugin-based customization or drag-and-drop extensions like RViz or Foxglove. Instead, it prioritizes programmatic data embedding and visualization, making it well-suited for users who prefer scripting and code-driven workflows.&lt;/p&gt;

&lt;p&gt;Rerun offers strong Python support, but its core is built with Rust and the egui GUI framework — technologies less familiar to many robotics developers. This can introduce a learning curve and limit low-level customization unless users are comfortable with Rust.&lt;/p&gt;

&lt;p&gt;Rerun does not offer a simple or dynamic plugin system or scripting layer similar to RViz's C++ plugins or Foxglove's TypeScript extensions. This limits rapid prototyping or quick third-party integration.&lt;/p&gt;

&lt;p&gt;Still, its APIs offer robust integration with diverse data sources, including ROS topics, sensor streams, and machine learning frameworks like TensorFlow and PyTorch. This makes Rerun a flexible tool for logging, visualizing, and debugging complex data pipelines.&lt;/p&gt;

&lt;p&gt;Rerun is best suited for developers who prefer programming-driven customization over GUI-based tools. It provides direct control over data ingestion and visualization, enabling highly tailored, dynamic workflows that can grow with project needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;ROS Integration&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#ros-integration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; is tightly integrated with ROS and supports direct interaction with live ROS topics. Originally developed for ROS 1, it was succeeded by &lt;strong&gt;RViz 2&lt;/strong&gt; for ROS 2, and it remains a core visualization tool in many robotics workflows. However, this deep integration limits RViz's usability outside the ROS ecosystem. Both versions depend on a fully functioning ROS environment and are not designed to run independently or handle non-ROS data without conversion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; connects to live ROS systems using &lt;code&gt;foxglove_bridge&lt;/code&gt;, a WebSocket-based bridge designed for this purpose. It runs on the same network as the ROS system and streams real-time ROS messages to Foxglove over WebSocket. This architecture allows remote monitoring and interaction without installing ROS locally. Unlike RViz, Foxglove can be used without a full ROS setup.&lt;/p&gt;

&lt;p&gt;In addition to live data, Foxglove also supports opening and analyzing ROS bag files locally. This makes it easy to review recorded data, visualize topics, and troubleshoot issues offline, without needing an active ROS system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; supports integration with both ROS 1 and ROS 2, enabling live topic visualization and recorded data inspection. For ROS 2, Rerun officially maintains basic example scripts, hosted on GitHub, that use Python (&lt;code&gt;rclpy&lt;/code&gt;) or C++ to subscribe to ROS 2 topics and forward selected data to the Rerun viewer. This is a user-defined bridge rather than a native plugin integration. ROS 1 integration is possible using custom nodes written in either C++ or Python (&lt;code&gt;rospy&lt;/code&gt;), but usually requires more manual setup. Unlike Foxglove, which uses standardized communication protocols like &lt;code&gt;foxglove_websocket&lt;/code&gt; via &lt;code&gt;foxglove_bridge&lt;/code&gt; (and optionally &lt;code&gt;rosbridge&lt;/code&gt;), Rerun ingests data directly through user-defined code and does not rely on ROS-specific bridge protocols. While Rerun avoids protocol-based bridging, it still requires users to write custom nodes that translate ROS messages into its API.&lt;/p&gt;

&lt;p&gt;Rerun is especially useful for visualizing time-synchronized multimodal data, such as sensor readings, 3D geometry, camera images, transforms, and trajectories. However, it currently lacks built-in support for certain ROS-specific features like interactive TF tree exploration, occupancy/grid map overlays, and full URDF-based robot model visualization. Community-maintained examples (e.g., the &lt;code&gt;urdf_loader&lt;/code&gt;) offer partial support for URDF rendering, but do not yet match RViz’s depth or interactivity.&lt;/p&gt;

&lt;p&gt;Rerun also cannot currently open ROS bag files directly (&lt;code&gt;.bag&lt;/code&gt; for ROS 1 or &lt;code&gt;.db3&lt;/code&gt; for ROS 2). Instead, users replay them with &lt;code&gt;rosbag play&lt;/code&gt; or &lt;code&gt;ros2 bag play&lt;/code&gt; and forward selected topics to Rerun using custom Python or C++ bridge nodes. This workflow offers flexibility and performance but requires additional configuration. Rerun uses its own &lt;code&gt;.rrd&lt;/code&gt; log format, which is optimized for high-throughput, time-seekable storage and streaming.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Performance with Large Data&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; is not fully optimized for very large datasets, such as dense point clouds, high-frequency topics, or long message histories. When visualizing large volumes of data, users may encounter performance issues like low frame rates, rendering lag, and high CPU or GPU usage. This happens because RViz continuously renders incoming ROS messages and stores message history in memory, which can quickly overwhelm system resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RViz 2&lt;/strong&gt; improves on this with better multithreading and more efficient message transport via DDS, which boosts performance and scalability in ROS 2 environments. However, RViz 2 still struggles with very dense or high-rate data streams, especially when rendering complex 3D data in real time. To improve performance, users often reduce message history length, filter or downsample data, and disable non-essential displays.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt;, particularly its web version, can underperform RViz in high-data scenarios. Because it runs in a web browser, it's constrained by browser memory limits, single-threaded JavaScript execution, and limited access to hardware acceleration. As a result, visualizing large point clouds or streaming high-frequency topics may lead to lag, dropped frames, or browser instability. These limitations are especially evident when handling continuous 3D data or large bag files.&lt;/p&gt;

&lt;p&gt;Performance can vary depending on the use case and browser environment. The desktop application bypasses some browser limitations and can perform better. However, since it is built on Electron, it still carries the memory and resource-management overhead common to Electron-based apps, though these issues are generally less severe than in the web version. For lighter workloads, such as 2D plots or moderate-frequency telemetry, Foxglove often performs well and benefits from its accessible UI and cross-platform support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is designed with high performance in mind for large-scale, multimodal data workflows. It is a native desktop application written in Rust and uses the modern WGPU rendering backend. This gives it direct access to system resources, helping it efficiently handle dense point clouds, long message histories, and high-frequency data streams. Behind the scenes, Rerun uses techniques such as memory-mapped I/O, zero-copy data handling, and intelligent batching to reduce latency and resource use.&lt;/p&gt;

&lt;p&gt;Although there are only a few formal benchmarks comparing Rerun with RViz or Foxglove, early community feedback and its architecture suggest that Rerun scales effectively with complex datasets. Performance can be further improved by filtering or downsampling data streams according to specific needs. Rerun is currently under active development to expand its capabilities for robotics visualization and analysis.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Best practices for handling large datasets include splitting data files by time or size (e.g., every 1–5 minutes), using separate files for different topic groups, and automatically deleting old files when disk space is low. Chunk compression can also save disk space more efficiently than whole-file compression, but it consumes more CPU and memory, a trade-off between storage and performance.&lt;/p&gt;
&lt;/blockquote&gt;
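&lt;p&gt;The rotation-and-retention idea from the note above can be sketched in a few lines of Python. This is a minimal illustration, not part of any of the tools discussed; the file-naming scheme and disk budget are assumptions.&lt;/p&gt;

```python
import os
import time

def rotate_name(prefix, interval_s=300, now=None):
    """Bucket a recording into a file named by its 5-minute window."""
    now = time.time() if now is None else now
    bucket = int(now // interval_s) * interval_s
    return f"{prefix}_{bucket}.log"

def enforce_quota(directory, max_bytes):
    """Delete the oldest files until the directory fits the disk budget."""
    files = sorted(
        (os.path.join(directory, f) for f in os.listdir(directory)),
        key=os.path.getmtime,  # oldest first
    )
    total = sum(os.path.getsize(f) for f in files)
    while files and total > max_bytes:
        oldest = files.pop(0)
        total -= os.path.getsize(oldest)
        os.remove(oldest)
```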

&lt;h2&gt;&lt;strong&gt;Analysis &amp;amp; Visualization&lt;/strong&gt;&lt;/h2&gt;

&lt;h3&gt;&lt;strong&gt;RViz &amp;amp; RViz 2&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Key Capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Visualization &amp;amp; Bag File Support&lt;/strong&gt; : RViz and RViz 2 support real-time visualization by subscribing to live ROS topics. They also display data from recorded bag files (&lt;code&gt;.bag&lt;/code&gt; for ROS 1, &lt;code&gt;.db3&lt;/code&gt; and &lt;code&gt;.mcap&lt;/code&gt; for ROS 2), when those files are replayed using tools like &lt;code&gt;rosbag play&lt;/code&gt; or &lt;code&gt;ros2 bag play&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Format Support&lt;/strong&gt; : RViz visualizes a wide range of robot state information, including URDF robot models, coordinate transforms (TF), and various sensor data such as LIDAR, IMU, depth, and RGB cameras. It also supports odometry, localization, occupancy grid maps (used in SLAM), navigation data (paths, goals, trajectories), and interactive markers for user interaction. RViz 2 supports the same data types with ROS 2 message compatibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interactive Markers&lt;/strong&gt; : These 3D UI elements enable users to manipulate objects within the visualization: setting navigation goals, adjusting robot end-effector positions, or dragging points for motion planning. Using them requires writing supporting ROS nodes and configuring interaction logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configurable Interface&lt;/strong&gt; : Users can add, remove, and arrange panels, and customize display properties such as colors, shapes, and update rates for each data type. These configurations can be saved and reloaded using &lt;code&gt;.rviz&lt;/code&gt; files, streamlining repetitive workflows like navigation, debugging, or SLAM visualization. Multiple camera control modes (Orbit, FPS, Top-down) allow flexible 3D scene navigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plugin-Based Architecture&lt;/strong&gt; : Developers can extend RViz by creating custom visualizations and tools through C++ plugins. RViz 2 supports plugins too, built on a more modern and modular architecture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiuhsoi39k1qg9verjkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiuhsoi39k1qg9verjkh.png" alt="RViz" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://foxglove.dev/examples" rel="noopener noreferrer"&gt;Data from Mobile Robot Example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Analysis&lt;/strong&gt;: RViz and RViz 2 primarily serve visualization purposes and lack built-in tools for detailed message inspection, conditional logging, or advanced playback controls like pause, step, or speed adjustment. These features typically require external tools such as &lt;code&gt;rqt_bag&lt;/code&gt;, ROS CLI utilities, or third-party RViz plugins (e.g., &lt;code&gt;rosbag_panel&lt;/code&gt;). RViz also does not consistently warn about invalid data (e.g., NaNs or infinities), which can result in missing or misleading visuals. Neither tool is designed for deep offline data analysis; both are best used alongside more specialized logging or analysis solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Time-Series Analysis&lt;/strong&gt; : RViz and RViz 2 do not support time-series plotting or statistical analysis. For these tasks, dedicated tools like &lt;code&gt;rqt_plot&lt;/code&gt;, PlotJuggler (with ROS 2 support), or external environments like Jupyter with Python are more appropriate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Conditional Filtering&lt;/strong&gt; : RViz and RViz 2 display all incoming data without the ability to filter messages based on content or fields. Filtering must be performed upstream, often by custom ROS nodes. Some plugins or panels offer limited filtering but are not general solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Topic Synchronization&lt;/strong&gt; : RViz and RViz 2 subscribe to each topic independently and display messages as they arrive. They do not synchronize data streams from different topics based on timestamps, which can cause misalignment or inconsistencies in time-sensitive visualizations (e.g., camera images, LIDAR scans, TF frames). Synchronization requires external tools like &lt;code&gt;message_filters&lt;/code&gt; or custom nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Built-In Logging or Export&lt;/strong&gt; : RViz and RViz 2 cannot automatically export visualized data or record screencasts. Users are limited to manual screenshots unless using custom plugins or external tools to record sessions or extract data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Multi-Robot Support&lt;/strong&gt; : While RViz can display data from multiple robots using namespaces, the interface is not designed for straightforward multi-robot workflows. RViz 2 includes minor improvements, but still lacks dedicated features for managing multiple robots simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
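&lt;p&gt;Conceptually, the approximate-time matching that &lt;code&gt;message_filters&lt;/code&gt; provides for the synchronization gap above can be sketched in plain Python. This simplified greedy matcher only illustrates the idea; it is not the actual ROS implementation.&lt;/p&gt;

```python
def pair_by_timestamp(stream_a, stream_b, slop=0.05):
    """Greedily match (t, msg) tuples from two sorted streams within `slop` seconds."""
    pairs, j = [], 0
    for t_a, msg_a in stream_a:
        # Advance stream_b past messages that are too old to ever match.
        while j < len(stream_b) and stream_b[j][0] < t_a - slop:
            j += 1
        # Pair if the next candidate falls inside the tolerance window.
        if j < len(stream_b) and abs(stream_b[j][0] - t_a) <= slop:
            pairs.append((msg_a, stream_b[j][1]))
            j += 1
    return pairs

# Example: camera frames vs. LIDAR scans with a 50 ms tolerance.
camera = [(0.00, "img0"), (0.10, "img1")]
lidar = [(0.02, "scan0"), (0.20, "scan1")]
print(pair_by_timestamp(camera, lidar))  # only img0/scan0 are close enough
```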

&lt;h3&gt;&lt;strong&gt;Foxglove&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Key Capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-Modal 3D Visualization&lt;/strong&gt; : Foxglove provides comprehensive 3D visualization for a variety of robotics data, including URDF robot models, TF trees, sensor streams (LIDAR, point clouds, camera feeds), occupancy grids, and navigation elements such as paths, goals, and costmaps. Users can interact with the scene in real time: rotating the view, toggling layers, and focusing on specific frames or topics. Multi-camera views, tooltips, and overlays enhance spatial understanding. Synchronized multi-viewports and flexible camera modes (free, fixed, follow-frame, sensor-aligned) make it possible to examine several spatial data streams side by side. All streams are synchronized through a shared timeline for consistent context across modalities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Topic Synchronization &amp;amp; Playback Timeline&lt;/strong&gt; : Foxglove offers a unified, timestamp-based timeline that synchronizes data from multiple topics. This ensures time-aligned playback of sensor streams like RGB images, depth, point clouds, IMU, and TFs, useful both in real time and with recorded data. The timeline includes playback controls such as pause, frame-by-frame stepping, variable speed, and bookmarks for quickly navigating to key events. This tight time synchronization is a major advantage over RViz, enabling clearer insights into system behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Analysis &amp;amp; Time-Series Tools&lt;/strong&gt; : Foxglove offers a capable set of tools for offline analysis of recorded data. Users can inspect messages in detail, filter them by topic or namespace, and control playback through an integrated timeline with pause, step-by-step navigation, and adjustable speed. To view custom ROS 2 message types with full support, messages are best recorded in or converted to the MCAP format, although Foxglove can open other formats with some limitations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Modular &amp;amp; Configurable Interface&lt;/strong&gt; : The Foxglove UI is fully modular, allowing users to add, remove, duplicate, and rearrange panels such as 3D views, image feeds, message viewers, plots, diagnostics, and consoles. Each panel is highly configurable, with settings for color, scale, transparency, update rate, and filtering. Users can save layouts as JSON files, enabling reproducible setups, role-based dashboards, and fast task switching (e.g., from SLAM debugging to perception analysis). Layouts can be shared across teams or versioned over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Panels &amp;amp; Extensions&lt;/strong&gt; : Foxglove allows users to build custom panels using plugins, enabling specialized interfaces tailored to specific workflows. These panels are embedded directly into the Foxglove interface, keeping everything streamlined and centralized. This is particularly valuable for teams developing internal tools or dashboards for robotics development and testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud &amp;amp; Collaboration&lt;/strong&gt; : Foxglove can be run locally or in the cloud. Its cloud features include shared dashboards, timeline comments, and real-time collaboration, enabling teams to jointly review logs or live data remotely. This makes it particularly useful for distributed development, remote testing, or asynchronous data reviews.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7ob7uf307o200pazpro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7ob7uf307o200pazpro.png" alt="Foxglove" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://foxglove.dev/examples" rel="noopener noreferrer"&gt;Autonomous Robotic Manipulation Example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Real-Time 3D Interactivity&lt;/strong&gt; : Foxglove does not natively support interactive 3D markers like RViz. Users cannot directly manipulate objects in the 3D scene (e.g., setting goals, editing poses, or dragging elements) without building custom extensions. This limits Foxglove's out-of-the-box usability for real-time tasks such as motion planning, teleoperation, or interactive environment setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Advanced Features&lt;/strong&gt; : Foxglove currently lacks certain advanced features found in tools like PlotJuggler. For example, Foxglove does not yet support strict axis ratio locking — a critical feature for accurately visualizing spatial data where maintaining proportional relationships between axes is important. Additionally, Foxglove's built-in data transformation capabilities are limited compared to PlotJuggler's comprehensive suite of statistical and signal-processing tools, such as moving averages, derivatives, filtering, and custom mathematical expressions. These advanced features make PlotJuggler especially useful for detailed signal analysis and fine-grained data manipulation, often essential when debugging sensor data or control signals.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Automated Anomaly Detection&lt;/strong&gt; : Foxglove does not include built-in automated validation or anomaly detection. It does not use ML models or rule-based systems to automatically flag issues. Instead, it offers detailed message introspection and customizable visualizations that enable users to manually identify irregularities such as NaNs, infinities, or out-of-range values. This hands-on approach requires user expertise but provides flexible, in-depth analysis without automated alerts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;&lt;strong&gt;Rerun&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Key Capabilities&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time &amp;amp; Recorded Data Visualization&lt;/strong&gt; : Rerun supports both live-streamed and recorded sensor data visualization with minimal latency. It ingests data via Rust- or Python-based logging SDKs, handling a wide range of robotics sensor modalities including 3D spatial data, camera imagery, numeric time-series, semantic segmentation maps, depth maps, annotations (bounding boxes, keypoints), and textual or categorical event data. Recorded datasets can be replayed with full timeline control for stepwise inspection or smooth playback, aiding in bug reproduction and model validation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaboration &amp;amp; Sharing Features&lt;/strong&gt; : Rerun streamlines collaborative workflows through data export and session sharing via &lt;code&gt;.rrd&lt;/code&gt; files. Teams can share recorded &lt;code&gt;.rrd&lt;/code&gt; files for offline inspection, annotate data using Annotation Context (which supports labeling via class IDs and color mapping), and use shared Recording IDs to log streams from multiple processes or machines into a unified session, as long as the Recording ID is set consistently at the time of logging. Note: merging previously recorded &lt;code&gt;.rrd&lt;/code&gt; files with different Recording IDs offline is currently not supported. Users can also export screenshots (for reports or dashboards) via the CLI or viewer options, depending on the version and available commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customizable &amp;amp; Extensible UI&lt;/strong&gt; : The Rerun Viewer offers a modular, layout-aware interface tailored for tasks such as SLAM debugging, multi-sensor calibration, and performance profiling. Users can save and reload Blueprints — serialized UI configurations that preserve panel layouts, timelines, selected entities, and styling (e.g., color, transparency, size). A full styling hierarchy (override → store → default → fallback) makes it easy to customize visuals without modifying source data. Multiple synchronized views (3D scenes, timelines, 2D plots, raw data inspectors) support comprehensive analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rich 3D Visualization with Spatial Context&lt;/strong&gt; : Built on egui and WGPU, Rerun's 3D viewer efficiently renders large-scale scenes on consumer hardware. It uses an entity-path-based scene graph that reflects the hierarchical kinematic tree, allowing intuitive navigation and inspection of components, sensor frames, trajectories, bounding boxes, segmentation masks, dense point clouds, annotated images, 3D meshes, and time-series plots. Users can customize visual parameters (e.g., color maps, visibility, annotations, rendering modes) and navigate using orbit, zoom, and pan controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexible Time-Series &amp;amp; Event Logging&lt;/strong&gt; : Rerun supports synchronized timeline playback of multiple data streams, using both explicit (user-defined) and implicit (auto-derived) timestamps. It manages multiple time domains (logical/log time and timeline time) to accurately align heterogeneous data sources. Timeline controls include zooming, scrubbing, filtering by entity path or timeline, and detailed event inspection with metadata. Conditional filtering and selective visibility help isolate anomalies or relevant events in complex multi-agent or multi-sensor deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Programmable Data Access &amp;amp; Web Integration&lt;/strong&gt; : The Rerun SDK provides semantic logging primitives (e.g., &lt;code&gt;log_scalar&lt;/code&gt;, &lt;code&gt;log_image&lt;/code&gt;, &lt;code&gt;log_point_cloud&lt;/code&gt;, &lt;code&gt;log_text_entry&lt;/code&gt;, &lt;code&gt;log_tensor&lt;/code&gt;) that render automatically in the Viewer. Rerun uses Apache Arrow for efficient data handling, supporting advanced analysis with tools like Pandas and Jupyter. Direct export to formats like Parquet is supported via the API, making it suitable for both streaming visualization and offline batch analysis. The Viewer is also available as a React component, enabling seamless embedding within React applications and custom web dashboards, though integration with other JavaScript frameworks may require additional adaptation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Emerging Features&lt;/strong&gt; : Experimental capabilities include graph-based views for visualizing system architectures, connectivity, and agent interactions, extending Rerun's utility beyond traditional sensor data visualization into system design and research workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
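&lt;p&gt;A hedged sketch of logging with the Rerun Python SDK is shown below. Exact call names vary by SDK version (older releases exposed &lt;code&gt;log_scalar&lt;/code&gt;-style functions, newer ones use &lt;code&gt;rr.log&lt;/code&gt; with archetypes), so treat the logging lines as an approximation; the spiral generator is just illustrative data.&lt;/p&gt;

```python
def make_spiral(n=100):
    """Generate a simple 3D spiral as a list of (x, y, z) points."""
    import math
    return [(math.cos(i / 10), math.sin(i / 10), i / 50) for i in range(n)]

try:
    import rerun as rr  # pip install rerun-sdk (assumption: SDK installed)
    rr.init("spiral_demo")                   # start a recording session
    for step, point in enumerate(make_spiral()):
        rr.set_time_sequence("step", step)   # explicit, user-defined timeline
        rr.log("points", rr.Points3D([point]))  # 3D point archetype (newer API)
except ImportError:
    pass  # rerun-sdk not installed; make_spiral() alone still runs
```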

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpkl31ajhsla9aed5yu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpkl31ajhsla9aed5yu8.png" alt="Rerun" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://rerun.io/examples" rel="noopener noreferrer"&gt;nuScenes Example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Built-In Advanced Analytics&lt;/strong&gt; : Rerun focuses primarily on visualization and lacks integrated statistical analysis, anomaly detection, or expression-based plotting features. In contrast, Foxglove provides richer analytics, including expression plots and integration with monitoring systems like Prometheus.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Not Optimized for Live Robot Control&lt;/strong&gt; : Although it supports real-time data streaming, Rerun is not designed for robot teleoperation or control input interaction. RViz and Foxglove offer more mature tools for monitoring and interacting with live robots.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Native Support for Navigation and SLAM Maps&lt;/strong&gt; : Unlike RViz, Rerun does not natively visualize occupancy grids, costmaps, or SLAM results, limiting its utility for path planning or localization workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Real-Time Collaboration&lt;/strong&gt; : While Rerun supports offline session sharing, it lacks live multi-user collaboration features such as synchronized remote views or cloud-hosted live sessions, which are available in Foxglove.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Visualization of Large-Scale System Architectures&lt;/strong&gt; : Rerun's entity-based model focuses on spatial and temporal data but does not yet offer comprehensive tools for exploring complex system communication graphs or architecture diagrams interactively.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;This article provided a detailed comparison of RViz, Foxglove, and Rerun, evaluating them across practical dimensions: pricing; platform and collaboration support; user interface; extensibility; ROS integration; performance with large datasets; and analysis and visualization capabilities. By outlining their strengths and limitations, we offer a clear perspective to help robotics engineers and developers choose the right tool for their specific needs.&lt;/p&gt;

&lt;p&gt;Choosing the right tool depends on your context: use RViz for real-time ROS development and interactive debugging, Foxglove for collaborative data analysis, time-synchronized playback, and remote team workflows, and Rerun for fast, developer-centric visualization of structured data in programmatic pipelines. In practice, many robotics teams find that combining these tools enables more effective development and validation across different stages of their workflows.&lt;/p&gt;




&lt;p&gt;We hope this comparison helps you make informed decisions and inspires you to keep exploring better tools and workflows. If you have questions, feedback, or insights to share, join the conversation on the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>rviz</category>
      <category>foxglove</category>
      <category>rerun</category>
    </item>
    <item>
      <title>Getting Started with LeRobot</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 27 May 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/getting-started-with-lerobot-4i0a</link>
      <guid>https://forem.com/reductstore/getting-started-with-lerobot-4i0a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct5n78g9tpnlcaxt05nd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct5n78g9tpnlcaxt05nd.png" alt="Intro image" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/lerobot" rel="noopener noreferrer"&gt;&lt;strong&gt;LeRobot is an open-source project by Hugging Face&lt;/strong&gt;&lt;/a&gt; that makes it easy to explore the world of robotics with machine learning, even if you’ve never done anything like this before. It gives you pre-trained models, real-world data, and simple tools built with PyTorch, a popular machine learning framework. Whether you're just curious or ready to try your first robotics project, LeRobot is a great place to start.&lt;/p&gt;

&lt;h2&gt;What You Need to Get Started&lt;/h2&gt;

&lt;p&gt;You can run everything in a simulation right from your browser — no robot, no installations, and no powerful computer needed. We’ll be using Google Colab, a free cloud-based coding environment.&lt;/p&gt;

&lt;p&gt;Here’s what you’ll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Account:&lt;/strong&gt; To use Colab, you need a Google account. If you use Gmail, you already have one. If not, you can &lt;a href="https://colab.research.google.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;create a Google account&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hugging Face Account:&lt;/strong&gt; LeRobot uses models and datasets hosted on Hugging Face. To access all features, you'll need to &lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;&lt;strong&gt;create a Hugging Face account&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Preparation&lt;/h3&gt;

&lt;p&gt;To get started with LeRobot in Google Colab, first open Google Colab and sign in with your Google account. Once you're signed in, click the &lt;code&gt;New Notebook&lt;/code&gt; button to create a blank notebook — this is where you’ll run all your code.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All the commands below are already written out in a &lt;a href="https://colab.research.google.com/gist/AnthonyCvn/f02f12ce113f0e2fcd773fd39d0e1dfa/getting-started-with-lerobot.ipynb" rel="noopener noreferrer"&gt;&lt;strong&gt;ready-made Google Colab notebook&lt;/strong&gt;&lt;/a&gt; you can use to follow along.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;Step 1: Switch to GPU&lt;/h4&gt;

&lt;p&gt;LeRobot can use a GPU (Graphics Processing Unit), which makes things run faster, especially for simulation and machine learning tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the Colab menu, click &lt;code&gt;Runtime&lt;/code&gt; &amp;gt; &lt;code&gt;Change runtime type&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;code&gt;Hardware accelerator&lt;/code&gt;, select &lt;code&gt;GPU&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;Save&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now your notebook is using a free GPU provided by Google. Note that GPU access in Colab is limited in time and resources, depending on whether you’re using the free or PRO version.&lt;/p&gt;
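&lt;p&gt;To confirm the GPU is actually active, you can run a quick check in a new cell. This snippet assumes PyTorch, which Colab preinstalls; the helper function and message text are illustrative, not part of Colab or LeRobot.&lt;/p&gt;

```python
# Hypothetical helper to report the active accelerator in a Colab cell.
def describe_accelerator(available, name=None):
    """Return a human-readable accelerator summary."""
    if available:
        return f"GPU: {name}"
    return "No GPU detected - check Runtime > Change runtime type"

try:
    import torch  # preinstalled on Colab runtimes
    has_gpu = torch.cuda.is_available()
    gpu_name = torch.cuda.get_device_name(0) if has_gpu else None
    print(describe_accelerator(has_gpu, gpu_name))
except ImportError:
    print(describe_accelerator(False))
```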

&lt;h4&gt;Step 2: Clone the LeRobot Repository&lt;/h4&gt;

&lt;p&gt;Run this command in a Colab code cell to download LeRobot from GitHub. This repository is public, so you don’t need a GitHub account to clone it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;git clone https://github.com/huggingface/lerobot.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A new folder named &lt;code&gt;lerobot&lt;/code&gt; will appear in the file browser on the left (click the folder icon to open it).&lt;/p&gt;

&lt;p&gt;For now, you can simply start with the &lt;code&gt;lerobot/examples&lt;/code&gt; folder. It contains ready-to-use scripts that let you try out real robot tasks using pre-trained models — no setup or deep knowledge needed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Colab’s environment is temporary. If you restart the runtime, the files will be deleted and you’ll need to run the setup steps again. It’s best to keep these commands handy at the top of your notebook.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;Step 3: Move into the LeRobot folder&lt;/h4&gt;

&lt;p&gt;Now that the LeRobot files are downloaded, we need to tell Python to work inside that folder. Run this command in a new cell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;%cd lerobot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This changes the current working directory to the &lt;code&gt;lerobot&lt;/code&gt; folder, where all the code and scripts are located.&lt;/p&gt;

&lt;h4&gt;Step 4: Install LeRobot and Its Dependencies&lt;/h4&gt;

&lt;p&gt;After cloning the repository and switching to the &lt;code&gt;lerobot&lt;/code&gt; folder, the next step is to install everything LeRobot needs to work. Run this command in a new cell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="s2"&gt;".[pusht]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running it, LeRobot will be ready to use in your notebook. All necessary tools and libraries will be installed automatically.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you see any errors during installation, you may just need to install a missing library.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We recommend installing the &lt;code&gt;hf_xet&lt;/code&gt; library for faster and more reliable downloads from Hugging Face:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;hf_xet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tool helps speed up access to models and datasets, especially when loading large files in Colab.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running a Pre-Trained Model&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#running-a-pre-trained-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;LeRobot includes several pre-trained models, so you can try robot tasks without needing to train anything yourself. These models are already trained on specific tasks and ready to go.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;π0:&lt;/strong&gt; A powerful model that combines vision, language, and action. It’s designed for general robot tasks, for example, following instructions or reacting to what it sees.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;π0 FAST:&lt;/strong&gt; A faster, optimized version of the π0 model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Diffusion Policy:&lt;/strong&gt; A model trained on the Push-T dataset, where a robot learns to push a T-shaped object toward a target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VQ-BeT:&lt;/strong&gt; Another model trained on the same Push-T task, but it uses a different architecture. You can run both and compare how they perform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ACT:&lt;/strong&gt; A model trained for fine manipulation tasks that require high precision, like inserting objects or handling small parts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, the example script runs the Diffusion Policy model on the Push-T task. To try it out, run this command in a code cell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/2_evaluate_pretrained_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run the command, LeRobot will automatically download the pre-trained model, set up a simulation environment, and run the robot as it tries to complete the task. Throughout the process, you’ll see messages showing what’s happening step-by-step. A short video will also be saved so you can see how the robot performed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; When running dataset downloads or model loading multiple times in a row, you might occasionally encounter temporary access restrictions from Hugging Face. This is normal and part of their rate limiting to prevent abuse.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What You’ll See in the Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the model runs, Colab will print some logs in the output below the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{'observation.image': PolicyFeature(type=&amp;lt;FeatureType.VISUAL: 'VISUAL'&amp;gt;, shape=(3, 96, 96)), 'observation.state': PolicyFeature(type=&amp;lt;FeatureType.STATE: 'STATE'&amp;gt;, shape=(2,))}
Dict('agent_pos': Box(0.0, 512.0, (2,), float64), 'pixels': Box(0, 255, (96, 96, 3), uint8))
{'action': PolicyFeature(type=&amp;lt;FeatureType.ACTION: 'ACTION'&amp;gt;, shape=(2,))}
Box(0.0, 512.0, (2,), float32)
step=0 reward=np.float64(0.0) terminated=False
step=1 reward=np.float64(0.0) terminated=False
...
step=108 reward=np.float64(0.9727550736734778) terminated=False
step=109 reward=np.float64(0.9969248691240408) terminated=False
step=110 reward=np.float64(1.0) terminated=True
Success!
IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (680, 680) to (688, 688) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to 1 (risking incompatibility).
Video of the evaluation is available in 'outputs/eval/example_pusht_diffusion/rollout.mp4'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what they mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observations:&lt;/strong&gt; What kind of data the robot receives, like the shape and type of images or sensor readings it expects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Actions:&lt;/strong&gt; The format of the commands the robot will output to control its movements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reward:&lt;/strong&gt; A number that shows how well the robot is doing (higher = better).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step-by-step info:&lt;/strong&gt; Shows progress, like step 108, reward 0.97, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Success or Failure:&lt;/strong&gt; Whether the robot completed the task. In our experiments, the same pre-trained model produced different results. It didn’t always complete the task successfully.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
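&lt;p&gt;If you run the evaluation several times and want to tally results automatically, the printed lines are easy to parse with standard Python. The helper below is purely illustrative of the log format shown above; it is not part of LeRobot:&lt;/p&gt;

```python
import re

def parse_rollout_log(log):
    """Extract (step, reward, terminated) tuples from evaluation output."""
    pattern = re.compile(
        r"step=(\d+) reward=np\.float64\(([\d.]+)\) terminated=(True|False)"
    )
    return [(int(s), float(r), t == "True") for s, r, t in pattern.findall(log)]

# A few lines copied from the output above.
log = """step=108 reward=np.float64(0.9727550736734778) terminated=False
step=109 reward=np.float64(0.9969248691240408) terminated=False
step=110 reward=np.float64(1.0) terminated=True
Success!"""

steps = parse_rollout_log(log)
final_step, final_reward, terminated = steps[-1]
print(final_step, final_reward, terminated)  # 110 1.0 True
```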

&lt;p&gt;You may also see a &lt;strong&gt;warning&lt;/strong&gt; about video resizing. It’s normal and doesn’t affect how the robot runs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where’s the Video?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The video is saved in &lt;code&gt;lerobot/outputs/eval/example_pusht_diffusion/rollout.mp4&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It shows the robot pushing the T-shaped object in simulation using the actions generated by the model. To download it, find the file in the file browser, click the three dots to the right of the filename, and select &lt;code&gt;Download&lt;/code&gt;. Then you can watch it with any video player.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qcssdzb2b4udiqod4dt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qcssdzb2b4udiqod4dt.gif" alt="GIF" width="688" height="688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Want to Try a Different Model?&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#want-to-try-a-different-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can switch from Diffusion Policy to VQ-BeT, which is trained on the same task. It’s a good way to explore how different models perform.&lt;/p&gt;

&lt;p&gt;Here’s how you can do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the file browser, open the file &lt;code&gt;lerobot/examples/2_evaluate_pretrained_policy.py&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Double-click the file to open it in the editor pane on the right.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the following lines in the script:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;33&lt;/span&gt; &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;lerobot.common.policies.vqbet.modeling_vqbet&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;VQBeTPolicy&lt;/span&gt;
&lt;span class="c1"&gt;# Optional: change output path to avoid overwriting results
&lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt; &lt;span class="n"&gt;output_directory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs/eval/example_vqbet_pusht&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="mi"&gt;43&lt;/span&gt; &lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lerobot/vqbet_pusht&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="mi"&gt;47&lt;/span&gt; &lt;span class="n"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;VQBeTPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Save the file by pressing &lt;code&gt;Ctrl+S&lt;/code&gt; (or &lt;code&gt;Cmd+S&lt;/code&gt; on Mac).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After saving, re-run the code cell that runs the script:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/2_evaluate_pretrained_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will now evaluate the VQ-BeT model instead of the Diffusion Policy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training a Model&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#training-a-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;LeRobot isn’t just for running pre-trained models; it also lets you try training one yourself. You can train the same type of model used by the official LeRobot team: the Diffusion Policy on the Push-T task.&lt;/p&gt;

&lt;p&gt;Since we’re using Google Colab, you have access to a free GPU. This matters because training without a CUDA-enabled GPU can be very slow: in our tests on a Mac with Apple Silicon (using the MPS backend), one run took up to two hours to complete just 20 steps.&lt;/p&gt;

&lt;p&gt;By default, the training script runs for 5000 steps, which takes some time. In our case, the run took about an hour on Colab’s GPU. If you want to try it faster, you can reduce the steps to, say, 100. This will still give you a good idea of how training works.&lt;/p&gt;
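&lt;p&gt;As a rough sanity check of how long a shorter run should take, you can scale from the timing above (these are back-of-the-envelope numbers based on our single Colab run, not a benchmark):&lt;/p&gt;

```python
# Rough throughput from the full run reported above: 5000 steps in about an hour.
seconds_per_step = 60 * 60 / 5000        # ~0.72 s per step on Colab's GPU
short_run = 100 * seconds_per_step       # ~72 s for a 100-step trial run
print(round(seconds_per_step, 2), round(short_run))  # 0.72 72
```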

&lt;p&gt;In the file &lt;code&gt;lerobot/examples/3_train_policy.py&lt;/code&gt;, find and change this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;42&lt;/span&gt; &lt;span class="n"&gt;training_steps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the training script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/3_train_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start training the Diffusion Policy model on the Push-T task using the &lt;code&gt;lerobot/pusht&lt;/code&gt; dataset.&lt;/p&gt;

&lt;p&gt;As the script runs, you’ll see lines like this in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;step: 0 loss: 1.161
step: 1 loss: 5.978
...
step: 4998 loss: 0.048
step: 4999 loss: 0.037
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each line shows the current training step and the corresponding loss value. A decreasing loss generally means the model is learning.&lt;/p&gt;
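&lt;p&gt;Loss values can bounce around from one step to the next (notice the jump from 1.161 to 5.978 above), so it’s often more informative to look at a smoothed trend than at individual steps. A minimal, dependency-free sketch:&lt;/p&gt;

```python
# Smooth the raw per-step losses with a simple moving average before
# judging the trend.
def moving_average(values, window=3):
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]

# A handful of loss values, including the early and late steps shown above.
losses = [1.161, 5.978, 3.2, 1.5, 0.9, 0.4, 0.2, 0.048, 0.037]
smoothed = moving_average(losses)
print(round(smoothed[0], 3), round(smoothed[-1], 3))  # 3.446 0.095
```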

&lt;p&gt;&lt;strong&gt;Where the Trained Model is Saved&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LeRobot will save your trained model in &lt;code&gt;lerobot/outputs/train/example_pusht_diffusion&lt;/code&gt;. Inside the folder, you’ll find two files that represent your trained Diffusion Policy: one with the model’s weights and one with its settings. They will be used automatically when you run the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Evaluating Your Trained Model&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#evaluating-your-trained-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Now let’s see your model in action.&lt;/p&gt;

&lt;p&gt;Open the file &lt;code&gt;lerobot/examples/2_evaluate_pretrained_policy.py&lt;/code&gt; and change the code so it loads your trained model instead of the pre-trained one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;33&lt;/span&gt; &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;lerobot.common.policies.diffusion.modeling_diffusion&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DiffusionPolicy&lt;/span&gt;
&lt;span class="mi"&gt;36&lt;/span&gt; &lt;span class="n"&gt;output_directory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs/eval/example_pusht_diffusion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Comment out the old pretrained model path
&lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt; &lt;span class="c1"&gt;# pretrained_policy_path = "lerobot/diffusion_pusht"
# Use your newly trained model path instead
&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt; &lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs/train/example_pusht_diffusion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="mi"&gt;47&lt;/span&gt; &lt;span class="n"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DiffusionPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To avoid overwriting the previous video, give your video a new name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;136&lt;/span&gt; &lt;span class="n"&gt;video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;output_directory&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rollout_our_model.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the evaluation script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/2_evaluate_pretrained_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What You’ll See&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The script will run your model in simulation and save a video to &lt;code&gt;lerobot/outputs/eval/example_pusht_diffusion/rollout_our_model.mp4&lt;/code&gt;, which you can open later to see how your model behaved.&lt;/p&gt;

&lt;p&gt;You’ll also see logs like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
step=297 reward=np.float64(0.0) terminated=False
step=298 reward=np.float64(0.0) terminated=False
step=299 reward=np.float64(0.0) terminated=False
Failure!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means the robot didn’t complete the task successfully. Even if you trained for 5000 steps, your model may still perform noticeably worse than the official pre-trained model. That’s normal: the LeRobot team trained their models with much more compute and fine-tuning. In comparison, your version might show less precise or more random movements. It’s a good first step, though, and shows the entire training and evaluation pipeline working end-to-end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Downloading a Dataset&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#downloading-a-dataset" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To train a model, we need one key ingredient: data. This includes video from the robot’s cameras, joint positions, and the actions it took over time.&lt;/p&gt;

&lt;p&gt;LeRobot makes this part easy. It comes with a growing collection of high-quality robot learning datasets you can download and explore with just a few lines of code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/datasets?other=LeRobot" rel="noopener noreferrer"&gt;&lt;strong&gt;Browse all available datasets here&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To download and inspect a dataset, run this example script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!python examples/1_load_lerobot_dataset.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, this will download the &lt;code&gt;lerobot/aloha_mobile_cabinet&lt;/code&gt; dataset.&lt;/p&gt;

&lt;p&gt;But you’re not limited to just one. If you’d like to try the dataset used by the models in the previous section (Diffusion Policy and VQ-BeT), open the script and change the &lt;code&gt;repo_id&lt;/code&gt; variable like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="n"&gt;repo_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lerobot/pusht&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then re-run the script. This will download the &lt;a href="https://huggingface.co/datasets/lerobot/pusht" rel="noopener noreferrer"&gt;&lt;strong&gt;Push-T dataset&lt;/strong&gt;&lt;/a&gt;, the same one used to train both models you just ran earlier. You’ll now have access to the raw data they were trained on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip: Clean Up the Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The dataset script prints a lot of information, which can be overwhelming for beginners. To make things easier, you can comment out some of the verbose print lines.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To comment out multiple lines quickly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windows/Linux:&lt;/strong&gt; Press &lt;code&gt;Ctrl + /&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;macOS:&lt;/strong&gt; Press &lt;code&gt;Cmd + /&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Suggested lines to comment out include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;38&lt;/span&gt;  &lt;span class="c1"&gt;# print("List of available datasets:")
&lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;  &lt;span class="c1"&gt;# pprint(lerobot.available_datasets)
&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;  &lt;span class="c1"&gt;# hub_api = HfApi()
&lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;  &lt;span class="c1"&gt;# repo_ids = [info.id for info in hub_api.list_datasets(task_categories="robotics", tags=["LeRobot"])]
&lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;  &lt;span class="c1"&gt;# pprint(repo_ids)
&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;  &lt;span class="c1"&gt;# print("Features:")
&lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;  &lt;span class="c1"&gt;# pprint(ds_meta.features)
&lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;  &lt;span class="c1"&gt;# print(ds_meta)
&lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;  &lt;span class="c1"&gt;# dataset = LeRobotDataset(repo_id, episodes=[0, 10, 11, 23])
&lt;/span&gt;&lt;span class="mi"&gt;76&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Selected episodes: {dataset.episodes}")
&lt;/span&gt;&lt;span class="mi"&gt;77&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of episodes selected: {dataset.num_episodes}")
&lt;/span&gt;&lt;span class="mi"&gt;78&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of frames selected: {dataset.num_frames}")
&lt;/span&gt;&lt;span class="mi"&gt;82&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of episodes selected: {dataset.num_episodes}")
&lt;/span&gt;&lt;span class="mi"&gt;83&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of frames selected: {dataset.num_frames}")
&lt;/span&gt;&lt;span class="mi"&gt;86&lt;/span&gt;  &lt;span class="c1"&gt;# print(dataset.meta)
&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;  &lt;span class="c1"&gt;# print(dataset.hf_dataset)
&lt;/span&gt;&lt;span class="mi"&gt;111&lt;/span&gt; &lt;span class="c1"&gt;# pprint(dataset.features[camera_key])
&lt;/span&gt;&lt;span class="mi"&gt;113&lt;/span&gt; &lt;span class="c1"&gt;# pprint(dataset.features[camera_key])
&lt;/span&gt;&lt;span class="mi"&gt;119&lt;/span&gt; &lt;span class="c1"&gt;# delta_timestamps = {
#... all lines
&lt;/span&gt;&lt;span class="mi"&gt;148&lt;/span&gt; &lt;span class="c1"&gt;# break
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can always uncomment them later if you want a deeper look into the dataset structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s Inside the Push-T Dataset?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once downloaded, you’ll see a summary like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Number of episodes:&lt;/strong&gt; 206. An episode is like one full attempt by the robot to complete a task, one round of practice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Frames per episode (avg.):&lt;/strong&gt; ~124. Each episode is made up of about 124 images (or frames), showing what the robot saw over time as it moved and acted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recording speed:&lt;/strong&gt; 10 FPS. These images were recorded at 10 frames per second, like a slow-motion video. It lets you see how the robot moved step by step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Camera views:&lt;/strong&gt; &lt;code&gt;observation.image&lt;/code&gt;. Each frame is taken from the robot’s camera, and labeled as &lt;code&gt;observation.image&lt;/code&gt; in the data. It’s what the robot sees.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task description:&lt;/strong&gt; Push the T-shaped block onto the T-shaped target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image format:&lt;/strong&gt; Each image is stored as a PyTorch tensor (a data structure used in machine learning).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
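&lt;p&gt;Notice in the evaluation logs earlier that the simulator returns &lt;code&gt;pixels&lt;/code&gt; as a (96, 96, 3) uint8 array, while the policy’s &lt;code&gt;observation.image&lt;/code&gt; has shape (3, 96, 96). Bridging the two is a standard channels-last to channels-first conversion. Here is a minimal NumPy sketch of that idea; LeRobot itself works with PyTorch tensors, and the exact value scaling it applies is an assumption here:&lt;/p&gt;

```python
import numpy as np

# A fake 96x96 RGB frame, laid out the way the simulator returns it:
# height x width x channels, uint8 values in [0, 255].
pixels = np.random.randint(0, 256, size=(96, 96, 3), dtype=np.uint8)

# Reorder to channels-first and cast to float (scaling to [0, 1] is a
# common convention, assumed here rather than taken from LeRobot's code).
image = pixels.transpose(2, 0, 1).astype(np.float32) / 255.0

print(image.shape)  # (3, 96, 96)
print(image.dtype)  # float32
```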

&lt;p&gt;LeRobot downloads the dataset into a special hidden cache folder inside the Colab environment at &lt;code&gt;/root/.cache/huggingface/lerobot/lerobot/pusht/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This folder contains all the data files: observations, actions, metadata, and even video recordings. Since it’s hidden by default, follow these steps to access it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the eye icon at the top of the file browser to show hidden folders like &lt;code&gt;.cache&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the folder icon with two dots just above the &lt;code&gt;lerobot&lt;/code&gt; folder.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4satqv0pbrp2qw4daeb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4satqv0pbrp2qw4daeb.png" alt="Folders" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Now navigate through the folders like this: &lt;code&gt;root&lt;/code&gt; &amp;gt; &lt;code&gt;.cache&lt;/code&gt; &amp;gt; &lt;code&gt;huggingface&lt;/code&gt; &amp;gt; &lt;code&gt;lerobot&lt;/code&gt; &amp;gt; &lt;code&gt;lerobot&lt;/code&gt; &amp;gt; &lt;code&gt;pusht&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To go back to the &lt;code&gt;lerobot&lt;/code&gt; folder, look for the &lt;code&gt;content&lt;/code&gt; folder (it's at the same level as &lt;code&gt;root&lt;/code&gt;) and go inside.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Dataset Folder Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's what the folder structure typically looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lerobot/pusht
├── README.md
├── .cache/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       └── ...  # More episodes
├── meta/
├── videos/
│   └── chunk-000/
│       └── observation.image/
│           ├── episode_000000.mp4
│           └── ...  # More videos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;README.md&lt;/code&gt;: A short file that explains what’s inside the dataset and what it’s for.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;data/&lt;/code&gt;: This folder contains one &lt;code&gt;.parquet&lt;/code&gt; file per episode, in which the robot logs everything it experienced.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;meta/&lt;/code&gt;: This folder contains helpful background info, such as episode descriptions, task goals, and performance stats, which LeRobot uses to organize and analyze the data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;videos/&lt;/code&gt;: Short &lt;code&gt;.mp4&lt;/code&gt; videos showing the robot’s camera view during each episode. These are great if you want to see what the robot was doing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;.cache/&lt;/code&gt;: A hidden folder used by LeRobot internally.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
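&lt;p&gt;The episode files follow the zero-padded naming scheme visible in the tree above, which makes it easy to build paths programmatically. A small sketch of that convention — the helper is hypothetical, and how episodes are split across chunks beyond &lt;code&gt;chunk-000&lt;/code&gt; is not shown here, so the chunk index is left to the caller:&lt;/p&gt;

```python
def episode_paths(episode_index, chunk=0):
    """Build the data and video paths for one episode, relative to the
    dataset root, following the zero-padded layout shown above."""
    ep = f"episode_{episode_index:06d}"
    return {
        "data": f"data/chunk-{chunk:03d}/{ep}.parquet",
        "video": f"videos/chunk-{chunk:03d}/observation.image/{ep}.mp4",
    }

paths = episode_paths(0)
print(paths["data"])   # data/chunk-000/episode_000000.parquet
print(paths["video"])  # videos/chunk-000/observation.image/episode_000000.mp4
```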

&lt;h2&gt;
  
  
  Visualize a Dataset&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#visualize-a-dataset" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once your dataset is loaded, it’s super helpful to see what the robot actually experienced. LeRobot comes with an easy-to-use, interactive visualization tool that runs right in your browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the Built-in Viewer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can open it here: &lt;a href="https://huggingface.co/spaces/lerobot/visualize_dataset" rel="noopener noreferrer"&gt;&lt;strong&gt;Visualize Dataset (v2.0+ latest dataset format)&lt;/strong&gt;&lt;/a&gt; or use the older version: &lt;a href="https://huggingface.co/spaces/lerobot/visualize_dataset_v1.6" rel="noopener noreferrer"&gt;&lt;strong&gt;Visualize Dataset (v1.6 old dataset format)&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the viewer, just enter the name of a dataset, like &lt;code&gt;lerobot/pusht&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Can You See?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Watch each episode like a video from the robot’s point of view.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore graphs showing how the robot moved and what actions it took. For example, in the &lt;code&gt;lerobot/pusht&lt;/code&gt; dataset, the viewer displays Motor 0 and Motor 1 — both state and action — as four curves plotted over time. This allows you to see how the robot's decisions changed from frame to frame during each episode.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziuy95474ovgo9qwiok8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziuy95474ovgo9qwiok8.png" alt="Motors" width="774" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#next-steps" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You’ve just taken your first steps into robotics and machine learning with LeRobot, so what can you do next?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Try different models and tasks:&lt;/strong&gt; LeRobot supports several models and scenarios. For more challenging examples, check out the &lt;code&gt;lerobot/examples/advanced&lt;/code&gt; folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run your own experiment:&lt;/strong&gt; Once you’re familiar with the basic workflow, you can try a simple experiment: change the dataset slightly or load a new one. Even a small change, such as selecting a different set of episodes, will help you see how data affects the model’s behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Grow your projects further:&lt;/strong&gt; As you work more with LeRobot and collect larger amounts of data, organizing and managing that data becomes important. This can feel overwhelming at first, but understanding the basics of data management will save you time and frustration later. We recommend checking out this beginner-friendly guide, &lt;a href="https://www.reduct.store/blog/store-robotic-data" rel="noopener noreferrer"&gt;&lt;strong&gt;How to Store and Manage Robotics Data&lt;/strong&gt;&lt;/a&gt;. It explains simple strategies for handling robot data efficiently. You don’t need to master this now, but keeping these ideas in mind will help you scale your experiments smoothly when you’re ready.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we saw how LeRobot lets you explore robotics and machine learning without needing a physical robot. You ran pre-trained models in simulation, worked with real robot data, and even trained a simple model — all within Colab.&lt;/p&gt;

&lt;p&gt;What many find surprising is how accessible this has become. Tasks that once required expensive hardware and deep skills can now be done with just a browser and a few lines of code. Seeing a robot act based on what it sees is exciting, and you can go further by modifying, training, and evaluating models yourself. LeRobot is a great way to start new projects and dive into robotics.&lt;/p&gt;




&lt;p&gt;We hope this tutorial inspires you to keep exploring. If you have any questions or ideas to share, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>tutorials</category>
      <category>robotics</category>
      <category>lerobot</category>
    </item>
    <item>
      <title>Getting Started with MetriCal</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 13 May 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/getting-started-with-metrical-215i</link>
      <guid>https://forem.com/reductstore/getting-started-with-metrical-215i</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp5tq3iz8hkst9ivlaag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp5tq3iz8hkst9ivlaag.png" alt="Intro image" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sensor calibration&lt;/strong&gt; is the process of determining the precise mathematical parameters that describe how a sensor perceives or measures the physical world. By comparing sensor outputs to known reference values, we can correct measurement errors and ensure data from different sensors align accurately.&lt;/p&gt;

&lt;p&gt;There are two main categories of calibration parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intrinsic parameters (Intrinsics):&lt;/strong&gt; These capture the internal characteristics of a sensor, such as lens distortion in cameras or bias and scaling errors in IMUs. Calibrating intrinsics helps eliminate built-in measurement errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extrinsic parameters (Extrinsics):&lt;/strong&gt; These define a sensor's position and orientation relative to another sensor or the environment. Accurate extrinsics are essential for transforming and combining data from multiple sensors into a shared coordinate system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
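&lt;p&gt;To make the two parameter types concrete, here is a minimal sketch (with made-up numbers, not from any real calibration) of how extrinsics move a LiDAR point into a camera frame and intrinsics then project it to a pixel:&lt;/p&gt;

```python
import numpy as np

# Made-up numbers for illustration only; real values come from calibration.
K = np.array([[430.0,   0.0, 424.0],    # intrinsics: focal length f
              [  0.0, 430.0, 240.0],    # and principal point (cx, cy)
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                   # extrinsics: rotation, LiDAR to camera
t = np.array([0.05, 0.0, 0.0])  # and translation (5 cm along x)

def lidar_point_to_pixel(p_lidar):
    """Move a LiDAR point into the camera frame, then project it to a pixel."""
    p_cam = R @ p_lidar + t     # extrinsics: shared coordinate system
    uvw = K @ p_cam             # intrinsics: pinhole projection
    return uvw[:2] / uvw[2]     # perspective divide gives pixel coordinates

print(lidar_point_to_pixel(np.array([0.0, 0.0, 2.0])))
```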

&lt;p&gt;High-quality calibration is key to getting reliable, consistent data, which is critical for mapping, perception, and decision-making in robotics and autonomous systems. Recognizing this need, &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt; can be used to manage the entire calibration data pipeline — from raw inputs such as LiDAR scans and calibration images to the output files produced during processing (e.g., intrinsic/extrinsic parameters, transformation matrices). When used together with tools like &lt;a href="https://www.tangramvision.com/products/calibration/metrical" rel="noopener noreferrer"&gt;&lt;strong&gt;MetriCal&lt;/strong&gt;&lt;/a&gt;, which streamline the calibration of multimodal sensor data, ReductStore enables scalable, automated calibration workflows across distributed systems by making it easy to collect, store, and manage sensor data directly at the edge. Calibration results can then be saved back to ReductStore for persistent access and reuse.&lt;/p&gt;
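&lt;p&gt;As a rough sketch of that last step, the snippet below stores a calibration output as a timestamped record through ReductStore's HTTP record API. The server address, bucket name, and entry name are placeholders, and a real deployment would also send an authentication token; check the API reference for exact paths and headers:&lt;/p&gt;

```python
import json
import time
from urllib import request

# Assumed local ReductStore instance and bucket; both are placeholders.
BASE_URL = "http://localhost:8383/api/v1"

def record_url(bucket: str, entry: str, ts_us: int) -> str:
    """ReductStore keys each record in an entry by a microsecond timestamp."""
    return f"{BASE_URL}/b/{bucket}/{entry}?ts={ts_us}"

def store_calibration(entry: str, payload: dict) -> None:
    """POST one calibration artifact (e.g. the optimized plex) as a record."""
    ts_us = int(time.time() * 1_000_000)
    req = request.Request(record_url("calibration", entry, ts_us),
                          data=json.dumps(payload).encode(),
                          method="POST")
    request.urlopen(req)  # needs a running server; add token headers as required
```

&lt;p&gt;In practice, the official &lt;code&gt;reduct&lt;/code&gt; Python client wraps this API with bucket and entry helpers and is the more convenient choice.&lt;/p&gt;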

&lt;h2&gt;
  
  
  What is MetriCal?&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#what-is-metrical" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.tangramvision.com/metrical/intro/" rel="noopener noreferrer"&gt;&lt;strong&gt;MetriCal is a calibration tool developed by Tangram Vision&lt;/strong&gt;&lt;/a&gt; for systems that include diverse types of sensors. It’s designed to handle real-world calibration scenarios and supports the simultaneous processing of data from cameras, LiDARs, and IMUs. MetriCal is suitable for both small-scale setups and larger, production-level environments, providing tools for precise and consistent multi-sensor calibration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#key-features" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROS Data Input:&lt;/strong&gt; Supports &lt;code&gt;.bag&lt;/code&gt; and &lt;code&gt;.mcap&lt;/code&gt; files (recommended)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic Extrinsics Estimation:&lt;/strong&gt; Computes sensor and target poses without requiring CAD models or manual setup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unlimited Sensor Streams:&lt;/strong&gt; Supports an arbitrary number of input streams&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Broad Target Support:&lt;/strong&gt; Compatible with both 2D and 3D targets; includes a library of premade targets and supports multiple targets at once&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Modular Calibration Workflow:&lt;/strong&gt; Allows splitting the calibration process into multiple datasets and stages&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detailed Diagnostics:&lt;/strong&gt; Provides visual and numerical feedback on data quality and calibration performance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROS Integration:&lt;/strong&gt; Outputs calibration results as a URDF file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pixel-Level Corrections:&lt;/strong&gt; Generates lookup tables for single-camera undistortion and stereo rectification&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lightweight Deployment:&lt;/strong&gt; CPU-only operation; runs efficiently on compact devices like Intel NUCs or in the cloud&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How MetriCal Works&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#how-metrical-works" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;MetriCal is structured as a CLI-based, fully scriptable pipeline designed to support reproducible workflows and automation. The core calibration process can be divided into the following stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Preparation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The quality of calibration strongly depends on the choice of targets and the quality of the input data. It's important to select or build targets suited to your use case and follow MetriCal’s data capture guidelines to ensure the collected data meets the required quality standards.&lt;/p&gt;

&lt;p&gt;At this stage, you'll also prepare an &lt;strong&gt;object space file&lt;/strong&gt;, which describes all calibration targets and their properties.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Initialization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the dataset and configuration files are ready, MetriCal’s &lt;code&gt;init mode&lt;/code&gt; analyzes sensor observations to infer a raw input &lt;strong&gt;plex&lt;/strong&gt; — a description of the spatial, temporal, and semantic relationships within your perception system. It represents the physical system being calibrated and serves as the starting point for all further calibration steps.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you already have a plex with existing calibration results that you want to preserve, it can be used as a seed for an init plex.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;3. Calibration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;calibrate mode&lt;/code&gt;, MetriCal performs a full bundle adjustment to refine both the initial plex and the object space. It applies motion filtering to remove features affected by motion blur, rolling shutter, false detections, and other artifacts in images or point clouds.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;.json&lt;/code&gt; cache file is created at this step. This file stores detected objects, allowing future runs to skip the detection process and complete faster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The calibration data capture and detection process can also be visualized during this step.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;4. Diagnostics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MetriCal generates a detailed diagnostic report with color-coded charts summarizing calibration quality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cyan&lt;/strong&gt; – spectacular&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Green&lt;/strong&gt; – good&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orange&lt;/strong&gt; – acceptable, but not great&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Red&lt;/strong&gt; – bad&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Visualization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;display mode&lt;/code&gt;, calibration results are visualized using &lt;a href="https://rerun.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Rerun, an open-source tool for multimodal data visualization&lt;/strong&gt;&lt;/a&gt;. It allows you to quickly verify the calibration quality before exporting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Typically, the same dataset is used for visualization, but you can also use a different one if it has the same topic names.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;6. Export&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;shape mode&lt;/code&gt;, the optimized plex can be transformed into various configurations for use in deployed systems, for example, ROS URDFs or pixel-wise lookup tables.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;MetriCal also includes several additional modes to support advanced workflows: &lt;code&gt;completion mode&lt;/code&gt;, &lt;code&gt;consolidate object spaces mode&lt;/code&gt;, &lt;code&gt;pipeline mode&lt;/code&gt;, and &lt;code&gt;pretty print mode&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  MetriCal Example&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#metrical-example" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To test MetriCal’s multi-sensor capabilities, we use the &lt;a href="https://gitlab.com/tangram-vision/platform/metrical/-/tree/main/examples/camera_lidar" rel="noopener noreferrer"&gt;&lt;strong&gt;official example featuring two cameras and a LiDAR&lt;/strong&gt;&lt;/a&gt;. The dataset contains synchronized observations from all three sensors, capturing a LiDAR circle target from different angles. This allows MetriCal to calculate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Intrinsics and poses for both cameras&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extrinsics between each camera and the LiDAR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Target geometry and consistency across different views&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We installed MetriCal via Docker. For convenient access, define a shell function that wraps the container invocation (for example, in your &lt;code&gt;~/.zshrc&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.zshrcmetrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$PATH/metrical/":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ --license="LICENSE KEY" \ "$@";}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;blockquote&gt;
&lt;p&gt;MetriCal requires a license key.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can also install MetriCal natively on &lt;code&gt;Ubuntu&lt;/code&gt; or &lt;code&gt;Pop!_OS&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Calibration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After cloning the repository, download the example dataset and unzip the &lt;code&gt;.zip&lt;/code&gt; file. Place the observations folder into:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$PATH/metrical/examples/camera_lidar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, set the &lt;code&gt;LICENSE&lt;/code&gt; variable inside &lt;code&gt;metrical_alias.sh&lt;/code&gt;, located in the same directory.&lt;/p&gt;

&lt;p&gt;Once everything is configured, you can run the full calibration pipeline using the provided shell script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$PATH/metrical/examples/camera_lidar/camera_lidar_runner.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Visualization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To visualize the results, install Rerun via &lt;code&gt;pip&lt;/code&gt; (the &lt;code&gt;rerun-sdk&lt;/code&gt; package) and launch the Rerun server in a separate terminal tab.&lt;/p&gt;

&lt;p&gt;Then, run the following command to display calibration results in &lt;code&gt;display mode&lt;/code&gt; and view the data in real time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metrical display /datasets/examples/camera_lidar/observations /datasets/examples/camera_lidar/results.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bz7fl6ey1ryd7rjqi1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bz7fl6ey1ryd7rjqi1f.png" alt="correction" width="701" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Results&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#understanding-results" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;During calibration, MetriCal produces charts and diagnostics that show the quality of the process and highlight areas that may need improvement.&lt;/p&gt;

&lt;h4&gt;
  
  
  Data Inputs (DI Section)&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#data-inputs-di-section" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;The Data Inputs section provides an overview of the input data and ensures that the dataset is appropriate for a successful calibration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Calibration Inputs (DI-1):&lt;/strong&gt; Displays basic configuration parameters.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-1 █ Calibration Inputs
+--------------------------------------+----------+
| Calibration Parameter                | Value    |
+--------------------------------------+----------+
| MetriCal Version                     | 13.2.1   |
+--------------------------------------+----------+
| Optimization Profile                 | Standard |
+--------------------------------------+----------+
| Camera Motion Threshold              | Disabled |
+--------------------------------------+----------+
| Lidar Motion Threshold               | Disabled |
+--------------------------------------+----------+
| Preserve Input Constraints           | Disabled |
+--------------------------------------+----------+
| Object Relative Extrinsics Inference | Enabled  |
+--------------------------------------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Object Space Descriptions (DI-2):&lt;/strong&gt; Describes the calibration targets (object spaces).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-2 █ Object Space Descriptions
+-------------+-------------------------+---------------------------------------+------------------------+
| Type        | UUID                    | Detector                              | Variance               |
+-------------+-------------------------+---------------------------------------+------------------------+
| Circle      | 34e6df7b...45d796bf     | - 0.6m radius                         | 1e-8, 1e-8, 1e-8       |
|             |                         | - 0.375m x offset                     |                        |
|             | Mutual Group A          | - 0.375m y offset                     |                        |
|             | |-- 24e6df7b...45d796bf | - 0m z offset                         |                        |
|             |                         | - 0.05m reflective tape width         |                        |
|             |                         | - Detect interior points: true        |                        |
+-------------+-------------------------+---------------------------------------+------------------------+
| Markerboard | 24e6df7b...45d796bf     | - 7x7 grid                            | 0.0002, 0.0002, 0.0002 |
|             |                         | - 0.097m markers                      |                        |
|             | Mutual Group A          | - 0.125m checkers (aka solid squares) |                        |
|             | |-- 34e6df7b...45d796bf | - Dictionary: Aruco4x4_1000           |                        |
|             |                         | - Marker IDs start at 0               |                        |
|             |                         | - Top-left corner is a Marker         |                        |
+-------------+-------------------------+---------------------------------------+------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Processed Observation Count (DI-3):&lt;/strong&gt; Shows how many observations were processed from the dataset.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-3 █ Processed Observation Count
+----------------------------------+--------+-------------------+------------------------+-----------------------+
| Component                        | # read | # with detections | # after quality filter | # after motion filter |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
| infra1_image_rect_raw (f7df04cc) |    283 |               276 |                    273 |                   273 |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
| infra2_image_rect_raw (34ed8934) |    284 |               282 |                    278 |                   278 |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
|      velodyne_points1 (38140838) |   2750 |              2026 |                   2026 |                  2026 |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Camera FOV Coverage (DI-4):&lt;/strong&gt; Displays how well the calibration data covers the field of view (FOV) of each camera. Ideal coverage is characterized by minimal red cells, which represent areas without detected features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6galt99iai13y8bncyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6galt99iai13y8bncyp.png" alt="DI-4" width="761" height="768"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detection Timeline (DI-5):&lt;/strong&gt; Displays when detections occurred across the dataset timeline. Each row corresponds to a different sensor, making it easier to check synchronization.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-5 █ Detection Timeline
+----------------------------------+-------------------------------------------------------------------------------------------------+
|            Components            |          Detection Timeline (x axis is seconds elapsed since first observation)                 |
|                                  |          Every point on the timeline represents an observation with detected features.          |
+----------------------------------+-------------------------------------------------------------------------------------------------+
|                                  | ⡁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ 4.0 |
| infra1_image_rect_raw (f7df04cc) | ⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
| infra2_image_rect_raw (34ed8934) | ⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠉⠉⠉⠁⠉⠀⠉⠉⠉⠉⠁⠁⠉⠀⠈⠉⠉⠉⠉⠉⠈⠉⠁⠀⠀⠈⠁⠈⠈⠉⠉⠉⠁⠁⠀⠉⠀⠈⠁⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠈⠈⠀⠉⠉⠉⠁⠉⠉⠈⠉⠉⠈⠉⠉⠈⠉⠁⠉⠉⠈⠉⠉⠀⠉⠁     |
| velodyne_points1 (38140838)      | ⡁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
|                                  | ⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠉⠉⠉⠁⠉⠀⠉⠉⠉⠉⠉⠀⠉⠁⠈⠉⠉⠉⠉⠉⠈⠉⠀⠀⠀⠀⠀⠉⠉⠉⠉⠉⠁⠉⠉⠉⠁⠀⠉⠉⠉⠁⠉⠈⠁⠉⠉⠉⠉⠉⠈⠁⠉⠉⠉⠉⠉⠉⠉⠉⠈⠁⠉⠉⠈⠉⠁⠉⠉⠈⠉⠉⠀⠉⠁     |
|                                  | ⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
|                                  | ⡉⠀⠀⠈⠀⠉⠈⠈⠀⠀⠈⠁⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠈⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠁⠉⠁     |
|                                  | ⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
|                                  | ⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁ 0.0 |
|                                  | 0.0                                                                                  269.1      |
+----------------------------------+-------------------------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
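&lt;p&gt;The same kind of synchronization check can be scripted. A tiny sketch (timestamps are invented) that flags long gaps in one stream's detection timeline, as a numeric complement to the DI-5 plot:&lt;/p&gt;

```python
# Illustrative only: find the longest gap between consecutive detections
# in a single sensor stream; a large gap suggests dropped detections.
def max_gap(timestamps):
    """Largest spacing (seconds) between consecutive detection timestamps."""
    ts = sorted(timestamps)
    return max(b - a for a, b in zip(ts, ts[1:]))

camera_detections = [0.0, 0.1, 0.2, 0.35, 0.45]   # made-up timestamps
print(max_gap(camera_detections))
```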



&lt;h4&gt;
  
  
  Camera Modeling (CM Section)&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#camera-modeling-cm-section" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This section shows how well the camera models fit the actual calibration data — that is, how accurately the system understood the camera’s behavior based on the collected data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Binned Reprojection Errors (CM-1):&lt;/strong&gt; A heatmap showing reprojection errors across the camera’s FOV. If certain areas show high error (orange or red), it could indicate problems with the camera model or lens distortion that isn't being captured correctly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y93ja9g0rg5l0hlg5dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y93ja9g0rg5l0hlg5dw.png" alt="CM-1" width="758" height="696"&gt;&lt;/a&gt;&lt;/p&gt;
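&lt;p&gt;For intuition, a binned reprojection-error map like CM-1 can be reproduced from raw residuals in a few lines. This is a generic sketch of the metric, not MetriCal's implementation:&lt;/p&gt;

```python
import numpy as np

def binned_rmse(observed, reprojected, width, height, bins=8):
    """RMSE of reprojection error per spatial bin across the image."""
    err = np.linalg.norm(observed - reprojected, axis=1)  # px error per feature
    grid = np.zeros((bins, bins))
    counts = np.zeros((bins, bins))
    # Assign each observed feature to a bin by its pixel location.
    bx = np.clip((observed[:, 0] / width * bins).astype(int), 0, bins - 1)
    by = np.clip((observed[:, 1] / height * bins).astype(int), 0, bins - 1)
    np.add.at(grid, (by, bx), err ** 2)
    np.add.at(counts, (by, bx), 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.sqrt(grid / counts)   # NaN where a bin saw no features
```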

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stereo Pair Rectification Error (CM-2):&lt;/strong&gt; For multi-camera setups, this shows the stereo rectification error between camera pairs, indicating how well the cameras are aligned for stereo vision.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ CM-2 █ Stereo Pair Rectification Error
+---------------------------------------+--------------+-------+-------------------------------------------------------------------------------------+
| Stereo Pair                           | # Mutual Obs | RMSE  | Binned rectified error (px)                                                         |
+---------------------------------------+--------------+-------+-------------------------------------------------------------------------------------+
| Dominant eye:  infra1_image_rect_raw  | 155          | 0.742 | ⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ 3202.0 |
| Secondary eye: infra2_image_rect_raw  |              |       | ⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⣇⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⢸⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⠇⠸⠀⠏⠹⠒⠖⠲⠒⠖⠲⠒⠦⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠀⠀ 0.0    |
|                                       |              |       | 0.0                                                                     7.0         |
+---------------------------------------+--------------+-------+-------------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Extrinsics Info (EI Section)&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#extrinsics-info-ei-section" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This section focuses on the spatial relationships between components in the calibration setup. Accurate extrinsic calibration ensures that the relative positions and orientations of the sensors are well understood.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Component Extrinsics Errors (EI-1):&lt;/strong&gt; Displays the extrinsic errors between each pair of components. If the errors are large, check whether all components are positioned and oriented correctly.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ EI-1 █ Component Extrinsics Errors
+--------------------------------------------+----------+----------+----------+----------+-----------+---------+
| Weighted Component Relative Extrinsic RMSE | X (m)    | Y (m)    | Z (m)    | Roll (°) | Pitch (°) | Yaw (°) |
| Rotation is Euler XYZ ext                  |          |          |          |          |           |         |
+--------------------------------------------+----------+----------+----------+----------+-----------+---------+
| To: infra1_image_rect_raw (f7df04cc),      | 2.254e-3 | 1.802e-3 | 3.780e-3 |    0.077 |     0.100 |   0.148 |
|    From: infra2_image_rect_raw (34ed8934)  |          |          |          |          |           |         |
+--------------------------------------------+----------+----------+----------+----------+-----------+---------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IMU Preintegration Errors (EI-2):&lt;/strong&gt; Displays a summary of all IMU preintegration errors from the system. In this example, IMUs were not calibrated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observed Camera Range of Motion (EI-3):&lt;/strong&gt; Shows how much motion was observed for each camera during the data collection. Sufficient motion is necessary to avoid projective compensation errors.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ EI-3 █ Observed Camera Range of Motion
+----------------------------------+--------+----------------------+--------------------+
| Camera                           | Z (m)  | Horizontal angle (°) | Vertical angle (°) |
+----------------------------------+--------+----------------------+--------------------+
| infra1_image_rect_raw (f7df04cc) | 6.308  | 127.081              | 63.801             |
+----------------------------------+--------+----------------------+--------------------+
| infra2_image_rect_raw (34ed8934) | 6.434  | 144.606              | 126.280            |
+----------------------------------+--------+----------------------+--------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Calibrated Plex (CP Section)&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#calibrated-plex-cp-section" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This section displays the final results of the calibration, including the intrinsic and extrinsic parameters that can be used for updating the system configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Camera Metrics (CP-1):&lt;/strong&gt; Contains the intrinsic parameters of each camera, such as focal length, principal point, and distortion parameters. Standard deviations indicate the uncertainty of each parameter.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ CP-1 █ Camera Metrics
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
| Camera                           | Specs                   | Projection Model                        | Distortion Model                     |
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
| infra1_image_rect_raw (f7df04cc) |  width (px)        848  |  Pinhole                                |      OpenCV Distortion               |
|                                  |  height (px)       480  |  f (px)      431.914 ±      0.224 (1σ)  |  k1   -1.574e-3 ±   8.715e-4 (1σ)    |
|                                  |  pixel pitch (um)  1    |  cx (px)     421.938 ±      0.395 (1σ)  |  k2       0.011 ±   1.876e-3 (1σ)    |
|                                  |                         |  cy (px)     230.592 ±      0.465 (1σ)  |  k3   -6.171e-3 ±   1.241e-3 (1σ)    |
|                                  |                         |                                         |  p1   -2.037e-3 ±   2.680e-4 (1σ)    |
|                                  |                         |                                         |  p2   -1.479e-3 ±   2.443e-4 (1σ)    |
|                                  |                         |                                         |                                      |
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
| infra2_image_rect_raw (34ed8934) |  width (px)        848  |  Pinhole                                |      OpenCV Distortion               |
|                                  |  height (px)       480  |  f (px)      429.085 ±      0.215 (1σ)  |  k1   -3.050e-4 ±   8.638e-4 (1σ)    |
|                                  |  pixel pitch (um)  1    |  cx (px)     421.203 ±      0.387 (1σ)  |  k2    1.517e-3 ±   1.809e-3 (1σ)    |
|                                  |                         |  cy (px)     230.821 ±      0.436 (1σ)  |  k3   -6.881e-4 ±   1.170e-3 (1σ)    |
|                                  |                         |                                         |  p1   -1.887e-3 ±   2.510e-4 (1σ)    |
|                                  |                         |                                         |  p2   -1.630e-3 ±   2.358e-4 (1σ)    |
|                                  |                         |                                         |                                      |
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized IMU Metrics (CP-2):&lt;/strong&gt; In this example, the IMUs were not calibrated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Calibrated Extrinsics (CP-3):&lt;/strong&gt; Shows the minimum spanning tree of spatial constraints in the plex, highlighting only the most critical constraints needed to keep the structure intact.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ CP-3 █  Calibrated Extrinsics
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
| Final Extrinsics          | Subplex ID | Translation (m) | Diff from input (mm) | Rotation (°)  | Diff from input (°) |
| 'To' component is Origin  |            |                 |                      |               |                     |
| Rotation is Euler XYZ ext |            |                 |                      |               |                     |
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
| To: infra1_image_rect_raw | A          | X: 0.360        | ΔX: 359.862          | Roll: -85.208 | ΔRoll: -85.208      |
|     f7df04cc, RDF         |            | Y: 0.083        | ΔY: 82.722           | Pitch: -2.812 | ΔPitch: -2.812      |
| From: velodyne_points1    |            | Z: 0.048        | ΔZ: 48.451           | Yaw: 171.579  | ΔYaw: 171.579       |
|     38140838, Unknown     |            |                 |                      |               |                     |
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
| To: infra2_image_rect_raw | A          | X: 0.319        | ΔX: 318.513          | Roll: -85.317 | ΔRoll: -85.317      |
|     34ed8934, RDF         |            | Y: 0.086        | ΔY: 85.533           | Pitch: -2.717 | ΔPitch: -2.717      |
| From: velodyne_points1    |            | Z: 0.033        | ΔZ: 33.454           | Yaw: 171.470  | ΔYaw: 171.470       |
|     38140838, Unknown     |            |                 |                      |               |                     |
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
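&lt;p&gt;Each CP-3 row can be assembled into a 4×4 homogeneous transform for downstream use. Below is a minimal numpy sketch using the infra1 row, assuming that "Euler XYZ ext" denotes fixed-axis (extrinsic) rotations applied about x, then y, then z:&lt;/p&gt;

```python
import numpy as np

def rot(axis, deg):
    # Elementary rotation matrix about a single axis, angle in degrees.
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    mats = {
        "x": [[1, 0, 0], [0, c, -s], [0, s, c]],
        "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
        "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]],
    }
    return np.array(mats[axis])

# CP-3 row for infra1_image_rect_raw: angles in degrees, translation in metres.
# Extrinsic (fixed-axis) XYZ order means R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
roll, pitch, yaw = -85.208, -2.812, 171.579
R = rot("z", yaw) @ rot("y", pitch) @ rot("x", roll)

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = [0.360, 0.083, 0.048]
```

&lt;p&gt;This is an illustration of the table's conventions only; for the authoritative transforms, use the generated &lt;code&gt;results_urdf.xml&lt;/code&gt; described below.&lt;/p&gt;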



&lt;h4&gt;
  
  
  Summary Statistics (SS Section)&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#summary-statistics-ss-section" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This section provides a high-level overview of the optimization process and the overall calibration quality.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimization Summary Statistics (SS-1):&lt;/strong&gt; Includes overall reprojection error and posterior variance, which indicates the calibration’s uncertainty.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ SS-1 █ Optimization Summary Statistics
+------------------------+----------+
| Optimized Object RMSE, | 0.206 px |
| based on all cameras   |          |
+------------------------+----------+
| Posterior Variance     | 0.731    |
+------------------------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Camera Summary Statistics (SS-2):&lt;/strong&gt; Summarizes the reprojection errors for each camera. An RMSE under 0.5 pixels is typically acceptable, and under 0.2 pixels is excellent.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ SS-2 █ Camera Summary Statistics
+----------------------------------+------------------------------------------+
| Camera                           | Reproj. RMSE, outliers downweighted (px) |
+----------------------------------+------------------------------------------+
| infra1_image_rect_raw (f7df04cc) | 0.209 px                                 |
+----------------------------------+------------------------------------------+
| infra2_image_rect_raw (34ed8934) | 0.204 px                                 |
+----------------------------------+------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
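&lt;p&gt;To make the SS-2 figures concrete, a reprojection RMSE like this can be reproduced from per-feature reprojection residuals. The residual values below are made up for illustration; MetriCal additionally downweights outliers, which this sketch omits:&lt;/p&gt;

```python
import numpy as np

# Hypothetical per-feature reprojection residuals (dx, dy) in pixels.
residuals = np.array([[0.1, -0.2], [0.3, 0.1], [-0.2, 0.2], [0.1, 0.0]])

# RMSE over the Euclidean reprojection errors of all features.
errors = np.linalg.norm(residuals, axis=1)
rmse = np.sqrt(np.mean(errors ** 2))

print(f"Reprojection RMSE: {rmse:.3f} px")
```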



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LiDAR Summary Statistics (SS-3):&lt;/strong&gt; Shows the RMSE of various residual metrics: circle misalignment, interior points to plane error, paired 3D point error, and paired plane normal error.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ SS-3 █ LiDAR Summary Statistics
+-----------------------------+-------------------------------+------------------------------------+--------------------------+--------------------------+
| LiDAR                       | Circle misalignment RMSE with | Circle edge misalignment RMSE with | Interior point RMSE with | Plane normal difference, |
|                             | all cameras, outliers         | all cameras, outliers              | all cameras, outliers    | lidar-lidar, outliers    |
|                             | downweighted (m)              | downweighted (m)                   | downweighted (m)         | downweighted (deg)       |
+-----------------------------+-------------------------------+------------------------------------+--------------------------+--------------------------+
| velodyne_points1 (38140838) | 0.020 m                       | 0.028 m                            | 0.018 m                  | (n/a)                    |
+-----------------------------+-------------------------------+------------------------------------+--------------------------+--------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Data Diagnostics&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#data-diagnostics" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This section highlights potential issues with the calibration setup, data, or process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fji00y3klhhs0xffb4oip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fji00y3klhhs0xffb4oip.png" alt="diagnostics" width="748" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-Risk Diagnostics:&lt;/strong&gt; Critical issues such as insufficient camera motion or missing required components must be addressed for successful calibration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Medium and Low-Risk Diagnostics:&lt;/strong&gt; Less critical issues, such as poor feature coverage, should still be monitored and corrected when possible to improve calibration quality.&lt;/p&gt;

&lt;h4&gt;
  
  
  Output Summary&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#output-summary" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+------------------------+------------------------------------------------------------+
| Results JSON           | /datasets/camera_lidar/results.json                        |
+------------------------+------------------------------------------------------------+
| Calibrated Plex        | Run `jq .plex [results.json] &amp;gt; optimized_plex.json`        |
+------------------------+------------------------------------------------------------+
| Optimized Object Space | Run `jq .object_space [results.json] &amp;gt; optimized_obj.json` |
+------------------------+------------------------------------------------------------+
| Cached Detections JSON | /datasets/camera_lidar/observations.detections.json        |
+------------------------+------------------------------------------------------------+
| Report Path            | /datasets/camera_lidar/report.html                         |
+------------------------+------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The calibration process generates several output files, located in the &lt;code&gt;$PATH/metrical/examples/camera_lidar&lt;/code&gt; directory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;init_plex.json:&lt;/strong&gt; The raw input plex produced by &lt;code&gt;init&lt;/code&gt; mode.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;observations.detections.json:&lt;/strong&gt; Cached detections for faster reruns in &lt;code&gt;calibrate&lt;/code&gt; mode.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;results.json:&lt;/strong&gt; The main output file, containing calibrated plex and object space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;report.html:&lt;/strong&gt; An HTML report summarizing calibration performance visually.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;results_urdf.xml:&lt;/strong&gt; A ROS-compatible URDF file that describes the spatial relationships between the two calibrated cameras and the LiDAR, enabling tools like &lt;code&gt;robot_state_publisher&lt;/code&gt; to publish real-time TF transforms based on these relationships.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
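&lt;p&gt;If &lt;code&gt;jq&lt;/code&gt; is not available, the two extractions from the output summary table can be done in a few lines of Python. The sample payload below is a tiny stand-in for a real &lt;code&gt;results.json&lt;/code&gt;; only the &lt;code&gt;plex&lt;/code&gt; and &lt;code&gt;object_space&lt;/code&gt; key names are taken from the table above:&lt;/p&gt;

```python
import json

# Minimal stand-in for MetriCal's results.json (real files are much larger).
sample = {"plex": {"components": []}, "object_space": {"boards": []}}
with open("results.json", "w") as f:
    json.dump(sample, f)

# Extract the calibrated plex and optimized object space,
# mirroring the two jq commands from the output summary table.
with open("results.json") as f:
    results = json.load(f)

with open("optimized_plex.json", "w") as f:
    json.dump(results["plex"], f, indent=2)

with open("optimized_obj.json", "w") as f:
    json.dump(results["object_space"], f, indent=2)
```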

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;MetriCal simplifies multimodal sensor calibration by offering a fully scriptable, CLI-based workflow with detailed diagnostics and seamless ROS integration. One of the key takeaways from working with this tool is that successful calibration depends heavily on the quality of the captured data. Carefully choosing calibration targets, ensuring sufficient sensor motion, and achieving full field-of-view coverage all have a major impact on the results. For those just starting out, prioritizing high-quality data capture and closely following the recommended guidelines is essential for obtaining reliable outcomes.&lt;/p&gt;




&lt;p&gt;We hope this tutorial provided a clear and practical introduction to using MetriCal for multi-sensor calibration. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>tutorials</category>
      <category>robotics</category>
      <category>ros</category>
    </item>
    <item>
      <title>ReductStore v1.15.0 Released With Extension API and Improved Web Console</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Wed, 07 May 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/reductstore-v1150-released-with-extension-api-and-improved-web-console-2gdk</link>
      <guid>https://forem.com/reductstore/reductstore-v1150-released-with-extension-api-and-improved-web-console-2gdk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qcxb0djg8he29zjn3ap.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qcxb0djg8he29zjn3ap.webp" alt="ReductStore v1.15.0 Released" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.15.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.15.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a high-performance storage and streaming solution designed for storing and managing large volumes of historical data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new in 1.15.0?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_15_0-released#whats-new-in-1150" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This release includes several new features and enhancements: the extension API, an improved Web Console, and new conditional query operators.&lt;/p&gt;

&lt;h3&gt;
  
  
  Extension API&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_15_0-released#extension-api" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore is a blob store: it knows nothing about the format of the data it holds. We are determined to keep it that way, because it lets us ingest and query data in any format with optimal performance. At the same time, we know that you sometimes need processing and special queries based on the original data format.&lt;/p&gt;

&lt;p&gt;For example, if you ingest data in JSON format, you may want to query only some fields of the JSON object or use them in a query condition for filtering. The new extension API makes this possible.&lt;/p&gt;

&lt;p&gt;The extension API is experimental and not yet documented. We are developing extensions for columnar data, CSV and MCAP formats. Once we have enough experience, we will document the API and publish the extensions so that you can build your own extensions for your data formats.&lt;/p&gt;

&lt;p&gt;For the most curious users, a demo extension that scales JPEG images on the fly can be found on GitHub: &lt;a href="https://github.com/reductstore/img-ext" rel="noopener noreferrer"&gt;https://github.com/reductstore/img-ext&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved Web Console&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_15_0-released#improved-web-console" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In the v1.14.0 release, we introduced the ability to browse data in the Web Console. This release adds two more features: the ability to upload files to the database and to update record labels, both directly in the Web Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3fgh29hvhf19pnq2m1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3fgh29hvhf19pnq2m1d.png" alt="Update Labels in ReductStore Web Console" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The upload feature can be useful when you store artifacts, such as AI models or configuration files, in the database and want to update them occasionally.&lt;/p&gt;

&lt;h3&gt;
  
  
  New Conditional Query Operators&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_15_0-released#new-conditional-query-operators" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We have expanded the set of conditional query operators with new ones that allow you to filter and aggregate data more effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;$each_n&lt;/code&gt; - keeps only every N-th record in the result set.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;$each_t&lt;/code&gt; - keeps only one record within a given time period (in seconds).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;$limit&lt;/code&gt; - limits the number of records in the result set.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;$timestamp&lt;/code&gt; - allows you to filter records by timestamp.&lt;/li&gt;
&lt;/ul&gt;
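&lt;p&gt;The record-thinning operators are easiest to picture as plain-Python transforms over a list of records. The sketch below illustrates their intent only; the server applies them while streaming query results, and details such as rolling versus fixed time windows may differ:&lt;/p&gt;

```python
def each_n(records, n):
    # $each_n: keep only every N-th record.
    return records[::n]

def each_t(timestamps, period):
    # $each_t: keep at most one record per period-sized time window.
    kept, seen = [], set()
    for ts in timestamps:
        window = int(ts // period)
        if window not in seen:
            seen.add(window)
            kept.append(ts)
    return kept

print(each_n([10, 11, 12, 13, 14, 15], 2))     # every 2nd record
print(each_t([0.0, 0.4, 1.1, 1.2, 2.5], 1.0))  # one record per second
```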

&lt;p&gt;The &lt;code&gt;$timestamp&lt;/code&gt; operator can be particularly useful if you store timestamps and metadata in another database and want to retrieve blobs from ReductStore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8080&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_bucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1231231081&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1231231085&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="n"&gt;when&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$in&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231081&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231082&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231083&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231084&lt;/span&gt;&lt;span class="p"&gt;,]&lt;/span&gt; &lt;span class="p"&gt;},):&lt;/span&gt; 
  &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can read more about the new operators in the &lt;a href="https://www.reduct.store/docs/conditional-query" rel="noopener noreferrer"&gt;&lt;strong&gt;Conditional Query&lt;/strong&gt;&lt;/a&gt; documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What next?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_15_0-released#what-next" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We are constantly working on improving ReductStore and adding new features to provide the best experience for our users. In the next few releases we plan to add new features and improvements, including:&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration with ROS&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_15_0-released#integration-with-ros" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore is a great solution for storing and managing large amounts of data in robotic applications, and we are currently integrating it with ROS to provide a seamless experience for storing and retrieving data in ROS applications. We have started a new &lt;a href="https://github.com/reductstore/ros2-reduct-agent" rel="noopener noreferrer"&gt;&lt;strong&gt;ROS2 Agent&lt;/strong&gt;&lt;/a&gt; that allows you to store and retrieve data in ReductStore from ROS2 applications; it is designed to be easy to use and to integrate with existing ROS2 systems. We are also going to add support for the MCAP format with the new extension API, which will allow you to retrieve data in its original format from MCAP files, filter topics, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Golang SDK&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_15_0-released#golang-sdk" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Our big goal is to integrate with Grafana by the end of 2025. This month we started work on the Golang SDK, which is the first step towards that goal. The project is still in the early stages of development, but you can already check it out on GitHub: &lt;a href="https://github.com/reductstore/reduct-go" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Golang SDK&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;I hope you find those new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
      <category>reductstore</category>
    </item>
    <item>
      <title>How to Analyze ROS Bag Files and Build a Dataset for Machine Learning</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Wed, 30 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/reductstore/how-to-analyze-ros-bag-files-and-build-a-dataset-for-machine-learning-1fn7</link>
      <guid>https://forem.com/reductstore/how-to-analyze-ros-bag-files-and-build-a-dataset-for-machine-learning-1fn7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jnfkbfbmx7bgvs1xp0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jnfkbfbmx7bgvs1xp0a.png" alt="Linear and Angular Velocities over Time" width="690" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Working with real-world robot data starts with how ROS (Robot Operating System) messages are stored. In the article &lt;a href="https://www.reduct.store/blog/store-ros-topics#method-2-store-rosbag-data-in-time-series-object-storage" rel="noopener noreferrer"&gt;&lt;strong&gt;3 Ways to Store ROS Topics&lt;/strong&gt;&lt;/a&gt;, we explored several approaches — including storing compressed Rosbag files in time-series storage and storing topics as separate records.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll focus on the most common format: &lt;code&gt;.bag&lt;/code&gt; files recorded with Rosbag. These files contain valuable data on how a robot interacts with the world — such as odometry, camera frames, LiDAR, or IMU readings — and provide the foundation for analyzing the robot's behavior.&lt;/p&gt;

&lt;p&gt;You’ll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract motion data from &lt;code&gt;.bag&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;Create basic velocity features&lt;/li&gt;
&lt;li&gt;Train a classification model to recognize different types of robot movements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll use the &lt;code&gt;bagpy&lt;/code&gt; library to process &lt;code&gt;.bag&lt;/code&gt; files and apply basic machine learning techniques for classification.&lt;/p&gt;

&lt;p&gt;Although the examples in this tutorial use &lt;a href="http://ptak.felk.cvut.cz/darpa-subt/qualification_videos/spot/" rel="noopener noreferrer"&gt;&lt;strong&gt;data from a Boston Dynamics Spot robot&lt;/strong&gt;&lt;/a&gt; (performing movements like moving forward, sideways, and rotating), you can adapt the code for your recordings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Required Libraries&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#install-required-libraries" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;numpy pandas matplotlib seaborn scikit-learn bagpy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Loading and Preprocessing Bag Files&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#loading-and-preprocessing-bag-files" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's create a function to load a &lt;code&gt;.bag&lt;/code&gt; file and extract velocity features.&lt;/p&gt;

&lt;p&gt;In our example, the odometry data is published under the &lt;code&gt;/spot/odometry&lt;/code&gt; topic. Make sure to specify the correct topic where your robot's motion data is recorded. Depending on your use case, you might find other features, such as accelerations or additional sensor data, more relevant for recognizing your robot's movements. For this task, we'll primarily focus on linear and angular velocities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bagpy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bagreader&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/spot/odometry&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# Load a .bag file and generate a DataFrame with velocity features and a target label
&lt;/span&gt;
    &lt;span class="n"&gt;bag&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;bagreader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;message_by_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="c1"&gt;# Calculate linear and angular velocities
&lt;/span&gt;    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;angular_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                     &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                     &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Assign target label
&lt;/span&gt;    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;

    &lt;span class="c1"&gt;# Keep only relevant columns
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;angular_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Processing three different movement types&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#processing-three-different-movement-types" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Let's use the &lt;code&gt;process_bag_to_dataframe&lt;/code&gt; function to load and process the data for each of the three movement types. Since each movement type was recorded in a separate &lt;code&gt;.bag&lt;/code&gt; file, we'll apply the function to each file and then merge the results into a single DataFrame.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df_forward&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_x.bag&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c1"&gt;# moving forward
&lt;/span&gt;&lt;span class="n"&gt;df_sideways&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_y.bag&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# moving sideways
&lt;/span&gt;&lt;span class="n"&gt;df_rotation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rotation.bag&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# rotating
&lt;/span&gt;
&lt;span class="c1"&gt;# Combine all samples
&lt;/span&gt;&lt;span class="n"&gt;df_all&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;df_forward&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;df_sideways&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;df_rotation&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;ignore_index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Visualizing velocities&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#visualizing-velocities" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We can visualize the linear and angular velocities over time for each type of motion; the example below shows the forward movement. Plotting them helps us understand how the velocities change during each maneuver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;plot_velocities&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;figure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;figsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subplot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;plot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#4B0082&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Linear Velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xlabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Time Step&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subplot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;plot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;angular_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#9A9E5E&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Angular Velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xlabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Time Step&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;suptitle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tight_layout&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nf"&gt;plot_velocities&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df_forward&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Forward Movement&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo3pvru13i412dtfvdj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo3pvru13i412dtfvdj.png" alt="Forward Movement" width="681" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Training and Evaluating Classification Models&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#training-and-evaluating-classification-models" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We'll test several popular models, including Logistic Regression, Decision Tree, Random Forest, and Support Vector Machine, and tune their hyperparameters using &lt;code&gt;GridSearchCV&lt;/code&gt;. You can also experiment with other hyperparameters to optimize the models based on your specific data and requirements.&lt;/p&gt;

&lt;p&gt;To evaluate the classifier, we'll use the &lt;strong&gt;F1 Score&lt;/strong&gt; metric, which balances precision and recall and is especially useful for imbalanced datasets. However, you can also choose to evaluate using Accuracy, Precision, or Recall, depending on your needs.&lt;/p&gt;
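&lt;p&gt;As a quick, self-contained illustration of how these metrics differ (the labels below are made up, not taken from the robot data):&lt;/p&gt;

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical labels for three movement classes, for illustration only
y_true = [1, 1, 1, 2, 2, 3]
y_pred = [1, 1, 2, 2, 2, 3]

acc = accuracy_score(y_true, y_pred)                # 5/6 correct, ~0.833
f1 = f1_score(y_true, y_pred, average='weighted')   # per-class F1, weighted by class size
prec = precision_score(y_true, y_pred, average='weighted')
rec = recall_score(y_true, y_pred, average='weighted')
```

&lt;p&gt;With &lt;code&gt;average="weighted"&lt;/code&gt;, each class's score is weighted by its number of true samples, which is what makes the F1 Score useful when the classes are imbalanced.&lt;/p&gt;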

&lt;p&gt;Now, let's prepare the data for training.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df_all&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df_all&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The features &lt;code&gt;X&lt;/code&gt; consist of the velocity data, and the labels &lt;code&gt;y&lt;/code&gt; represent the different movement types.&lt;/p&gt;

&lt;p&gt;Next, let’s define the scalers, models, and their respective hyperparameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.preprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StandardScaler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinMaxScaler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RobustScaler&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.linear_model&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LogisticRegression&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.tree&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DecisionTreeClassifier&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.ensemble&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RandomForestClassifier&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.svm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SVC&lt;/span&gt;

&lt;span class="c1"&gt;# Scalers
&lt;/span&gt;&lt;span class="n"&gt;scalers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Standard Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;StandardScaler&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MinMax Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MinMaxScaler&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Robust Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RobustScaler&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Models
&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Logistic Regression&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LogisticRegression&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_iter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Decision Tree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;DecisionTreeClassifier&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Random Forest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RandomForestClassifier&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;SVM&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;SVC&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;probability&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Hyperparameters for tuning
&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Logistic Regression&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;C&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Decision Tree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;max_depth&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Random Forest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;n_estimators&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;SVM&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;C&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;kernel&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rbf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
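&lt;p&gt;One design note: in the training function below the scaler is fitted on the full training split, so during the grid search's internal cross-validation the same scaled data is shared across folds, which can leak a little information between them. A common remedy is to bundle the scaler and model in a scikit-learn &lt;code&gt;Pipeline&lt;/code&gt; so the scaler is re-fitted on each training fold. A minimal sketch (the demo data is synthetic):&lt;/p&gt;

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Bundle scaling and the model: cross-validation then re-fits the scaler
# on each training fold, so no statistics leak from the validation fold.
pipe = Pipeline([
    ('scaler', MinMaxScaler()),
    ('model', RandomForestClassifier(n_estimators=10, random_state=0)),
])

# In a grid search, hyperparameter names gain the step prefix,
# e.g. {'model__n_estimators': [50, 100]}

# Tiny synthetic sanity check (hypothetical data, just to show the API)
X_demo = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [0.9, 0.8]] * 5)
y_demo = np.array([0, 1, 0, 1] * 5)
preds = pipe.fit(X_demo, y_demo).predict(X_demo)
```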



&lt;p&gt;Let's train and test the models using the defined scalers, models, and hyperparameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.model_selection&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;GridSearchCV&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;StratifiedKFold&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.metrics&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;f1_score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;confusion_matrix&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_classification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scaler_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# Split data into train and test sets
&lt;/span&gt;    &lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stratify&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Apply the chosen scaler to the data
&lt;/span&gt;    &lt;span class="n"&gt;scaler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scalers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;scaler_name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;X_train&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;X_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Set up hyperparameter grid and cross-validation
&lt;/span&gt;    &lt;span class="n"&gt;param_grid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;cv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StratifiedKFold&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n_splits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;grid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GridSearchCV&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;param_grid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scoring&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;f1_weighted&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Get the best model from the grid search
&lt;/span&gt;    &lt;span class="n"&gt;best_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;best_estimator_&lt;/span&gt;

    &lt;span class="c1"&gt;# Make predictions on the test set
&lt;/span&gt;    &lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Best parameters: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;best_params_&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;F1 Score: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;f1_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;average&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;weighted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After training and testing the models, we'll plot the confusion matrix to visualize how well the model performs by comparing the predicted labels with the actual labels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;seaborn&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sns&lt;/span&gt;

&lt;span class="c1"&gt;# Plot confusion matrix
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;plot_confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="n"&gt;cm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;sns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;heatmap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;annot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;d&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Purples&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Confusion Matrix&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xlabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Predicted&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ylabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Actual&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
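&lt;p&gt;A note on reading the heatmap: &lt;code&gt;confusion_matrix&lt;/code&gt; puts actual classes on the rows and predicted classes on the columns, so the diagonal counts the correct predictions. A toy example with made-up labels:&lt;/p&gt;

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels for the three movement classes
y_true = [1, 1, 2, 2, 3, 3]
y_pred = [1, 2, 2, 2, 3, 1]

cm = confusion_matrix(y_true, y_pred)  # rows: actual, columns: predicted
n_correct = np.trace(cm)               # diagonal sums to 4 correct predictions
```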



&lt;p&gt;After evaluating the model's performance, we’ll visualize the feature importance for the Decision Tree and Random Forest models to understand which features contribute the most to the model’s predictions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Feature importance for Decision Tree and Random Forest
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;plot_feature_importance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_columns&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="n"&gt;importances&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;feature_importances_&lt;/span&gt;
    &lt;span class="n"&gt;sorted_idx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;importances&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;argsort&lt;/span&gt;&lt;span class="p"&gt;()[::&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="n"&gt;sns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;barplot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;importances&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sorted_idx&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;X_columns&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sorted_idx&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#50208B&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Feature Importances&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xticks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rotation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tight_layout&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Training a Random Forest classifier&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#training-a-random-forest-classifier" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Finally, let's apply the Random Forest classifier to our data and evaluate its performance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_columns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;run_classification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Random Forest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MinMax Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The optimal parameter for &lt;strong&gt;n_estimators&lt;/strong&gt; is &lt;strong&gt;100&lt;/strong&gt;, and the model achieved an &lt;strong&gt;F1 score&lt;/strong&gt; of &lt;strong&gt;0.976&lt;/strong&gt;.&lt;/p&gt;
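&lt;p&gt;The &lt;code&gt;run_classification&lt;/code&gt; helper isn’t shown in this excerpt; as a rough idea of what such a search could look like, here is a minimal, hypothetical sketch using scikit-learn’s &lt;code&gt;GridSearchCV&lt;/code&gt; with a &lt;code&gt;MinMaxScaler&lt;/code&gt; pipeline on synthetic data (the parameter grid and scoring choice are assumptions, not the article’s exact settings):&lt;/p&gt;

```python
# Hypothetical sketch of a grid search like the one run_classification
# performs; pipeline steps, parameter grid, and scoring are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the velocity features extracted from the .bag file
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

pipe = Pipeline([
    ("scaler", MinMaxScaler()),                      # the 'MinMax Scaler' option
    ("clf", RandomForestClassifier(random_state=42)),
])

# Exhaustively score each n_estimators candidate with 5-fold cross-validation
search = GridSearchCV(pipe, {"clf__n_estimators": [50, 100, 200]},
                      scoring="f1_weighted", cv=5)
search.fit(X_train, y_train)
print(search.best_params_)
```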

&lt;p&gt;We’ll plot the &lt;strong&gt;confusion matrix&lt;/strong&gt; to assess the classifier's performance across different movement types. The diagonal elements represent the correctly classified instances, while the off-diagonal elements indicate the misclassifications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;plot_confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
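&lt;p&gt;The &lt;code&gt;plot_confusion_matrix&lt;/code&gt; helper itself isn’t shown in this excerpt; a minimal sketch of such a helper, assuming scikit-learn and seaborn, could look like this (the styling choices are illustrative):&lt;/p&gt;

```python
# Hypothetical sketch of a plot_confusion_matrix helper; the article's own
# implementation is not shown in this excerpt.
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs non-interactively
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(y_test, y_pred):
    # Rows are true labels, columns are predicted labels; the diagonal
    # holds correct classifications, off-diagonal cells are mistakes.
    cm = confusion_matrix(y_test, y_pred)
    sns.heatmap(cm, annot=True, fmt="d", cmap="Purples")
    plt.xlabel("Predicted label")
    plt.ylabel("True label")
    plt.title("Confusion Matrix")
    plt.tight_layout()
    plt.show()
    return cm

# Tiny illustrative call with three hypothetical movement classes
cm = plot_confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2])
```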



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikl864mb3auak88xqu69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikl864mb3auak88xqu69.png" alt="Confusion Matrix" width="530" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After evaluating the Random Forest model, we can check the &lt;strong&gt;Feature Importance&lt;/strong&gt; to see which velocity components were most important in distinguishing the movement types. This is especially useful for Decision Tree and Random Forest models, as they automatically rank features by their importance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;plot_feature_importance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_columns&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcosa4ckp4hq1uy6ak1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcosa4ckp4hq1uy6ak1.png" alt="Feature Importance" width="590" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Best practices&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#best-practices" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;When working with &lt;code&gt;.bag&lt;/code&gt; files and training machine learning models, these best practices can help you manage data more effectively and build better-performing models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Split large files:&lt;/strong&gt; If your &lt;code&gt;.bag&lt;/code&gt; files are too large, divide them into smaller episodes. This helps avoid memory issues and makes the files easier to process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate topics by type:&lt;/strong&gt; If your &lt;code&gt;.bag&lt;/code&gt; file includes both lightweight messages (like battery level) and large data streams (like images or LiDAR), store them in separate &lt;code&gt;.bag&lt;/code&gt; files. This separation can optimize performance and make your workflow simpler.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Randomized Search for tuning:&lt;/strong&gt; If your model’s accuracy isn’t good enough, try &lt;code&gt;RandomizedSearchCV&lt;/code&gt; instead of &lt;code&gt;GridSearchCV&lt;/code&gt;. It samples a fixed number of hyperparameter combinations rather than trying every one, which is usually much faster on large search spaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Try different algorithms:&lt;/strong&gt; Experiment with different algorithms to find what works best for your specific data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider ensemble methods:&lt;/strong&gt; Techniques like bagging or boosting can improve accuracy by combining multiple models and leveraging their strengths.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explore deep learning:&lt;/strong&gt; If you have a large dataset and enough computing power, deep learning models can capture complex patterns that simpler models may miss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prevent overfitting:&lt;/strong&gt; Make sure that your model generalizes well by splitting your dataset into training, validation, and test sets. Use cross-validation to evaluate your model’s performance more reliably.&lt;/li&gt;
&lt;/ul&gt;
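&lt;p&gt;To illustrate the randomized-search and cross-validation tips above, here is a hedged sketch using scikit-learn’s &lt;code&gt;RandomizedSearchCV&lt;/code&gt; on synthetic data; the parameter ranges and &lt;code&gt;n_iter&lt;/code&gt; are illustrative assumptions, not recommended settings:&lt;/p&gt;

```python
# Hedged sketch of the RandomizedSearchCV tip above; parameter ranges and
# n_iter are illustrative assumptions.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the movement-classification features
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)

# Sample 10 random combinations instead of exhaustively trying every one,
# scoring each with 5-fold cross-validation (the overfitting tip above).
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),   # any integer in [50, 300)
        "max_depth": [None, 5, 10, 20],
    },
    n_iter=10, cv=5, scoring="f1_weighted", random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

&lt;p&gt;Because only a fixed number of combinations are drawn, the cost of the search is independent of how fine-grained the parameter ranges are, which is the main advantage over an exhaustive grid.&lt;/p&gt;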

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we walked through the process of handling robot movement data stored in &lt;code&gt;.bag&lt;/code&gt; files. We extracted key velocity features and used them to train machine learning models for classifying different types of robot movements.&lt;/p&gt;

&lt;p&gt;As a next step, you can experiment with various models, hyperparameters, or additional features to improve classification performance. You can also explore advanced techniques such as deep learning for more complex tasks.&lt;/p&gt;




&lt;p&gt;We hope this tutorial provided a clear starting point for processing robot data and building basic movement classification models. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>ros</category>
      <category>tutorials</category>
    </item>
  </channel>
</rss>
