Cameron Archer for Tinybird

Build a Real-Time Shipment Tracking API with Tinybird

Tinybird is a data analytics backend for software developers. You use Tinybird to build real-time analytics APIs without needing to set up or manage the underlying infrastructure. Tinybird offers a local-first development workflow, git-based deployments, resource definitions as code, and features for AI-native developers. In this tutorial, you'll learn how to use Tinybird to create a real-time API for tracking shipments, including insights into shipment statuses, late deliveries, and product-specific shipment summaries. Using Tinybird's data sources and pipes, you'll handle and analyze shipment data and product information efficiently, and expose the results through scalable APIs.

Understanding the data

Imagine your data looks like this:

{"shipment_id": "SHIP-12749", "product_id": "PROD-3749", "origin_location": "New York", "destination_location": "San Francisco", "quantity": 426, "shipment_timestamp": "2025-03-24 16:53:37", "estimated_delivery_timestamp": "2025-05-31 16:53:37", "actual_delivery_timestamp": "2025-06-10 16:53:37", "status": "Cancelled"}

This sample represents a shipment from New York to San Francisco that was unfortunately cancelled. You'll store this data in Tinybird data sources, which are essentially tables optimized for real-time analytics. To begin, you'll create two data sources in Tinybird: product_catalog and raw_shipments. Here's how you define the product_catalog data source:

DESCRIPTION >
    Product catalog with product details.

SCHEMA >
    `product_id` String `json:$.product_id`,
    `product_name` String `json:$.product_name`,
    `category` String `json:$.category`,
    `unit_price` Float32 `json:$.unit_price`

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "category"
ENGINE_SORTING_KEY "product_id"
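
If you're following along with the Tinybird CLI, save each definition as a .datasource file in your project. A typical layout (an assumption; depending on your CLI version, pipe files may live in a pipes/ or endpoints/ directory, and you'll define the three .pipe files in the next section) looks like this:

datasources/
    product_catalog.datasource
    raw_shipments.datasource
endpoints/
    late_shipments.pipe
    shipment_status_counts.pipe
    product_shipment_summary.pipe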

Next, define the raw_shipments data source:

DESCRIPTION >
    Raw shipment data ingested from a source like Kafka or S3.

SCHEMA >
    `shipment_id` String `json:$.shipment_id`,
    `product_id` String `json:$.product_id`,
    `origin_location` String `json:$.origin_location`,
    `destination_location` String `json:$.destination_location`,
    `quantity` UInt32 `json:$.quantity`,
    `shipment_timestamp` DateTime `json:$.shipment_timestamp`,
    `estimated_delivery_timestamp` DateTime `json:$.estimated_delivery_timestamp`,
    `actual_delivery_timestamp` DateTime `json:$.actual_delivery_timestamp`,
    `status` String `json:$.status`

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(shipment_timestamp)"
ENGINE_SORTING_KEY "shipment_timestamp, product_id, origin_location, destination_location"

These schemas reflect deliberate design choices: the MergeTree engine supports fast analytical queries, the partition keys group related data on disk, and the sorting keys match the columns you'll filter and sort on most often. For data ingestion, Tinybird's Events API lets you stream JSON/NDJSON events from your application frontend or backend with a simple HTTP request, making it well suited to low-latency, real-time processing. Here's how you'd send data to the raw_shipments data source:

curl -X POST "https://api.europe-west2.gcp.tinybird.co/v0/events?name=raw_shipments" \
     -H "Authorization: Bearer $TB_ADMIN_TOKEN" \
     -d '{"shipment_id":"shipment456","product_id":"product123","origin_location":"New York","destination_location":"Los Angeles","quantity":10,"shipment_timestamp":"2024-01-01 10:00:00","estimated_delivery_timestamp":"2024-01-05 18:00:00","actual_delivery_timestamp":"2024-01-05 17:00:00","status":"Delivered"}'
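
You can seed the product_catalog data source the same way. A quick sketch; the product values here are made up for illustration, and the host and token are the same placeholders as above:

# Illustrative product record; adjust the values to match your catalog
curl -X POST "https://api.europe-west2.gcp.tinybird.co/v0/events?name=product_catalog" \
     -H "Authorization: Bearer $TB_ADMIN_TOKEN" \
     -d '{"product_id":"product123","product_name":"Wireless Mouse","category":"Electronics","unit_price":24.99}'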

Additionally, for event or streaming data at scale, consider the Kafka connector; for batch or file data, the Data Sources API and the S3 connector are good options.
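
For instance, a one-off backfill from a local NDJSON file could go through the Data Sources API in append mode. A sketch, assuming shipments.ndjson is a hypothetical file of events shaped like the sample above:

# shipments.ndjson is a hypothetical local file of newline-delimited JSON events
curl -X POST "https://api.europe-west2.gcp.tinybird.co/v0/datasources?name=raw_shipments&mode=append&format=ndjson" \
     -H "Authorization: Bearer $TB_ADMIN_TOKEN" \
     -F "ndjson=@shipments.ndjson"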

Transforming data and publishing APIs

Tinybird's pipes are at the heart of data transformation and API publication. They support batch transformations, real-time transformations, and publishing SQL queries as API endpoints. Let's dive into the three endpoints you'll create.

Late shipments

The late_shipments endpoint identifies shipments delivered past their estimated delivery date:

DESCRIPTION >
    Endpoint to get a list of late shipments.

NODE late_shipments_node
SQL >
    SELECT
        shipment_id,
        product_id,
        origin_location,
        destination_location,
        estimated_delivery_timestamp,
        actual_delivery_timestamp
    FROM raw_shipments
    WHERE actual_delivery_timestamp > estimated_delivery_timestamp

TYPE endpoint

The SQL logic is straightforward: it selects shipments where the actual delivery timestamp is later than the estimated one. Depending on how your pipeline fills actual_delivery_timestamp for shipments that were never delivered (such as the cancelled shipment in the sample data), you may also want to filter on status = 'Delivered' so those rows don't appear as late.
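
You can also sanity-check an endpoint before deploying it. A minimal sketch, assuming you're running Tinybird Local on its default port (7181) and have exported a token from your local workspace as TB_LOCAL_TOKEN (a hypothetical variable name):

tb dev  # builds the project against Tinybird Local and watches for changes
# TB_LOCAL_TOKEN is a placeholder for a token from your local workspace
curl "http://localhost:7181/v0/pipes/late_shipments.json?token=$TB_LOCAL_TOKEN"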

Shipment status counts

Next, the shipment_status_counts endpoint:

DESCRIPTION >
    Endpoint to get the count of shipments by status.

NODE shipment_status_counts_node
SQL >
    SELECT
        status,
        count() AS shipment_count
    FROM raw_shipments
    GROUP BY status

TYPE endpoint

This pipe groups shipments by their status and counts them, useful for quickly assessing the overall distribution of shipment states.
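
Once deployed, this endpoint is a single GET request away (using the same placeholder region host and admin token as the earlier examples):

curl "https://api.europe-west2.gcp.tinybird.co/v0/pipes/shipment_status_counts.json?token=$TB_ADMIN_TOKEN"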

Product shipment summary

Finally, the product_shipment_summary endpoint provides detailed summaries:

DESCRIPTION >
    Endpoint to get shipment summary by product.

NODE product_shipment_summary_node
SQL >
    %
    SELECT
        rs.product_id,
        pc.product_name,
        pc.category,
        sum(rs.quantity) AS total_quantity_shipped,
        count() AS total_shipments,
        avg(rs.actual_delivery_timestamp - rs.shipment_timestamp) AS avg_delivery_time
    FROM raw_shipments rs
    JOIN product_catalog pc ON rs.product_id = pc.product_id
    WHERE pc.category = {{String(product_category, "Electronics")}}
    GROUP BY rs.product_id, pc.product_name, pc.category

TYPE endpoint

By joining the raw_shipments and product_catalog data sources, this endpoint calculates the total quantity shipped, the total number of shipments, and the average delivery time (in seconds, since subtracting two DateTime values yields seconds) for each product in a category. The product_category parameter defaults to "Electronics" but can be overridden per request, as shown below.
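
Because product_category is a query parameter, callers can pass their own value at request time. For example ("Furniture" is an illustrative category value):

curl "https://api.europe-west2.gcp.tinybird.co/v0/pipes/product_shipment_summary.json?product_category=Furniture&token=$TB_ADMIN_TOKEN"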

Deploying to production

To deploy these resources to the Tinybird Cloud, use the Tinybird CLI:

tb --cloud deploy

This command deploys your data sources and pipes to Tinybird Cloud, making them available as scalable, real-time APIs. Because Tinybird manages these resources as code, the same command slots neatly into CI/CD pipelines. To secure your APIs, Tinybird uses token-based authentication. Here's how you might call the deployed late_shipments endpoint:

curl -X GET "https://api.europe-west2.gcp.tinybird.co/v0/pipes/late_shipments.json?token=$TB_ADMIN_TOKEN"
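
In production, you'd typically create a read-only token scoped to just this pipe rather than reusing the admin token. Whichever token you use, Tinybird also accepts it in an Authorization header instead of the query string; a sketch, with TB_READ_TOKEN as a hypothetical read-scoped token:

# TB_READ_TOKEN is a placeholder for a read-scoped token
curl -H "Authorization: Bearer $TB_READ_TOKEN" \
     "https://api.europe-west2.gcp.tinybird.co/v0/pipes/late_shipments.json"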

Conclusion

Throughout this tutorial, you've built a real-time shipment tracking API using Tinybird, covering data ingestion, transformation, and API publication. Tinybird empowers developers to handle real-time data analytics at scale, without the overhead of managing infrastructure. Sign up for Tinybird to build and deploy your first real-time data APIs in a few minutes.
