<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Fauna</title>
    <description>The latest articles on Forem by Fauna (@fauna_admin).</description>
    <link>https://forem.com/fauna_admin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F220097%2F6333f971-f032-4097-a7c8-64749ae2630c.png</url>
      <title>Forem: Fauna</title>
      <link>https://forem.com/fauna_admin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/fauna_admin"/>
    <language>en</language>
    <item>
      <title>Comparing Fauna and DynamoDB</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Wed, 24 May 2023 15:39:21 +0000</pubDate>
      <link>https://forem.com/fauna_admin/comparing-fauna-and-dynamodb-ie</link>
      <guid>https://forem.com/fauna_admin/comparing-fauna-and-dynamodb-ie</guid>
      <description>&lt;p&gt;Fauna and DynamoDB are both serverless databases, but their design goals, architecture, and use cases are very different. In this post, I will overview both systems, discuss where they shine and where they don’t, and explain how various engineering and product decisions have created fundamentally different value propositions for database users.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB’s design philosophy: Availability and predictability
&lt;/h2&gt;

&lt;p&gt;AWS DynamoDB was developed in response to the success of Apache Cassandra. The Cassandra database was originally open sourced and abandoned by Facebook in 2008. My team at Twitter contributed extensively to it alongside the team from Rackspace that eventually became DataStax.&lt;/p&gt;

&lt;p&gt;However, in an odd twist of history, Cassandra itself was inspired by a &lt;a href="https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf"&gt;2007 paper&lt;/a&gt; from Amazon about a different, internal database called Dynamo — an eventually-consistent &lt;a href="https://fauna.com/blog/what-exactly-is-a-key-value-store"&gt;key-value store&lt;/a&gt; that was used for high-availability shopping cart storage. Amazon cared a lot about shopping carts long before they had a web services business. Within Amazon, the Dynamo paper, and thus the roots of DynamoDB, predate any concept of offering a database product to external customers.&lt;/p&gt;

&lt;p&gt;DynamoDB and Cassandra both focused on two things: high availability and low latency. To achieve this, their initial releases sacrificed everything else one might value from traditional operational databases like PostgreSQL: transactionality, database normalization, document modeling, indexes, foreign keys, and even a query planner. DynamoDB did improve on the original Dynamo architecture by making single-key writes serializable and dropping the baroque CRDT reconciliation scheme, and on Cassandra by having a somewhat more humane API.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB’s architecture
&lt;/h2&gt;

&lt;p&gt;DynamoDB’s architecture essentially puts a web server in front of a collection of B-tree partitions (think &lt;a href="https://en.wikipedia.org/wiki/Berkeley_DB"&gt;BDB&lt;/a&gt; databases) into which documents are consistently hashed. Documents are columnar, but do not have a schema.&lt;/p&gt;

&lt;p&gt;Within a DynamoDB region, each data partition is replicated three times. Durability is guaranteed by requiring synchronous majority commits on writes. Consistency is only enforced within a single partition which, in practice, means a single document, since partition boundaries cannot be directly managed. Writes always go through a leader replica first; reads can come from any replica in eventually-consistent mode, or from the leader replica in strongly consistent mode.&lt;/p&gt;
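
&lt;p&gt;A minimal sketch of what this looks like at the API level, using plain request dictionaries (table and key names are hypothetical): the only difference between an eventually-consistent read, which any replica can serve, and a strongly consistent read, which the leader serves, is a single flag.&lt;/p&gt;

```python
# Sketch of the low-level DynamoDB GetItem request shape; table and key
# names are hypothetical. Strongly consistent reads are routed to the
# partition leader and cost twice as much in read capacity units.
def build_get_item(table, key, strong=False):
    request = {"TableName": table, "Key": key}
    if strong:
        request["ConsistentRead"] = True
    return request

eventual = build_get_item("orders", {"pk": {"S": "ORDER#123"}})
strong = build_get_item("orders", {"pk": {"S": "ORDER#123"}}, strong=True)
```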

&lt;p&gt;Although DynamoDB has recently added some new features like secondary indexes and multi-key transactions, their limitations reflect the iron law of DynamoDB: “everything is a table”.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tables, of course, are tables.&lt;/li&gt;
&lt;li&gt;Replication to other regions is implemented by creating additional tables that asynchronously apply changes from a per-replica, row-based changelog.&lt;/li&gt;
&lt;li&gt;Secondary indexes are implemented by asynchronously projecting data into additional tables — they are not serializable and not transactional.&lt;/li&gt;
&lt;li&gt;Transactionality is implemented via a multi-phase lock — presumably DynamoDB keeps a hidden lock table, which is directly reflected in the additional costs for transactionality. DynamoDB transactions are not ACID (they are &lt;a href="https://fauna.com/blog/a-comparison-of-scalable-database-isolation-levels"&gt;neither isolated nor serializable&lt;/a&gt;) and cannot effectively substitute for relational transactions. Transaction state is not visible to replicas or even to secondary indexes within the same replica.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you may predict from the above, the DynamoDB literature is absolutely packed with examples of “single-table design” using aggressive NoSQL-style denormalization, and use of the more complex features is generally discouraged. DynamoDB’s pricing is likewise designed around eventually-consistent single tables, even though in replicated and indexed scenarios individual queries must often interact with multiple tables, sometimes repeatedly. Global tables, added a few years later, raise the overall price significantly while preserving the same eventually-consistent data integrity compromise.&lt;/p&gt;
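
&lt;p&gt;To make “single-table design” concrete, here is a sketch (entity and key names invented for illustration) of how several entity types are packed into one table by overloading a composite key:&lt;/p&gt;

```python
# Hypothetical single-table items: entity type is encoded in the
# composite (partition key, sort key) pair rather than in separate tables.
items = [
    {"pk": "CUSTOMER#42", "sk": "PROFILE",        "name": "Ada"},
    {"pk": "CUSTOMER#42", "sk": "ORDER#2023-001", "total": 100},
    {"pk": "CUSTOMER#42", "sk": "ORDER#2023-002", "total": 250},
]

# "All orders of customer 42" becomes a single-partition query with a
# sort-key prefix condition: no join, but the layout must be designed
# up front around this exact access pattern.
orders = [i for i in items
          if i["pk"] == "CUSTOMER#42" and i["sk"].startswith("ORDER#")]
```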

&lt;p&gt;Additional challenges lie in the query model itself. Unlike Fauna’s query language FQL or SQL, DynamoDB’s API does not support dependent reads or intra-query computation. In contrast, Fauna allows developers to encapsulate complex business logic in transactions without any consistency, latency, or availability penalty.&lt;/p&gt;

&lt;p&gt;DynamoDB works best for the use cases for which it was originally designed — scenarios where data can be organized by hand to match a constrained set of predetermined query patterns; where low latency from a single region is enough; and where multi-document updates are the exception, not the rule. Examples of these restricted use cases include storing locks, serving as a durable cache for a different, less scalable database like an RDBMS, or holding less-critical, transient data like the original shopping cart use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fauna’s design philosophy: A productivity journey
&lt;/h2&gt;

&lt;p&gt;Fauna, on the other hand, was inspired from our experience at Twitter delivering a global real-time consumer internet service and API. Our team has extensively used and contributed to MySQL, Cassandra, Memcache, Redis, and many other popular data systems. Rather than focus on helping people optimize workloads that are already at scale, we wanted to help people develop functionality quickly and scale it easily over time.&lt;/p&gt;

&lt;p&gt;We wanted to make it possible for any development team to iterate on their application along the journey from small to large &lt;em&gt;without&lt;/em&gt; having to become database experts and spend their time on caching, denormalization, replication, architectural rewrites, and everything else that distracts from building a successful software product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fauna’s architecture
&lt;/h2&gt;

&lt;p&gt;To further this goal, Fauna uses a &lt;a href="https://fauna.com/blog/consistency-without-clocks-faunadb-transaction-protocol"&gt;unique architecture&lt;/a&gt; that guarantees low latency and transactional consistency across all replicas and indexes even with global replication, and offers a &lt;a href="https://docs.fauna.com/fauna/current/api/fql/"&gt;unique query language&lt;/a&gt; that preserves key relational concepts like ACID transactions, foreign keys, unique constraints, and stored procedures, while also enabling modern non-relational concepts like document-oriented modeling and declarative procedural indexing.&lt;br&gt;
If everything is a table in DynamoDB, in Fauna, everything is a transaction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All queries are expressed as atomic transactions.&lt;/li&gt;
&lt;li&gt;Transactions are made durable in a partitioned, replicated, strongly-consistent statement-based log.&lt;/li&gt;
&lt;li&gt;Data replicas apply transaction statements from the log in deterministic order, guaranteeing ACID properties without additional coordination.&lt;/li&gt;
&lt;li&gt;These properties apply to everything, including secondary indexes and other read and write transactions.&lt;/li&gt;
&lt;li&gt;Read-only transactions achieve lower latency than writes by skipping the log, while remaining fully consistent with additional safeguards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unlike DynamoDB, Fauna shines in the same areas the SQL RDBMS does: modeling messy real-world interaction patterns that start simply but must evolve and scale over time. Unlike SQL, Fauna’s API and security model is designed for the modern era of mobile, browser, edge, and &lt;a href="https://fauna.com/client-serverless"&gt;serverless&lt;/a&gt; applications.&lt;/p&gt;
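
&lt;p&gt;The log-then-apply scheme above can be illustrated with a toy model (not Fauna’s actual implementation): once replicas agree on the order of statements, each applies them independently and all converge on the same state.&lt;/p&gt;

```python
# Toy model of deterministic log application: every replica applies the
# same ordered statements to its own copy of the data and converges on
# an identical state without further coordination.
log = [
    ("set", "balance:alice", 100),
    ("set", "balance:bob", 50),
    ("incr", "balance:alice", -30),
    ("incr", "balance:bob", 30),
]

def apply_log(statements):
    state = {}
    for op, key, value in statements:
        if op == "set":
            state[key] = value
        elif op == "incr":
            state[key] = state.get(key, 0) + value
    return state

replica_a = apply_log(log)
replica_b = apply_log(log)
```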

&lt;p&gt;Like DynamoDB, and unlike the RDBMS, Fauna transparently manages operational concerns like replication, data consistency, and high availability. However, a major difference from DynamoDB is the scalability model. DynamoDB scales by predictively splitting and merging partitions based on observed throughput and storage capacity. By definition, this works well for predictable workloads, while being less ideal for unpredictable ones, because &lt;a href="https://aws.amazon.com/blogs/database/how-amazon-dynamodb-adaptive-capacity-accommodates-uneven-data-access-patterns-or-why-what-you-know-about-dynamodb-might-be-outdated/"&gt;autoscaling changes take time&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Fauna, on the other hand, scales dynamically. As an API, all resources including compute and storage are potentially available to all users at any time. Similar to operating system multithreading, Fauna is continuously scheduling, running, and coordinating queries across all users of the service. Resource consumption is tracked and billed, and our team scales the capacity of each region in aggregate, not on a per-user basis.&lt;/p&gt;

&lt;p&gt;Naturally, this design, and the related benefits, has a different cost structure than something like DynamoDB. For example, there is no way to create an unreplicated Fauna database or to disable transactions. Like DynamoDB, Fauna has metered pricing that scales with the resources your workload actually consumes. But unlike DynamoDB, you are not charged per low-level read and write operation, per replica, or per index, because our base case is DynamoDB’s outlier case: the normalized, indexed data model, with the transactional, multi-region access pattern.&lt;/p&gt;

&lt;p&gt;Higher levels of abstraction exist to deliver higher levels of productivity. Fauna offers a much higher level of abstraction than DynamoDB, and our pricing reflects that as well — it includes by default key characteristics that DynamoDB does not. At Fauna we want to provide a database with the highest possible level of abstraction that solves the use cases you would traditionally turn to a relational database for, so that you don’t have to worry about any of the low level concerns at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an API worth?
&lt;/h2&gt;

&lt;p&gt;Most other databases aside from DynamoDB and Fauna are delivered as managed &lt;a href="https://fauna.com/blog/cloud-databases-types-advantages-and-considerations"&gt;cloud infrastructure&lt;/a&gt;, and billed on a provisioned basis that directly reflects the vendor’s costs and those costs alone. Serverless infrastructure is relatively new — S3 is perhaps the first service with a serverless billing model to reach widespread adoption — and serverless databases are even newer. The serverless model in DynamoDB is a retrofit. It is essentially still a provisioned system with the addition of predictive autoscaling.&lt;/p&gt;

&lt;p&gt;Instead, serverlessness to date has mainly been restricted to vertically-integrated, single-purpose APIs. These APIs have been monetized indirectly like Twitter, billed per-action like Twilio, or billed as a percentage of the value exchanged via the API between third parties — like Stripe.&lt;br&gt;
Serverless infrastructure, as we all know, is actually &lt;a href="https://twitter.com/IamStan/status/1018755075827814400"&gt;made from servers&lt;/a&gt;. It has a more complex accounting challenge than vertically-integrated APIs, and is constrained by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Variance in resource utilization per request&lt;/li&gt;
&lt;li&gt;Variance in request volume over time&lt;/li&gt;
&lt;li&gt;Variance in request locality&lt;/li&gt;
&lt;li&gt;Underlying static costs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The multi-tenancy of serverless infrastructure creates a fundamentally better customer experience. Who wants to pay for capacity they aren’t using? Who wants to have their application degrade because they didn’t buy enough capacity in advance? It’s also a better vendor experience, since no vendor wants to waste infrastructure, and it can be more environmentally friendly.&lt;br&gt;
However, the vendor’s aggregate price across all customers must cover the static infrastructure costs, which are tightly coupled and resistant to change. (As a practical matter, a vendor can’t upgrade or downgrade CPUs, memory, disks, and networks independently or on demand, even when using managed cloud services.) The aggregate price must also correlate with the business value recognized, and it must be appropriately apportioned based on the realization of that value for each individual customer over time.&lt;/p&gt;

&lt;p&gt;Compared to simply marking up the incremental cost of a server, this pricing problem is hard. Let’s discuss the solutions that DynamoDB and Fauna have found.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB pricing
&lt;/h2&gt;

&lt;p&gt;For the base use case DynamoDB was designed for, its pricing is relatively clear and straightforward. However, once you add usage of newer features like global replication, indexes, and transactions, the pricing becomes more opaque, and it can become very difficult to predict costs in advance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fauna pricing
&lt;/h2&gt;

&lt;p&gt;Fauna’s &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/billing"&gt;pricing&lt;/a&gt; by default maps to the underlying architectural differences we mentioned above and reflects the enhanced capability that you can tap into with Fauna.&lt;br&gt;
Some key differences include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compute&lt;/strong&gt; - DynamoDB doesn’t support computation of any kind within a query, but Fauna does. Thus, Fauna charges separately for compute costs. In DynamoDB, since computation for any particular workload can’t be done in the database at all, it must be done application-side in a compute environment like AWS Lambda, which has its own cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional pricing dimensions&lt;/strong&gt; - DynamoDB has additional &lt;a href="https://www.logicata.com/blog/aws-dynamodb-pricing/"&gt;pricing differences&lt;/a&gt; depending on specific regions, infrequent access, transaction isolation levels, and many other characteristics. This significantly complicates any cost and capacity planning exercise, requiring detailed and ongoing adjustments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Powering a SaaS application
&lt;/h2&gt;

&lt;p&gt;These architectural differences become clear if we consider a typical SaaS application, for example, a CRM. We have an accounts table with 20 secondary indexes defined for all the possible sort fields (DynamoDB’s maximum — Fauna has no limit). We also have an activity table with 10 indexes, and a users table with 5 indexes. Viewing just the default account screen queries 7 indexes and 25 documents. A typical activity update transactionally updates 3 documents at a time with 10 dependency checks and modifies all 35 indexes.&lt;/p&gt;

&lt;p&gt;And of course, we have replicated this data globally to two additional regions in DynamoDB. We will also do consistency checks on all data returned from indexes.&lt;/p&gt;
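
&lt;p&gt;As a rough illustration of the read side of that account screen in DynamoDB (the unit costs below are a simplified, assumed model, not actual DynamoDB pricing): strongly consistent reads cost twice as much as eventually consistent ones, and index results need extra consistency checks against the base table.&lt;/p&gt;

```python
# Simplified read-cost model for the account screen described above.
# Unit costs are illustrative assumptions, not actual DynamoDB pricing.
index_queries = 7       # secondary indexes consulted per screen view
documents_read = 25     # documents fetched per screen view
consistency_checks = index_queries  # extra base-table reads to verify index results
strong_factor = 2       # strongly consistent reads cost 2x eventual ones

eventual_units = index_queries + documents_read
strong_units = (eventual_units + consistency_checks) * strong_factor
```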

&lt;p&gt;In Fauna, we do not need to configure replication or do any additional consistency checking, and we benefit greatly from Fauna’s index write coalescing. Thus the database usage costs alone with Fauna, setting aside the engineering savings, are lower than with DynamoDB, because Fauna’s architecture was designed to support these use cases from the beginning.&lt;/p&gt;

&lt;p&gt;Even if we assume that Fauna will require multiple write operations per document because of all the indexes, the result does not materially change. Fauna’s query pattern could also be improved by using unique constraints instead of dependency reads, which would reduce costs further.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the right tool for the job
&lt;/h2&gt;

&lt;p&gt;Surface-level comparisons of DynamoDB and Fauna, made just because both are “serverless”, can lead to false conclusions.&lt;/p&gt;

&lt;p&gt;Because DynamoDB is designed for the simple use case rather than the complex, it can be more cost effective for those simple use cases. Even though core pricing for those simple use cases might be lower, you always have to consider TCO — for DynamoDB that means factoring in additional costs like manual partitioning and the eventually-consistent transactional behavior that reflect its roots as a lower-level system.&lt;/p&gt;

&lt;p&gt;As sophistication grows — for example, if you configure a dozen indexes for a global deployment in DynamoDB — you will find your write and storage costs have multiplied by an order of magnitude compared to a single-region, unindexed table. If you then make those writes transactional or start doing dependent reads, your costs increase even more. On the other hand, if you try to use Fauna as an application-adjacent durable cache at scale, you may find you are paying for data replication and transactional consistency that you don’t actually need.&lt;/p&gt;
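
&lt;p&gt;The order-of-magnitude claim can be checked with back-of-the-envelope arithmetic. This is a deliberately simplified model (it ignores item size, storage, and per-region price differences), not a pricing calculator:&lt;/p&gt;

```python
# Back-of-the-envelope write amplification: each base-table write also
# updates every secondary index, and the whole set of writes is repeated
# in every replicated region. Transactional writes roughly double the
# per-write cost. All factors are illustrative assumptions.
def write_units(writes, indexes=0, regions=1, transactional=False):
    per_write = 1 + indexes            # base table plus one update per index
    if transactional:
        per_write *= 2                 # assumed transactional surcharge
    return writes * per_write * regions

baseline = write_units(1)                               # single-region, unindexed
global_indexed = write_units(1, indexes=12, regions=3)  # dozen indexes, 3 regions
```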

&lt;p&gt;It is more accurate to say that both DynamoDB and Fauna are great solutions that deliver on the promise of serverless when they are used for their correct purposes, and expensive when used incorrectly. This seems like a universal rule, but it actually isn’t. Most databases, even in the managed cloud, are disproportionately expensive for intermittent or variable workloads, which are prevalent in the real world. This is the benefit of the serverless model: an order of magnitude less waste for both the customer and the vendor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At Fauna, we recognize that DynamoDB has pushed the envelope in distributed cloud databases and we are grateful for that. We share the same greater mission. At the same time, we know we as an industry can do better than &lt;em&gt;all&lt;/em&gt; existing databases for mission-critical operational workloads, whether key-value, document-oriented, or based on SQL. I hope this post has provided you with a clearer understanding of the motivations, architectures, and business value of both DynamoDB and Fauna. And further, that this understanding helps you make more informed decisions about which tool is right for which job.&lt;/p&gt;

</description>
      <category>fauna</category>
      <category>dynamodb</category>
      <category>database</category>
    </item>
    <item>
      <title>Side-by-side comparison of serverless databases</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Thu, 29 Dec 2022 16:53:31 +0000</pubDate>
      <link>https://forem.com/fauna/side-by-side-comparison-of-serverless-databases-41hk</link>
      <guid>https://forem.com/fauna/side-by-side-comparison-of-serverless-databases-41hk</guid>
      <description>&lt;p&gt;Serverless databases make it easy to build and scale your application because they abstract away the underlying infrastructure and automatically scale to meet the need of your application. A serverless database lets you focus on your code without worrying about capacity planning, infrastructure maintenance, and server management.&lt;/p&gt;

&lt;p&gt;There are quite a few options for serverless databases, including established solutions like DynamoDB and newer offerings such as Mongo Atlas and CockroachDB serverless. This article compares commonly used serverless databases so you can make an informed decision when picking a new database for your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  MongoDB (Mongo Atlas)
&lt;/h2&gt;

&lt;p&gt;MongoDB is a widely used NoSQL database. MongoDB Atlas offers both a provisioned managed service and a serverless offering; this article compares the serverless offering.&lt;/p&gt;

&lt;p&gt;MongoDB Atlas offers fully managed serverless instances of MongoDB. With Mongo Atlas, you don’t have to manage any infrastructure by yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Connection over HTTP
&lt;/h3&gt;

&lt;p&gt;Mongo Atlas allows your application to connect to your database through an HTTP connection.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚠️ Multi-region availability
&lt;/h3&gt;

&lt;p&gt;By default, MongoDB Atlas is deployed in a single region. However, MongoDB Atlas can automatically replicate data across multiple servers for improved reliability and availability. It is partially automated, and you’ll need prior knowledge of sharding and replication.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚠️ Schema design (Schema-less, non-relational)
&lt;/h3&gt;

&lt;p&gt;MongoDB is flexible and suitable for both structured and unstructured data. Unlike a relational database, MongoDB doesn’t support foreign-key joins, so modeling data relationships can be tricky. There are ways to work around this problem in MongoDB; review &lt;a href="https://stackoverflow.com/questions/31480088/join-two-collection-in-mongodb-using-node-js"&gt;this thread&lt;/a&gt; to learn more about data modeling in MongoDB.&lt;/p&gt;
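
&lt;p&gt;One common workaround is the &lt;code&gt;$lookup&lt;/code&gt; aggregation stage, which joins two collections at query time. A minimal sketch of the pipeline shape, with hypothetical collection and field names:&lt;/p&gt;

```python
# A $lookup stage joins documents from the "orders" collection onto
# customers at query time; collection and field names are hypothetical.
pipeline = [
    {"$match": {"active": True}},
    {"$lookup": {
        "from": "orders",             # the "foreign" collection
        "localField": "customer_id",  # field on the customers side
        "foreignField": "customer_id",
        "as": "orders",               # joined documents land in this array
    }},
]
```

&lt;p&gt;With a real driver this pipeline would be passed to something like &lt;code&gt;db.customers.aggregate(pipeline)&lt;/code&gt;. Note that &lt;code&gt;$lookup&lt;/code&gt; runs per query; it does not add a foreign-key constraint.&lt;/p&gt;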

&lt;h3&gt;
  
  
  ❌ No cold starts
&lt;/h3&gt;

&lt;p&gt;If you have a large database, MongoDB Atlas clusters &lt;strong&gt;&lt;em&gt;can have a long cold start time&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚠️  ACID transactions
&lt;/h3&gt;

&lt;p&gt;Multi-document transactions are not enabled by default in MongoDB or MongoDB Atlas clusters. However, you can configure MongoDB to support multi-document ACID transactions based on your use case. &lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB
&lt;/h2&gt;

&lt;p&gt;DynamoDB is a fully managed NoSQL database from Amazon. It supports key-value and document data structures. DynamoDB has a pay-as-you-go model, and it provides auto-scaling.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Connection over HTTP
&lt;/h3&gt;

&lt;p&gt;DynamoDB allows connection over HTTP.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚠️ Multi-region availability
&lt;/h3&gt;

&lt;p&gt;You can set up multi-region availability with DynamoDB’s global tables feature. However, keep in mind there is significant configuration required to get it up and running properly.&lt;/p&gt;

&lt;p&gt;DynamoDB requires users to choose a partition key that determines how data is grouped and distributed among partitions. How you choose this key impacts DynamoDB’s scalability. A thorough understanding of how your data will be accessed is critical to selecting the most appropriate partitioning key and strategy.&lt;/p&gt;
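
&lt;p&gt;The choice is baked in at table creation. A sketch of the relevant portion of a CreateTable request (table and attribute names are hypothetical):&lt;/p&gt;

```python
# Key schema portion of a DynamoDB CreateTable request. The HASH key
# determines how items are distributed across partitions; it cannot be
# changed later without migrating to a new table. Names are hypothetical.
table_definition = {
    "TableName": "user_events",
    "AttributeDefinitions": [
        {"AttributeName": "user_id",  "AttributeType": "S"},
        {"AttributeName": "event_ts", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "user_id",  "KeyType": "HASH"},   # partition key
        {"AttributeName": "event_ts", "KeyType": "RANGE"},  # sort key
    ],
}
```

&lt;p&gt;Queries that don’t supply the partition key fall back to expensive table scans, which is why access patterns must be known up front.&lt;/p&gt;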

&lt;h3&gt;
  
  
  ⚠️ Schema design (Schema-less)
&lt;/h3&gt;

&lt;p&gt;DynamoDB is largely schema-less. Like many of its NoSQL siblings, DynamoDB lacks explicit support for relational data. It is designed with a denormalized schema in mind. Document sizes are also limited to 400 KB in DynamoDB, forcing developers to denormalize the application data.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ No cold starts
&lt;/h3&gt;

&lt;p&gt;DynamoDB doesn’t have any cold start-related concerns. &lt;/p&gt;

&lt;h3&gt;
  
  
  ⚠️ ACID transactions
&lt;/h3&gt;

&lt;p&gt;DynamoDB transactions are only ACID-compliant within a single region. DynamoDB does not support strongly consistent reads across regions.&lt;/p&gt;

&lt;p&gt;Follow this article for a detailed comparison of &lt;a href="https://fauna.com/blog/compare-fauna-vs-dynamodb"&gt;Fauna and DynamoDB&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  CockroachDB
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;As with Mongo, we are focusing on the serverless offering of CockroachDB, not the provisioned managed service, since the provisioned service doesn’t offer pay-per-usage pricing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;CockroachDB is a distributed SQL database system. It is based on the idea of a "cockroach cluster," a group of nodes that work together to provide fault tolerance and high availability. Each node in a CockroachDB cluster can serve read and write requests, and the cluster can automatically recover from failures and distribute data evenly across the nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Connection over HTTP
&lt;/h3&gt;

&lt;p&gt;CockroachDB supports connection over HTTP. &lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Multi-region availability
&lt;/h3&gt;

&lt;p&gt;CockroachDB provides out-of-the-box multi-region availability without further configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚠️ Schema design (SQL)
&lt;/h3&gt;

&lt;p&gt;CockroachDB has a dynamic schema. It is relational and supports SQL. Since CockroachDB is based on the traditional RDBMS model, it may take significant work to evolve unstructured data as your application grows.&lt;/p&gt;

&lt;h3&gt;
  
  
  ❌ No cold starts
&lt;/h3&gt;

&lt;p&gt;For large datasets, CockroachDB clusters may experience cold starts.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ ACID transactions
&lt;/h3&gt;

&lt;p&gt;CockroachDB fully supports strong consistency and ACID transactions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fauna
&lt;/h2&gt;

&lt;p&gt;Fauna is a serverless database that combines the flexibility of NoSQL with the relational querying capabilities and consistency of SQL. You get the best of both worlds when using Fauna. It is fully managed, auto-scales, and does not require infrastructure management.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Connection over HTTP
&lt;/h3&gt;

&lt;p&gt;Fauna allows your applications to connect to the database over HTTP. You can use a Fauna driver or Fauna's GraphQL interface to communicate with the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Multi-region availability
&lt;/h3&gt;

&lt;p&gt;Fauna provides multi-region availability out of the box; it will auto-replicate your data across servers. It is distributed by default within a geographic region or across the globe. You always get strong consistency among region groups. Fauna is designed to keep the data closest to your users, improving reliability and availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Schema design (Document-relational)
&lt;/h3&gt;

&lt;p&gt;Fauna follows a &lt;a href="https://docs.fauna.com/fauna/current/learn/introduction/document_relational"&gt;document-relational&lt;/a&gt; database model. It &lt;strong&gt;combines the flexibility and familiarity of JSON documents with the relationships and querying power of a traditional relational database.&lt;/strong&gt; &lt;br&gt;
In short, you get the best of both SQL and NoSQL worlds with Fauna. Because of the flexible nature of the Fauna database, it is effortless to scale and evolve your data schema as your application grows.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ No cold starts
&lt;/h3&gt;

&lt;p&gt;Fauna has zero cold starts. &lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ ACID transactions
&lt;/h3&gt;

&lt;p&gt;One of the features of Fauna that has generated the most excitement is its strongly consistent distributed ACID transaction engine. Fauna provides strong consistency of data globally through an implementation of the &lt;a href="https://fauna.com/blog/distributed-consistency-at-scale-spanner-vs-calvin"&gt;Calvin&lt;/a&gt; protocol.&lt;/p&gt;

&lt;p&gt;Below you’ll find a visual comparison of all these serverless database offerings.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Mongo Atlas&lt;/th&gt;
&lt;th&gt;DynamoDB&lt;/th&gt;
&lt;th&gt;CockroachDB&lt;/th&gt;
&lt;th&gt;Fauna&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Connection over HTTP&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-region availability&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No cold starts&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACID transactions&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flexible schema design&lt;/td&gt;
&lt;td&gt;⚠️ (NoSQL only)&lt;/td&gt;
&lt;td&gt;⚠️ (NoSQL only)&lt;/td&gt;
&lt;td&gt;⚠️ (SQL only)&lt;/td&gt;
&lt;td&gt;✅ (Document-relational)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you are considering Fauna, create a &lt;a href="https://dashboard.fauna.com/"&gt;free account&lt;/a&gt; (no credit card required) and give it a go. If you have questions, you can reach out in the Fauna &lt;a href="https://discord.gg/NHwJFdG2B2"&gt;Discord channel&lt;/a&gt; or Fauna &lt;a href="https://forums.fauna.com/c/help-general"&gt;community forum&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>database</category>
      <category>fauna</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Implementing Fauna as infrastructure as code with Serverless Framework</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Thu, 27 Oct 2022 17:26:24 +0000</pubDate>
      <link>https://forem.com/fauna/implementing-fauna-as-infrastructure-as-code-with-serverless-framework-3m99</link>
      <guid>https://forem.com/fauna/implementing-fauna-as-infrastructure-as-code-with-serverless-framework-3m99</guid>
      <description>&lt;p&gt;This article demonstrates how to use Fauna as infrastructure as code (IaC) in your application using the Serverless Framework, one of the most popular tools for managing infrastructure as code. Fauna has a dedicated &lt;a href="https://www.npmjs.com/package/@fauna-labs/serverless-fauna#installation"&gt;plugin&lt;/a&gt; for the Serverless Framework that gives you complete control to manage your Fauna resources. You can integrate it into your test and CI/CD pipelines to keep your databases in sync across multiple environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of IaC
&lt;/h2&gt;

&lt;p&gt;Before we dive deep into implementing Fauna as IaC, let's discuss why you might want to integrate IaC.&lt;/p&gt;

&lt;p&gt;There are three main benefits of IaC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decreased risks:&lt;/strong&gt; Provisioning all of your infrastructure manually can be risky, especially if you have multiple dependencies among services. Complex deployments are prone to human error. When you automate the process with IaC, you reduce these risks. Your infrastructure also becomes testable, and you can spin up multiple environments (exact replicas of production).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient software development lifecycle:&lt;/strong&gt; With IaC, infrastructure provisioning becomes more reliable and consistent. Developers get complete control of the infrastructure through code: they can script it once and reuse that code across environments, saving time and effort.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-documenting and reduced administration:&lt;/strong&gt; IaC is self-documenting. And it reduces administrative overhead, allowing your engineering efforts to be focused on new feature development.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting started with the Fauna Serverless Framework plugin
&lt;/h2&gt;

&lt;p&gt;Install the Serverless Framework plugin with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @fauna-labs/serverless-fauna &lt;span class="nt"&gt;--save-dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or with yarn:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;yarn add @fauna-labs/serverless-fauna
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your &lt;code&gt;serverless.yml&lt;/code&gt; file and add the following code to register the Fauna plugin and client configuration in your project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@fauna-labs/serverless-fauna"&lt;/span&gt;
&lt;span class="na"&gt;fauna&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${env:FAUNA_ROOT_KEY}&lt;/span&gt;
    &lt;span class="c1"&gt;# domain: db.fauna.com&lt;/span&gt;
    &lt;span class="c1"&gt;# port: 433&lt;/span&gt;
    &lt;span class="c1"&gt;# scheme: https&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
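&lt;p&gt;The &lt;code&gt;${env:FAUNA_ROOT_KEY}&lt;/code&gt; reference is resolved from your shell environment, so export the variable before deploying. The value below is a placeholder; substitute an admin key secret from your Fauna dashboard.&lt;/p&gt;

```shell
# Placeholder value; substitute an admin key secret from your Fauna dashboard
export FAUNA_ROOT_KEY="fnXXXXXXXXXXXX"
```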



&lt;p&gt;By default, the domain is set to &lt;code&gt;db.fauna.com&lt;/code&gt;. You can create new collections by adding the collection name under the &lt;code&gt;collections&lt;/code&gt; field, as demonstrated in the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;fauna&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${env:FAUNA_ROOT_KEY}&lt;/span&gt;
    &lt;span class="c1"&gt;# domain: db.fauna.com&lt;/span&gt;
    &lt;span class="c1"&gt;# port: 433&lt;/span&gt;
    &lt;span class="c1"&gt;# scheme: https&lt;/span&gt;
  &lt;span class="na"&gt;collections&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Movies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Movies&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;some_data_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;some_data_value&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The collection configuration accepts the same parameters as the &lt;a href="https://docs.fauna.com/fauna/current/api/fql/functions/createcollection?lang=javascript#param_object"&gt;CreateCollection&lt;/a&gt; query.&lt;/p&gt;
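&lt;p&gt;For example, &lt;code&gt;history_days&lt;/code&gt; and &lt;code&gt;ttl_days&lt;/code&gt; are standard CreateCollection parameters, so (assuming the plugin passes them through unchanged) a collection could be configured with document history and expiration like this:&lt;/p&gt;

```yaml
fauna:
  collections:
    Movies:
      name: Movies
      # Standard CreateCollection parameters (assumed to be passed through by the plugin)
      history_days: 30   # days of document history to retain
      ttl_days: 90       # days before a document is eligible for removal
```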

&lt;p&gt;Similarly, you can add functions and indexes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;fauna&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${env:FAUNA_ROOT_KEY}&lt;/span&gt;
    &lt;span class="c1"&gt;# domain: db.fauna.com&lt;/span&gt;
    &lt;span class="c1"&gt;# port: 433&lt;/span&gt;
    &lt;span class="c1"&gt;# scheme: https&lt;/span&gt;
  &lt;span class="na"&gt;collections&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Movies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Movies&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;some_data_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;some_data_value&lt;/span&gt;

  &lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;double&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;double&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${file(./double.fql)}&lt;/span&gt;

  &lt;span class="na"&gt;indexes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;movies_by_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;movies_by_type&lt;/span&gt;
      &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:fauna.collections.Movies.name}&lt;/span&gt;
      &lt;span class="na"&gt;terms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fields&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;data.type&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
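&lt;p&gt;The function body is loaded from a separate FQL file via &lt;code&gt;${file(./double.fql)}&lt;/code&gt;. As an illustration, a hypothetical &lt;code&gt;double.fql&lt;/code&gt; matching the function's name could contain a query that doubles its argument:&lt;/p&gt;

```
Query(Lambda("x", Multiply(Var("x"), 2)))
```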



&lt;p&gt;You can review the &lt;a href="https://www.npmjs.com/package/@fauna-labs/serverless-fauna#installation"&gt;documentation&lt;/a&gt; for the Serverless Framework plugin to learn more about different functionalities and implementations. &lt;/p&gt;

&lt;p&gt;We have also created a hands-on tutorial on &lt;a href="https://fauna.com/blog/building-a-rest-api-with-aws-lambda-fauna-and-serverless-framework"&gt;building a REST API with Fauna, Serverless, and AWS Lambda&lt;/a&gt;. It will get you up and running with a simple project using Fauna and the Serverless Framework.&lt;/p&gt;

&lt;p&gt;If you are searching for something more comprehensive, we have also created a self-paced workshop that will guide you through building a real-world project with Fauna, Serverless Framework, and AWS services. You can find the workshop &lt;a href="https://aws.workshops.fauna.com/"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;blockquote&gt;
&lt;h4&gt;&lt;a href="https://aws.workshops.fauna.com/"&gt;Fauna AWS Workshop ~ Building an event-driven app with AWS services and Fauna&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;This hands-on guide walks you through building a real-world event-driven serverless application using AWS services (i.e., AWS Lambda, Step Functions, API Gateway) and Fauna. In this workshop, you build a vacation booking application (similar to Kayak or Redtag deals).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Got questions? Reach out in our &lt;a href="https://discord.gg/NHwJFdG2B2"&gt;Discord channel&lt;/a&gt; or in our &lt;a href="https://forums.fauna.com/c/help-general"&gt;community forum&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>fauna</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Delivering personalized content with Netlify’s Next.js Advanced Middleware and Fauna</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Wed, 26 Oct 2022 14:10:27 +0000</pubDate>
      <link>https://forem.com/fauna/delivering-personalized-content-with-netlifys-nextjs-advanced-middleware-and-fauna-1n0k</link>
      <guid>https://forem.com/fauna/delivering-personalized-content-with-netlifys-nextjs-advanced-middleware-and-fauna-1n0k</guid>
      <description>&lt;p&gt;It’s become a truism that the speed and performance of web applications, e-commerce sites, and websites are critical variables in converting visitors to paying customers. Modern consumers expect these assets to serve data quickly, accurately, and contextually. API-first platforms like Fauna and &lt;a href="https://www.netlify.com"&gt;Netlify&lt;/a&gt;, paired with other composable technologies designed for distributed workloads, drastically simplify the process of deploying applications that serve dynamic content at the edge with low latency, without complicated stitching or multi-team engineering efforts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serving globally distributed data in legacy architectures
&lt;/h2&gt;

&lt;p&gt;Historically, developers have been constrained in how they optimize the performance and experience of their applications by the underlying infrastructure - typically centralized servers and databases. The emergence of &lt;a href="https://en.wikipedia.org/wiki/Content_delivery_network"&gt;content delivery networks&lt;/a&gt; (CDNs) in the early 2000s enabled companies to host and distribute static assets - images, text files, and HTML files - to a globally distributed audience with low latency. Because these CDNs were globally distributed, the performance characteristics of static content were far superior to a traditional centralized model: a user accessing a static site in Shanghai would fetch that content from the CDN node in Shanghai, instead of being routed to a data center across the country. However, those benefits did not extend to dynamic content that changes and evolves over time; this category of content relies on compute and data services to transform the data based on context.&lt;/p&gt;

&lt;p&gt;In the case of a global or even regional application, relying on a centralized architecture to serve dynamic content can be suboptimal. Consider an e-commerce application with users visiting the site from Los Angeles and Toronto. Ideally, the application would serve content that’s unique to each user and may change depending on context - for example, where the user is physically located (and the discounts or prompts appropriate to that location), or whether the user arrived from a referring source. This data is ephemeral and dynamic in nature, and unique to each independent user accessing the application. If a user in Los Angeles is served data hosted in the US-East 1 data center in Virginia, performance suffers simply due to the physical distance between the user and the compute. As with static content on CDNs, we want to optimize for speed and performance by moving the processing elements as close to each user as possible. This means pushing the compute and data out to the edge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic personalization at the edge
&lt;/h2&gt;

&lt;p&gt;Personalization allows companies to serve information that’s tailored for a particular customer based on a defined set of characteristics (most commonly geo-based). For example, if a visitor on an e-commerce site is based in Seattle, they may be served a discount on an item based on an inventory surplus in a local warehouse. Being able to serve this type of dynamic data, however, is not trivial. The ability to move the compute and data for dynamic content as close as possible to users (as demonstrated in CDNs) was until very recently impossible; this has changed with the emergence of platforms like Fauna and Netlify. &lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.netlify.com/blog/nextjs-advanced-middleware-better-runtime-that-extends-nextjs/"&gt;Next.js Advanced Middleware&lt;/a&gt; (powered by &lt;a href="https://docs.netlify.com/edge-functions/overview/"&gt;Netlify Edge Functions&lt;/a&gt;) and Fauna’s distributed-by-default &lt;a href="https://fauna.com/blog/what-is-a-document-relational-database?utm_source=Devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=DevTo_FaunaNetlify"&gt;document-relational&lt;/a&gt; database, we’re able to move servers and databases closer to user locations - similar to the shift we’ve witnessed with static assets served on CDNs, but now for dynamic content. Together, Fauna and Netlify enable a transformation in the application architecture for personalization at the edge: developers now have the option to host business logic wherever it’s most performant. &lt;/p&gt;

&lt;p&gt;When a user writes to or reads from a node, the Netlify Edge Functions, images/CSS files, and pieces of business logic are all served from the closest possible compute node and corresponding data source node. Further, with the power of Fauna’s &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/user_defined_functions?utm_source=Devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=DevTo_FaunaNetlify"&gt;user-defined functions&lt;/a&gt; and advanced querying, you gain a new mental model for deciding where and how often work needs to be done; if you move processing further back in the stack with Fauna, you can build it into the places that have the tightest relationships with the data, optimized in ways that client-side JavaScript can’t be.&lt;/p&gt;

&lt;p&gt;While middleware is native to Next.js, out of the box it is limited to redirects and rewrites (you can send users to another page, or proxy other content into the application and show users new content). Netlify has extended the middleware API to give access to the full request, which allows for full transformation and personalization - whether location or referral based. With a proxy or a redirect, you would need to develop a custom page for each variation; the ability to transform pages directly with Netlify makes the engineering far less cumbersome. Netlify provides a good example of the power of page transformations &lt;a href="https://www.netlify.com/blog/rewrite-html-transform-page-props-in-nextjs/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Fauna + Netlify for edge personalization
&lt;/h2&gt;

&lt;p&gt;Fauna is delivered as a global API and is complementary to Netlify’s Edge Functions; ultimately, you don’t want to bloat your edge functions with code. Everything in the Fauna-Netlify architecture is server-side rendered, creating a faster experience by avoiding bloated client-side code that would tax the client’s CPU or browser memory. Personalization shrinks from a large client-side response to the few lines you need in the edge function, without client-side impact to the user and without sacrificing performance. Until very recently, this type of architecture wasn’t possible to build, especially for small teams. With Fauna and Netlify, you can get up and running with a working solution within hours.&lt;/p&gt;

&lt;p&gt;Companies would historically need DevOps, frontend, backend, and database management teams to handle the orchestration of all of the elements associated with a global application. Now, a single full-stack developer can handle all of these elements and sketch out an application in a matter of hours that has global deployment, global database replication, and all of the front end and back end configured. Fauna’s auto-replication, distribution-by-default, and serverless attributes paired with Netlify’s edge compute make it possible. There’s no need to account for sharding, provisioning, or global replication - it’s all delivered out-of-the-box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Netlify Edge Functions + Fauna example and code walk-through
&lt;/h2&gt;

&lt;p&gt;To move out of the abstract and into a real-world example, we hosted a webinar with Netlify where we did a code walk-through of a basic marketing site pushed to the edge on Fauna and Netlify. Check out the &lt;a href="https://www.youtube.com/watch?v=X5uesJKkbh8"&gt;accompanying webinar&lt;/a&gt; and start digging into the code in the &lt;a href="https://github.com/Shadid12/netlify-edge"&gt;repo&lt;/a&gt; to learn how to build the functionality with just a few lines of code. In this example, one user makes a request to the site from Toronto, Canada, and another from Portland, Oregon. Each request hits the compute node closest to the user’s location, and each user is served a unique page. Meanwhile, the request is also directed to the nearest Fauna replica to read and serve the data. The read is modified at the edge, as close as possible to the user - which wouldn’t be possible with a regionally hosted database or server. Both the compute and the data are at the edge, which ultimately reduces latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Fauna and Netlify unlock the ability to optimize for the most performant architectural decision based on the use case a team might be trying to solve, instead of being limited to whatever your legacy infrastructure may dictate.&lt;/p&gt;

&lt;p&gt;Are you ready to get started?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://app.fauna.com/sign-up?utm_source=Devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=DevTo_FaunaNetlify"&gt;Sign up for a free Fauna account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Grab your &lt;a href="https://dashboard.fauna.com/db/keys?utm_source=Devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=DevTo_FaunaNetlify"&gt;Fauna API key&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click the &lt;a href="https://app.netlify.com/start/deploy?repository=https://github.com/netlify/netlify-faunadb-example"&gt;Deploy to Netlify button&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Have any questions or want to engage with the Fauna and Netlify communities? Reach out in the Fauna &lt;a href="https://discord.gg/NHwJFdG2B2"&gt;Discord channel&lt;/a&gt; or Fauna &lt;a href="https://forums.fauna.com/c/help-general?utm_source=Devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=DevTo_FaunaNetlify"&gt;community forum&lt;/a&gt;, and also check out the &lt;a href="https://answers.netlify.com"&gt;Netlify Community Forums&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>netlify</category>
      <category>fauna</category>
      <category>database</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Transfer data in Fauna to your analytics tool using Airbyte</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Tue, 25 Oct 2022 21:48:01 +0000</pubDate>
      <link>https://forem.com/fauna/transfer-data-in-fauna-to-your-analytics-tool-using-airbyte-3g03</link>
      <guid>https://forem.com/fauna/transfer-data-in-fauna-to-your-analytics-tool-using-airbyte-3g03</guid>
      <description>&lt;p&gt;We are excited to introduce Fauna’s new &lt;a href="https://airbyte.com"&gt;Airbyte&lt;/a&gt; open source &lt;a href="https://docs.airbyte.com/integrations/sources/fauna/"&gt;connector&lt;/a&gt;. This connector lets you replicate Fauna data into your data warehouses, lakes, and analytical databases, such as Snowflake, Redshift, S3, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Airbyte
&lt;/h3&gt;

&lt;p&gt;With the proliferation of applications and data sources, companies are often required to build custom connectors for data transfer across their architectures. Most ETL (extract, transform, and load) tools require maintaining and updating these connectors as requirements change over time. Airbyte is an open source data pipeline platform that eliminates this burden by offering a robust ecosystem of connectors that scales without requiring you to maintain the connectors yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Fauna
&lt;/h3&gt;

&lt;p&gt;Fauna is a distributed &lt;a href="https://fauna.com/blog/what-is-a-document-relational-database"&gt;document-relational database&lt;/a&gt; delivered as a cloud API. Developers choose Fauna’s document-relational model because it combines the flexibility of NoSQL databases with the relational querying and ACID capabilities of SQL databases. This model is delivered as an API so you can focus on building features instead of worrying about operations or infrastructure management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Fauna + Airbyte
&lt;/h3&gt;

&lt;p&gt;Fauna and Airbyte both improve productivity and developer experience - together, the connector allows you to port and migrate transactional data in Fauna to your choice of analytical tools to drive business insights.&lt;/p&gt;

&lt;p&gt;Continue reading for a guide on how to configure the Fauna &lt;em&gt;source&lt;/em&gt; connector to transfer your database to one of the data analytics or warehousing &lt;a href="https://airbyte.com/connectors?connector-type=Destinations"&gt;destination connectors&lt;/a&gt; supported by &lt;a href="https://airbyte.com/"&gt;Airbyte&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Fauna source supports the following ways to export your data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Full refresh append sync mode&lt;/em&gt; copies all of your data to the destination, without deleting existing data.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Full refresh overwrite sync mode&lt;/em&gt; copies the whole stream and replaces data in the destination by overwriting it.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Incremental append sync mode&lt;/em&gt; periodically transfers new, changed, or deleted data to the destination.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Incremental deduped history sync mode&lt;/em&gt; copies new records from the stream and appends data in the destination, while providing a de-duplicated view mirroring the state of the stream in the source.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You need a destination database account and need to set up the &lt;a href="https://www.getdbt.com/"&gt;Data Build Tool&lt;/a&gt; (dbt™) to transform fields in your documents to columns in your destination. You also need to install Docker.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create a destination database account
&lt;/h3&gt;

&lt;p&gt;If you do not already have an account for the database associated with your destination connector, create an account and save the authentication credentials for setting up the destination connector to populate the destination database.&lt;/p&gt;
&lt;h3&gt;
  
  
  Set up dbt
&lt;/h3&gt;

&lt;p&gt;To access the fields in your Fauna source using SQL-style statements, create a dbt account and set up dbt as described in the Airbyte &lt;a href="https://docs.airbyte.com/operator-guides/transformation-and-normalization/transformations-with-dbt/"&gt;Transformations with dbt&lt;/a&gt; setup guide. The guide steps you through the setup for transforming the data between the source and destination, and connects you to the destination database.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install Docker
&lt;/h3&gt;

&lt;p&gt;The Fauna connector is an Airbyte Open Source integration, deployed as a Docker image. If you do not already have Docker installed, follow the &lt;a href="https://docs.docker.com/engine/install/"&gt;Install Docker Engine&lt;/a&gt; guide.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Set up the Fauna source
&lt;/h2&gt;

&lt;p&gt;Depending on your use case, set up one of the following sync modes for your collection.&lt;/p&gt;
&lt;h3&gt;
  
  
  Full refresh sync mode
&lt;/h3&gt;

&lt;p&gt;Follow these steps to fully sync the source and destination database.&lt;/p&gt;

&lt;p&gt;1- Use the &lt;a href="https://dashboard.fauna.com/"&gt;Fauna Dashboard&lt;/a&gt; or the &lt;code&gt;fauna-shell&lt;/code&gt; CLI to create a role that can read the collection to be exported. The Fauna source also needs access to the Collections resource so that it can find which collections are readable; this does not grant access to the documents in all collections, only the collection names. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CreateRole({
  name: "airbyte-readonly",
  privileges: [{
    resource: Collection("COLLECTION_NAME"),
    actions: { read: true }
  }],
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;COLLECTION_NAME&lt;/code&gt; with the collection name for this connector.&lt;/p&gt;

&lt;p&gt;2- Create a key that has the permissions associated with the role, referencing the role you created by its &lt;code&gt;name&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CreateKey({
  name: "airbyte-readonly",
  role: Role("airbyte-readonly"),
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  ref: Key("341909050919747665"),
  ts: 1662328730450000,
  role: Role("airbyte-readonly"),
  secret: "fnAEjXudojkeRWaz5lxL2wWuqHd8k690edbKNYZz",
  hashed_secret: "$2a$05$TGr5F3JzriWbRUXlKMlykerq1nnYzEUr4euwrbrLUcWgLhvWmnW6S"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the returned &lt;code&gt;secret&lt;/code&gt;; if you lose it, you must create a new key.&lt;/p&gt;

&lt;h3&gt;
  
  
  Incremental append sync mode
&lt;/h3&gt;

&lt;p&gt;Use incremental sync mode to periodically sync the source and destination, updating only new and changed data.&lt;/p&gt;

&lt;p&gt;Follow these steps to set up incremental sync.&lt;/p&gt;

&lt;p&gt;1- Use the &lt;a href="https://dashboard.fauna.com/"&gt;Fauna Dashboard&lt;/a&gt; or the &lt;code&gt;fauna-shell&lt;/code&gt; CLI to create an index, which lets the connector do incremental syncs. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CreateIndex({
  name: "INDEX_NAME",
  source: Collection("COLLECTION_NAME"),
  terms: [],
  values: [
    { "field": "ts" },
    { "field": "ref" }
  ]
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;INDEX_NAME&lt;/code&gt; with the name you configured for the Incremental Sync Index. Replace &lt;code&gt;COLLECTION_NAME&lt;/code&gt; with the name of the collection configured for this connector.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;|Index values|Description|
| --- | ----------- |
|`ts`| Last modified timestamp.|
|`ref`|Unique document identifier.|
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2- Create a role that can read the collection and index, and can access index metadata to validate the index settings. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CreateRole({
  name: "airbyte-readonly",
  privileges: [
    {
      resource: Collection("COLLECTION_NAME"),
      actions: { read: true }
    },
    {
      resource: Index("INDEX_NAME"),
      actions: { read: true }
    },
    {
      resource: Indexes(),
      actions: { read: true }
    }
  ],
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;COLLECTION_NAME&lt;/code&gt; with the name of the collection configured for this connector. Replace &lt;code&gt;INDEX_NAME&lt;/code&gt; with the name that you configured for the Incremental Sync Index.&lt;/p&gt;

&lt;p&gt;3- Create a key that has the permissions associated with the role, referencing the role you created by its &lt;code&gt;name&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CreateKey({
  name: "airbyte-readonly",
  role: Role("airbyte-readonly"),
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The query returns the new key, including its generated &lt;code&gt;secret&lt;/code&gt;:&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  ref: Key("341909050919747665"),
  ts: 1662328730450000,
  role: Role("airbyte-readonly"),
  secret: "fnAEjXudojkeRWaz5lxL2wWuqHd8k690edbKNYZz",
  hashed_secret: "$2a$05$TGr5F3JzriWbRUXlKMlykerq1nnYzEUr4euwrbrLUcWgLhvWmnW6S"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the returned &lt;code&gt;secret&lt;/code&gt;; you need to enter it in step 2 of the &lt;a href="https://deploy-preview-1193--fauna-docs.netlify.app/fauna/current/build/integrations/airbyte#install-docker"&gt;Install Docker&lt;/a&gt; procedure. If you lose the secret, you must create a new key.&lt;/p&gt;

&lt;p&gt;The Fauna source iterates through all indexes on the database. For each index it finds, the following conditions must be met for incremental sync:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The source must be able to &lt;code&gt;Get()&lt;/code&gt; the index, which means it needs read access to this index.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The source of the index must be a reference to the collection you are trying to sync.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The number of values must be two.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The number of terms must be zero.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The values must be equal to:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;field&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;field&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ref&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of the above conditions are checked in the order listed. If any check fails, the source skips that index.&lt;/p&gt;

&lt;p&gt;If no matching index is found during the initial setup, incremental sync isn't available for the given collection. No error is emitted, because the connector cannot determine whether you expect an index for that collection.&lt;/p&gt;

&lt;p&gt;If you find that the collection doesn't have incremental sync available, make sure that you followed all the setup steps, and that the source, terms, and values all match for your index.&lt;/p&gt;
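&lt;p&gt;To double-check, you can fetch the index definition from the Dashboard shell or &lt;code&gt;fauna-shell&lt;/code&gt; and compare it against the conditions above (names shown are placeholders):&lt;/p&gt;

```
Get(Index("INDEX_NAME"))
// In the result, confirm that:
//   source: Collection("COLLECTION_NAME")
//   terms:  []                                  (zero terms)
//   values: [{ field: "ts" }, { field: "ref" }] (exactly two values)
```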

&lt;h2&gt;
  
  
  Step 2: Deploy and launch Airbyte
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Refer to the &lt;a href="https://docs.airbyte.com/quickstart/deploy-airbyte"&gt;Deploy Airbyte&lt;/a&gt; instructions to install and deploy Airbyte. Enter the following commands to deploy the Airbyte server:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/airbytehq/airbyte.git
&lt;span class="nb"&gt;cd &lt;/span&gt;airbyte
docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the Airbyte banner displays, launch the Airbyte dashboard at &lt;code&gt;http://localhost:8000&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the &lt;strong&gt;Connections&lt;/strong&gt; menu item to start setting up your data source.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 3: Set up the Fauna source
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Airbyte dashboard, click the &lt;strong&gt;+ New connection button&lt;/strong&gt;. If you previously set up a source, click the &lt;strong&gt;Use existing source button&lt;/strong&gt; to choose that source.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Source type&lt;/strong&gt; dropdown, choose &lt;strong&gt;Fauna&lt;/strong&gt; and click the &lt;strong&gt;Set up source&lt;/strong&gt; button. This lists the configurable Fauna connector parameters. An in-app &lt;strong&gt;Setup Guide&lt;/strong&gt; in the right-side panel also gives detailed setup instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the following required parameters:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enter a descriptive name for this connection. The &lt;em&gt;name&lt;/em&gt; is displayed in the &lt;strong&gt;Connections&lt;/strong&gt; window connections list.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Domain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enter the domain of the database you want to export from. See &lt;a href="https://deploy-preview-1193--fauna-docs.netlify.app/fauna/current/learn/understanding/region_groups"&gt;Region Groups&lt;/a&gt; for region domains.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Port&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enter the default port number: &lt;code&gt;443&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Scheme&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enter the scheme used to connect to Fauna: &lt;code&gt;https&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Fauna Secret&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enter the saved Fauna secret that you use to authenticate with the database.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Page Size&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The page size controls how much data is fetched per request, which affects memory use and connector performance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Deletion Mode&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The deletion mode lets you specify whether to ignore document deletions or flag documents as deleted, depending on your use case.&lt;br&gt; Choose from the following options:&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Ignore&lt;/strong&gt; option ignores document deletions.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Deleted Field&lt;/strong&gt; option adds a column that records the date when the document was deleted. This maintains document history while letting the destination reconstruct deletion events.&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
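The &lt;strong&gt;Deleted Field&lt;/strong&gt; behavior can be sketched as follows. This is a hedged illustration, not the connector's implementation; the column name is assumed for the example.

```python
# Hedged sketch of the "Deleted Field" option: rather than dropping the
# destination row, a deletion date is recorded so the destination can
# reconstruct deletion events later.
from datetime import datetime, timezone

def mark_deleted(row: dict) -> dict:
    """Return a copy of `row` flagged as deleted instead of removing it."""
    flagged = dict(row)
    flagged["deleted_at"] = datetime.now(timezone.utc).isoformat()
    return flagged
```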
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After setting up the source, click the &lt;strong&gt;Set up source&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;The "All connection tests passed!" message confirms successful connection to the Fauna source. This minimally confirms:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-   The secret is valid.

&lt;ul&gt;
&lt;li&gt;  The connector can list collections and indexes.
&lt;/li&gt;
&lt;/ul&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Step 4: Set up the destination
&lt;/h2&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;New connection&lt;/strong&gt; window, choose a &lt;strong&gt;Destination&lt;/strong&gt; type and click the &lt;strong&gt;Set up destination&lt;/strong&gt; button. If you previously set up a destination, click the &lt;strong&gt;Use existing destination&lt;/strong&gt; button to select and use that destination. Otherwise, continue to set up a new destination.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Destination connector configuration parameters are unique to the destination. Populate the &lt;strong&gt;Set up the destination&lt;/strong&gt; fields according to the connector requirements, including authentication information if needed. A &lt;strong&gt;Setup Guide&lt;/strong&gt; is provided in the right-side panel with detailed setup instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you are done, click the &lt;strong&gt;Set up destination&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 5: Set up the connection
&lt;/h2&gt;

&lt;p&gt;Set up the connection to sync the source and destination.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Enter a descriptive name for the connection in the &lt;strong&gt;Name&lt;/strong&gt; field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose a &lt;strong&gt;Transfer &amp;gt; Replication&lt;/strong&gt; frequency, which is the data sync interval.&lt;/p&gt;

&lt;p&gt;You can choose the &lt;strong&gt;Manual&lt;/strong&gt; option to manually sync the data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;strong&gt;Streams &amp;gt; Destination Namespace&lt;/strong&gt; field, choose a destination namespace where the data is stored. Options include:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Mirror source structure&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Sets the name in the destination database to the name used for the Fauna source.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Other&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Uses another naming option, such as prefixing the database name with a string.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally, enter a stream name prefix in the &lt;strong&gt;Streams &amp;gt; Destination Stream Prefix&lt;/strong&gt; field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;strong&gt;Activate the streams you want to sync&lt;/strong&gt; section, click the &lt;code&gt;&amp;gt;&lt;/code&gt; arrow to expand the available fields:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;data&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Collection data.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ref&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Unique document identifier.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ts&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Data timestamp.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ttl&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Time-to-live interval.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The document is deleted if it is not modified within the &lt;code&gt;ttl&lt;/code&gt; time interval. The default value is &lt;code&gt;null&lt;/code&gt;, which means the TTL is not used. After a document is deleted this way, it is not displayed in temporal queries and the connector does not emit a &lt;code&gt;deleted_at&lt;/code&gt; row.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;code&gt;ref&lt;/code&gt; as the &lt;strong&gt;Primary key&lt;/strong&gt;. This uniquely identifies the document in the collection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose a &lt;strong&gt;Sync mode&lt;/strong&gt; as the source sync behavior, full or incremental.&lt;/p&gt;

&lt;p&gt;The first run of an incremental sync fetches the full database, the same as a full sync.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sync mode&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Incremental - Deduped history&lt;/td&gt;
&lt;td&gt;Syncs new records from the stream and appends data in the destination; also provides a deduplicated view that mirrors the state of the stream in the source.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Full refresh - Overwrite&lt;/td&gt;
&lt;td&gt;Syncs the whole stream and replaces data in the destination by overwriting it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Incremental - Append&lt;/td&gt;
&lt;td&gt;Syncs new records from the stream and appends data in the destination.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Full refresh - Append&lt;/td&gt;
&lt;td&gt;Syncs the whole stream and appends data in the destination.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If fewer than four options are displayed, it indicates that the index is incorrectly set up. See &lt;a href="https://fauna.com/blog/transfer-data-in-fauna-to-your-analytics-tool-using-airbyte#step-1-set-up-the-fauna-source"&gt;Step 1: Set up the Fauna source&lt;/a&gt;.&lt;/p&gt;
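The incremental modes can be sketched in Python. This is a hedged illustration of the general cursor pattern, not the connector's wire format or implementation; the record shape is assumed for the example.

```python
# Hedged sketch of "Incremental - Append": only records with a timestamp
# newer than the saved cursor are appended, and the cursor advances to
# the newest ts seen, so the next sync skips already-synced records.
def incremental_append(records, destination, cursor):
    """Append records newer than `cursor`; return the updated cursor."""
    for record in sorted(records, key=lambda r: r["ts"]):
        if record["ts"] > cursor:
            destination.append(record)
            cursor = record["ts"]
    return cursor
```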
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose the Normalization and Transformation data format:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Data format&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Raw data (JSON)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Put all the source data in a single column.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Normalized tabular data&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Put the &lt;code&gt;ref&lt;/code&gt;, &lt;code&gt;ts&lt;/code&gt;, &lt;code&gt;ttl&lt;/code&gt;, and &lt;code&gt;data&lt;/code&gt; fields in separate columns.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
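The two formats can be illustrated with a single document. The field values and the raw column name below are made up for the example; only the general shape follows the table above.

```python
# Illustrative only: how one Fauna document might land in the
# destination under each normalization option.
doc = {"ref": "337", "ts": 1679430130000000, "ttl": None,
       "data": {"name": "Ada", "account_balance": 25}}

# Raw data (JSON): the whole document goes into a single column.
raw_row = {"_airbyte_data": doc}

# Normalized tabular data: ref, ts, ttl, and data become separate columns.
normalized_row = {"ref": doc["ref"], "ts": doc["ts"],
                  "ttl": doc["ttl"], "data": doc["data"]}
```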
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the &lt;strong&gt;+ Add transformation&lt;/strong&gt; button to add the dbt transform.&lt;/p&gt;

&lt;p&gt;To extract the fields in the source &lt;code&gt;data&lt;/code&gt; column, you need to configure dbt to map source data to destination database columns. For example, the following SQL-based query extracts the &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;account_balance&lt;/code&gt;, and &lt;code&gt;credit_card/expires&lt;/code&gt; fields from the source &lt;code&gt;data&lt;/code&gt; column to populate three separate columns of the destination data:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;select&lt;/span&gt;
    &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;
    &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;account_balance&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt;
    &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;credit_card&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;expires&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;cc_expires&lt;/span&gt;
  &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;airbyte_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="k"&gt;output&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Set up connection&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 6: Sync the data
&lt;/h2&gt;

&lt;p&gt;On the &lt;strong&gt;Connection&lt;/strong&gt; page for the connection you created, click the &lt;strong&gt;Sync now&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;The time a sync takes to run varies; its status is displayed in &lt;strong&gt;Sync History&lt;/strong&gt;. When the sync completes, the status changes from &lt;strong&gt;Running&lt;/strong&gt; to &lt;strong&gt;Succeeded&lt;/strong&gt; and shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The number of bytes transferred.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The number of records emitted and committed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The sync duration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 7: Verify the integration
&lt;/h2&gt;

&lt;p&gt;To expand the sync log, click the &lt;code&gt;&amp;gt;&lt;/code&gt; arrow to the right of the displayed time. This gives you a detailed view of the sync events.&lt;/p&gt;

&lt;p&gt;Finally, verify successful database transfer by opening and viewing the destination database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integrating Fauna with the Airbyte open source solution arms developers building on Fauna with a powerful tool for gaining insights into their operational data. We’re excited to build on our partnership with Airbyte by working towards introducing an Airbyte Cloud connector. If you have any interest in a Fauna + Airbyte Cloud integration or questions about the open source connector, feel free to reach out and ask questions in our &lt;a href="https://forums.fauna.com/"&gt;forum&lt;/a&gt; or on our &lt;a href="https://discord.gg/NHwJFdG2B2"&gt;Discord&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>fauna</category>
      <category>analytics</category>
      <category>tutorial</category>
      <category>database</category>
    </item>
    <item>
      <title>Edge computing reference architectures</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Thu, 04 Aug 2022 15:27:10 +0000</pubDate>
      <link>https://forem.com/fauna/edge-computing-reference-architectures-390n</link>
      <guid>https://forem.com/fauna/edge-computing-reference-architectures-390n</guid>
      <description>&lt;p&gt;Cloud computing has transformed how businesses conduct their work, but many face challenges regarding data protection (&lt;a href="https://permission.io/blog/data-residency/"&gt;residency&lt;/a&gt; and &lt;a href="https://permission.io/blog/data-sovereignty/"&gt;sovereignty&lt;/a&gt;), network latency and throughput, and industrial integration. On-premises infrastructure, especially when it’s tightly integrated with edge computing services provided by various cloud platforms such as &lt;a href="https://aws.amazon.com/lambda/edge/"&gt;AWS Lambda@Edge&lt;/a&gt;, &lt;a href="https://workers.cloudflare.com/"&gt;Cloudflare Workers&lt;/a&gt;, &lt;a href="https://www.fastly.com/products/edge-compute"&gt;Fastly Compute@Edge&lt;/a&gt;, etc., can be helpful in solving these problems. But that’s just part of the edge computing puzzle.&lt;/p&gt;

&lt;p&gt;Cloud data centers and on-premises infrastructures form the near edge of the network. The other part of the network can be considered the far edge. Many businesses need the &lt;a href="https://www.forbes.com/sites/forbestechcouncil/2019/11/07/bridging-the-last-mile-convergence-at-the-infrastructure-edge/"&gt;last mile coverage&lt;/a&gt; of the cloud to ensure that the cloud connects and integrates directly with controllers, sensors, and far-end devices. Currently, this workload is handled by specialized programmable logic controllers (PLCs) and human-machine interfaces (HMIs). These devices work great but depend on proprietary operating systems and are costly to acquire, maintain, and upgrade.&lt;/p&gt;

&lt;p&gt;The following table lists the different parts of an edge network and their usual distance from the devices on the far edge of the network, along with the network latency from applications to those devices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;The cloud (regional and zonal data centers)&lt;/strong&gt; — is 10+ km away from edge devices and has a latency of over 10 ms&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Near edge (on-prem data centers)&lt;/strong&gt; — is between 1 to 10 km away from the edge devices and has a latency from 1 to 10 ms&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Far edge L1 (gateways, controllers) &lt;/strong&gt;— is between 10 m to 1 km with a latency of somewhere between 0.1 ms and 1 ms&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Far edge L2 (sensors, devices)&lt;/strong&gt; — is in the 10 m range of the edge devices and has sub-millisecond latency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article will take you through some of the edge computing reference architectures proposed by different organizations, researchers, and technologists so that you have a better understanding of the solutions that edge computing offers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need edge computing?
&lt;/h2&gt;

&lt;p&gt;Content delivery networks are based on the idea that to provide a great user experience, you need to deliver the content as quickly as possible. You can only do that by physically bridging the network latency gap to place the content closer to your end users. &lt;a href="https://fauna.com/features"&gt;Fast-access storage&lt;/a&gt; solves this problem. Big companies in specific domains like content delivery and mobile networking already use the edge network for storage, if not for computing.&lt;/p&gt;

&lt;p&gt;But many businesses can’t solve their problems with delivery networks; they need computing power without the latency lag of the cloud. They would rather have something closer to the edge in addition to the cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Edge computing use cases
&lt;/h3&gt;

&lt;p&gt;Take vehicle autonomy, for instance. An average car without autonomy has over fifty sensors that help the vehicle’s critical and non-critical functions. Traditionally these sensors were lightweight, but with autonomous car cameras, LiDAR sensors, and others, there’s an &lt;a href="https://www.forbes.com/sites/forbestechcouncil/2021/09/27/edge-computings-applications-in-autonomous-driving-and-business-at-large/?sh=102143796aa8"&gt;ever-increasing demand for computing power on board&lt;/a&gt;. You can’t rely on APIs and the network for processing specific information in the operations of a critical machine like a car.&lt;/p&gt;

&lt;p&gt;Another example is &lt;a href="https://premioinc.com/blogs/blog/rugged-nvr-computers-podcast"&gt;surveillance&lt;/a&gt;. Businesses might use AI and ML applications to process a camera feed in real time. Offloading that amount of raw data to the cloud for processing might not be the wisest decision, as it will need tremendous network bandwidth and computing power to move data back and forth. To save time spent moving data around, you need more computing power at the source, and you can send the workloads that can wait to the cloud.&lt;/p&gt;

&lt;p&gt;The same principles apply to many specialized industries, such as aerospace, live entertainment, mass transportation, and manufacturing. This is why the edge is becoming more relevant by the day. Other applications of edge computing include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decoupling automation workloads from specialized hardware like PLCs and HMIs by using open source software like &lt;a href="https://www.freertos.org/openrtos.html"&gt;FreeRTOS&lt;/a&gt; or &lt;a href="https://www.highintegritysystems.com/openrtos/"&gt;OPENRTOS&lt;/a&gt; and running the workloads on containers&lt;/li&gt;
&lt;li&gt;Offloading custom data-intensive critical workloads to the near edge network&lt;/li&gt;
&lt;li&gt;Minimizing your maintenance and downtime with a redundant computing system for your business’s critical systems, which adds real value to your &lt;a href="https://www.ovhcloud.com/en-au/stories/backup-strategy/"&gt;business continuity planning (BCP) and disaster recovery planning (DRP) efforts&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Optimizing business processing with the help of faster response times from the computing power on the network’s edge&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;Interested in understanding edge computing beyond its use in IoT?&lt;/h3&gt;

&lt;p&gt;
Fauna can help you build faster, more performant edge applications. Since Fauna is distributed by default, you can reduce latency for your apps with a close-proximity database for your edge nodes. Easily integrate with Cloudflare Workers and Fastly Compute@Edge platforms out of the box.
&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://go.fauna.com/campaign-fauna-edge"&gt;On-demand webinar: Realize the full potential of your edge architecture with Fauna&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.fauna.com/fauna/current/build/integrations/cloudflare"&gt;Getting started with Fauna and Cloudflare Workers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.fauna.com/fauna/current/build/integrations/fastly"&gt;How to build an edge API gateway with Fastly's Compute@Edge and Fauna&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Edge computing reference architectures
&lt;/h2&gt;

&lt;p&gt;To properly design and implement edge computing networks, businesses rely on reference architectures. The following are some reasons why they’re an invaluable resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design considerations
&lt;/h3&gt;

&lt;p&gt;Businesses are still finding different ways to exploit the power of edge computing, and edge computing reference architectures help them understand systems that could potentially work for them. Not only are these reference architecture patterns well researched and thought out by industry experts, but they try to encapsulate the key features that apply to various businesses.&lt;/p&gt;

&lt;p&gt;You can use reference architectures for inspiration as well as innovate on top of them. There are several factors and design considerations that you should take into account while architecting an edge computing system for your business:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time sensitivity:&lt;/strong&gt; With edge computing, critical business functions don’t have to wait for workloads to be offloaded to the cloud for processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network bandwidth:&lt;/strong&gt; Bandwidth is limited; you don’t want to choke the network by blocking other important operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and privacy:&lt;/strong&gt; Data sovereignty and data residency, both for security and privacy, can be enabled by the edge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational continuity:&lt;/strong&gt; During network disruptions, you can offload the extremely critical operations of your business to the edge&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Implementation outcomes
&lt;/h3&gt;

&lt;p&gt;The outcomes and benefits of architecting edge computing systems align strongly with the abovementioned factors. With a successfully deployed edge computing system, your business gains the following core benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Significantly lower latency and higher network bandwidth by utilizing the proximity from the request origin to the computing system&lt;/li&gt;
&lt;li&gt;Effective network use by filtering data and reducing further transfer to on-premises or cloud infrastructure&lt;/li&gt;
&lt;li&gt;Better enforcement of security and privacy standards, enabling data sovereignty and data residency&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Development and deployment plans
&lt;/h3&gt;

&lt;p&gt;When developing for the edge, businesses need to ensure that they don’t end up creating the same dependencies and bottlenecks they experienced when using only on-premises data centers or the cloud.&lt;/p&gt;

&lt;p&gt;An excellent way to do this is to research, identify, and pick the right open standards and technologies to help you get the architecture you want. Possible open standards include those for APIs, documentation, virtualization patterns, deployment pipelines, and code promotion strategies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge computing reference architectures that use Fauna
&lt;/h2&gt;

&lt;p&gt;There are many reference architectures, some of which heavily borrow from one another. However, each has a unique approach and feature set. &lt;a href="https://fauna.com/"&gt;Fauna&lt;/a&gt;, as a distributed general-purpose database, interacts well with all of them in different ways.&lt;/p&gt;

&lt;p&gt;The following are details on four reference architectures, their relevance and innovation, and how to adopt them with Fauna.&lt;/p&gt;

&lt;h3&gt;
  
  
  Industrial internet reference architecture
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://www.iiconsortium.org/pdf/IIRA-v1.9.pdf"&gt;Industrial Internet Reference Architecture&lt;/a&gt; (IIRA) was first developed in 2015 by the &lt;a href="https://www.iiconsortium.org/"&gt;Industry IoT Consortium&lt;/a&gt; (IIC). This was a collaborative effort from a range of companies, including Bosch and Boeing in manufacturing; Samsung and Huawei in consumer electronics; and IBM, Intel, Microsoft, SAP, Oracle, and Cisco in the software industry.&lt;/p&gt;

&lt;p&gt;Think of IIRA as having four different viewpoints: business, functional, usage, and implementation. The system is divided into enterprise, platform, and edge. You have low-level devices in factories or appliances at the edge, connected with the platform tier using low-level APIs. Data collected from these devices flows to data and analytics services and operations users.&lt;/p&gt;

&lt;p&gt;The data generated by the edge, along with the data and insights gathered and processed by the platform tier, is consumed by the enterprise tier on two levels: the business domain and app domain. The data in the business domain lands in other systems, such as ERP, and the data in the app domain flows further based on business logic and rules to the business users in the form of mobile apps, browser-based frontend applications, or business intelligence tools.&lt;/p&gt;

&lt;p&gt;The following diagram shows the three-tiered approach to IIoT system architecture using IIRA:&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/3X6nUX1YmYwmlnFVY93qr5/9f87a79ce9b1b3d2fb4640c92d213af6/Three-tier_IIOT_System_Architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/3X6nUX1YmYwmlnFVY93qr5/9f87a79ce9b1b3d2fb4640c92d213af6/Three-tier_IIOT_System_Architecture.png" alt="Three-tier IIoT system architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fauna’s &lt;a href="https://fauna.com/platform#distributed-compute-storage"&gt;distributed architecture&lt;/a&gt; allows it to sit across multiple layers of the network while simultaneously guaranteeing you the privileges of a &lt;a href="https://fauna.com/platform#compute-engine"&gt;flexible document-oriented database and ACID transactions&lt;/a&gt;. The data and analytics services and platforms can heavily interact with Fauna, and the operations services and platform will do the same. In this context, Fauna sits mainly on the platform tier but interacts with the edge tier, depending on whether the devices are integrated with the platform tier using a virtualization layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  FAR-EDGE reference architecture
&lt;/h3&gt;

&lt;p&gt;Aside from the previously mentioned features of reference architectures, the &lt;a href="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C3.pdf"&gt;FAR-EDGE Reference Architecture&lt;/a&gt; brings &lt;a href="https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures"&gt;many new things&lt;/a&gt; to the table. It explicitly offers a separate logical and ledger layer that handles processes and rules across the distributed computing system by using the sheer power and spread of the system. Like most edge computing reference architectures, FAR-EDGE also concentrates on saving bandwidth and storage, enabling ultra-low latency, proximity processing, and enhanced scalability by exploiting the distributed nature of the edge-cloud hybrid architecture.&lt;/p&gt;

&lt;p&gt;Compare the idea of FAR-EDGE with some of the Web3 architectures built on top of distributed frameworks like the blockchain. Fauna, with its &lt;a href="https://fauna.com/blog/how-to-build-an-edge-api-gateway-with-fastlys-compute-edge-and-fauna"&gt;capability to provide edge functions&lt;/a&gt; as an extension to the application infrastructure, can play a central role by servicing various layers and functions proposed in this reference architecture. Consider the functional view of the FAR-EDGE reference architecture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/6ey7VZmZOWSMiaQeUzW2ZI/573d66d8a86c24952bf01a44ecf637cc/umnZfIY.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/6ey7VZmZOWSMiaQeUzW2ZI/573d66d8a86c24952bf01a44ecf637cc/umnZfIY.png" alt="A functional view of the FAR-EDGE reference architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s worth reiterating that Fauna is a distributed document-relational database, which is ideal for businesses with a variety of software applications and hardware devices. In this functional view, Fauna will sit and interact not only with applications and cloud services but also with ledger and edge processes at the ledger and gateway levels. You can enable this by using well-documented APIs and SDKs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Edge computing reference architecture 2.0
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="http://en.ecconsortium.org/"&gt;Edge Computing Consortium&lt;/a&gt; (ECC) proposed a model-driven reference architecture that attempts to solve the problems businesses face when they try to connect the digital and the physical worlds. Since the reference architecture is multidimensional, you can regard it from different viewpoints. Consider one of its layers, the Edge Computing Node layer, from a functional viewpoint.&lt;/p&gt;

&lt;p&gt;The &lt;a href="http://en.ecconsortium.net/Uploads/file/20180328/1522232376480704.pdf"&gt;Edge Computing Reference Architecture 2.0&lt;/a&gt; is a multidimensional architecture composed of the following components from a high-level view: smart services, a service fabric, a &lt;a href="https://www.ericsson.com/en/reports-and-papers/ericsson-technology-review/articles/network-compute-fabric"&gt;Connectivity and Computing Fabric&lt;/a&gt; (CCF), and Edge Computing Nodes (ECNs).&lt;/p&gt;

&lt;p&gt;As mentioned earlier, edge computing thrives when open standards are followed and hardware devices are decoupled from specifically designed hardware. This problem has long been solved for specific web and desktop applications via virtualization. The Edge Computing Reference Architecture also suggests you create an &lt;a href="https://blog.stratus.com/5-benefits-virtualization-at-the-edge/"&gt;Edge Virtualization Function (EVF) layer&lt;/a&gt; that handles connectivity and &lt;a href="https://fauna.com/features#event-streaming"&gt;data streaming and collection&lt;/a&gt;, along with access policies and security.&lt;/p&gt;

&lt;p&gt;The following diagram of the Edge Computing Reference Architecture 2.0 shows the functional view of an Edge Computing Node (ECN):&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/2Kc43cmfWF5C2hh6Uejkdt/9a3b56d31f2778b708f4e463730e808f/Edge_computing_node.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/2Kc43cmfWF5C2hh6Uejkdt/9a3b56d31f2778b708f4e463730e808f/Edge_computing_node.png" alt="Edge computing node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fauna is more than capable of taking care of the universal services, such as streaming and time series data, along with any industry-oriented services that might need integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Edge computing reference architectures can vastly simplify the design and usage of edge computing systems, and &lt;a href="https://fauna.com/"&gt;Fauna&lt;/a&gt; can help at different levels in all these architectures. Fauna is a natively serverless, &lt;a href="https://fauna.com/features#document-relational"&gt;distributed transactional&lt;/a&gt; database that allows for flexible schemas and comes with a well-designed data API for modern applications.&lt;/p&gt;

&lt;p&gt;With Fauna in the cloud and at the edge, you can store and retrieve your data at ultra-low latencies while being closer to the source than ever before. With built-in &lt;a href="https://fauna.com/features#modern-security-model"&gt;data security, authentication, authorization, and attribute-based access control&lt;/a&gt;, Fauna simplifies the implementation of edge computing reference architectures and lets you concentrate more on your business.&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>iot</category>
    </item>
    <item>
      <title>Modernization of the database: DynamoDB to Fauna</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Mon, 25 Jul 2022 14:16:10 +0000</pubDate>
      <link>https://forem.com/fauna/modernization-of-the-database-dynamodb-to-fauna-5bc1</link>
      <guid>https://forem.com/fauna/modernization-of-the-database-dynamodb-to-fauna-5bc1</guid>
<description>&lt;p&gt;Serverless databases like &lt;a href="https://fauna.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;Fauna&lt;/a&gt; fill a crucial role for modern applications. Given the amount of cloud-based traffic and varying workloads that organizations from startups to enterprises manage, serverless databases are the natural fit for the rest of your serverless architecture, adding seamless flexibility, scaling, and low latency for your applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/dynamodb/"&gt;DynamoDB&lt;/a&gt; has been one of the most popular serverless databases. In addition to coming from AWS, it has benefited from being one of the earliest offerings in this space. Its popularity is hard to dispute, backed up by its proven ability to effectively handle traffic spikes and workload variations without overloading infrastructure or racking up unnecessary costs during periods of low traffic. Though DynamoDB has much to offer, Fauna is a much newer offering that comes with a host of powerful and unique features that enhance the serverless experience for organizations of all sizes.&lt;/p&gt;

&lt;p&gt;In this article, we’ll learn about serverless databases, compare key differences between DynamoDB and Fauna, and provide insight on which one to choose for your next big project.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes a good serverless database?
&lt;/h2&gt;

&lt;p&gt;A great serverless database should offer the following values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lower operation costs:&lt;/strong&gt; One of the main reasons for choosing software is cost. Traditionally, organizations need to budget for the continuous management of their database(s) and infrastructure, not just for application development. Serverless architectures promise “zero ops,” which eliminates these management concerns. Since headcount is typically your highest spend, trimming operations overhead ultimately lowers your total cost of ownership (TCO).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pay as you go:&lt;/strong&gt; It should also offer a simple and transparent pricing model without any traps or unexpected costs. You should only have to pay for the resources you use, with billing being proportional to the amount of data stored and volume of transactions sent to the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High availability:&lt;/strong&gt; There should be no maintenance windows or unplanned downtimes. Data and compute resources should be automatically distributed and replicated, offering high durability and resiliency to region/zone outages. The more these attributes are provided to the operator without any additional configuration, the better.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low latency:&lt;/strong&gt; Ensuring a good user experience is key to user engagement, ultimately impacting how successful your products are with customers. Serverless architectures should not trade the ability to scale elastically for cold-start delays or increased latency due to speed-of-light constraints. A great serverless database should be always-on, scale instantly, and provide multiple region choices so that it can be accessed as close to your compute resources as possible.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why should you choose Fauna?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://fauna.com/blog/intro-to-serverless-databases?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;Serverless databases&lt;/a&gt; are critical components for managing unpredictable and rapidly changing workloads, essentially following a pay-as-you-use model. They are also a good fit for companies with a small workforce, enabling them to consume compute workloads and infrastructure without manual overhead. Using a serverless database allows you to simplify database operations and eliminate problems common with traditional databases, such as maintenance, upgrades and patching, and cost of operations.&lt;/p&gt;

&lt;p&gt;Fauna is accessed as an API, and there is nothing to install, maintain, or operate. You can deploy a database in three button clicks, start coding, and immediately connect to it. You can create an unlimited number of databases and scale them without limitation. Fauna is chock full of cloud-native features: you can log in with your GitHub account, integrate with third-party services like Auth0, Netlify, and Vercel, and use built-in streaming, user authentication, and fine-grained authorization. It supports user-defined functions – similar to stored procedures – which help eliminate redundant code and keep your Fauna Query Language (FQL) logic consistent. And it has a native GraphQL API, allowing organizations adopting GraphQL to get up and running with a data source in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fauna vs. DynamoDB
&lt;/h2&gt;

&lt;p&gt;Fauna offers numerous features that distinguish it from DynamoDB and similar serverless databases. The following is a comparison between Fauna and DynamoDB and the components that each offers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Geo-distribution
&lt;/h3&gt;

&lt;p&gt;Businesses with global audiences need to ensure fast response and high performance for customers in multiple regions across the globe. This requires a database that allows data to be replicated globally and served from specified locations — the closer the proximity of the data, the better the experience and performance.&lt;/p&gt;

&lt;p&gt;By default, DynamoDB is replicated across multiple availability zones in a single region. With the “global tables” feature, however, DynamoDB provides a fully managed solution for deploying a multi-region, multi-active database. You specify the regions you want the database replicated to, and DynamoDB propagates all data changes to them. Besides reducing read latency, multi-region replication ensures that single-region failures do not result in outages.&lt;/p&gt;

&lt;p&gt;Using global tables comes with quite a few caveats. Your application can read and write to any replica table, but if it requires strongly consistent reads, it must perform all of its strongly consistent reads and writes against the same region. AWS documentation states that any newly written item is propagated to all replica tables within a second — not an insignificant latency — and that ACID guarantees only apply within the AWS Region where the write was originally made. When an AWS region suffers an outage or degradation, your application can redirect to a healthy region, but this isn’t automatic: AWS documentation prompts developers to apply custom business logic to determine when to redirect requests to other regions. Global tables are also more costly to operate, since your costs effectively multiply with each replica you add. Finally, you must use either the higher-priced on-demand capacity mode or provisioned capacity with auto-scaling.&lt;/p&gt;

&lt;p&gt;Unlike DynamoDB, Fauna is multi-region-distributed by default — every database you create is replicated and distributed across geographic regions. In addition, Fauna provides &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/region_groups?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;Region Groups&lt;/a&gt;, which allow developers to control which regions their data resides in. Other than selecting the Region Group, there is no additional configuration and no custom business logic to implement. When your application makes a request to the Fauna API, it is automatically routed to the closest region. Reads are immediately served out of that region, and writes are automatically propagated to the other regions’ replicas. Fauna publicly publishes its &lt;a href="https://fauna.com/blog/real-world-database-latency?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;“internal latency”&lt;/a&gt; – separated into reads and writes – on &lt;a href="https://status.fauna.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;status.fauna.com&lt;/a&gt;. There is no separate pricing for single- vs. multi-region, since your databases are always multi-region – giving you straightforward, transparent pricing.&lt;/p&gt;
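&lt;p&gt;To make the routing model above concrete, here is a toy JavaScript sketch of latency-based routing: reads go to the lowest-latency replica, while writes fan out to every replica. The region names and latencies are illustrative placeholders, not Fauna’s actual topology or driver API.&lt;/p&gt;

```javascript
// Toy model of latency-based request routing: each replica region has a
// measured round-trip latency from the client; reads are served from the
// nearest replica, while writes propagate to every replica in the group.
const replicas = [
  { region: "us-east", rttMs: 12 }, // illustrative latencies
  { region: "eu-west", rttMs: 88 },
  { region: "ap-south", rttMs: 190 },
];

function nearestReplica(list) {
  // Pick the replica with the lowest measured round-trip time.
  return list.reduce((best, r) => (r.rttMs < best.rttMs ? r : best));
}

function routeRequest(op, list) {
  if (op.type === "read") {
    return [nearestReplica(list).region]; // reads served locally
  }
  return list.map((r) => r.region); // writes propagate everywhere
}

console.log(routeRequest({ type: "read" }, replicas));  // nearest region only
console.log(routeRequest({ type: "write" }, replicas)); // all regions
```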

&lt;h3&gt;
  
  
  Transactionality
&lt;/h3&gt;

&lt;p&gt;While you want to ensure the real-time availability of data, your data needs to be consistent and accurate. Not all serverless databases can provide this.&lt;/p&gt;

&lt;p&gt;DynamoDB, for instance, offers serializable read and write transactions but is only ACID-compliant within the region where the transaction occurs. This is a problem for multi-region deployments, because it can lead to errors during transactions that run simultaneously in different regions.&lt;/p&gt;

&lt;p&gt;One of Fauna’s most touted innovations is its approach to isolation and distribution, which is derived from &lt;a href="https://fauna.com/blog/consistency-without-clocks-faunadb-transaction-protocol?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;Calvin&lt;/a&gt;, an algorithm for achieving distributed consistency at scale. As an oversimplified explanation: queries are processed before they interact with the storage system, and deterministic ordering – eschewing the need for expensive atomic clocks – provides serializability, ensuring that no two concurrent transactions can commit conflicting changes to the same data. Through its Calvin implementation, Fauna is one of the few serverless databases that offers &lt;a href="https://fauna.com/blog/faunadbs-official-jepsen-results?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;ACID&lt;/a&gt; guarantees while being globally distributed. Transactions are globally distributed, ACID-compliant, and serializable, with no additional configuration or separate pricing scheme.&lt;/p&gt;
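&lt;p&gt;A minimal sketch of the deterministic-ordering idea behind Calvin (heavily simplified, not Fauna’s implementation): once transactions are placed in a globally agreed log, every replica applies them in the same order and converges on the same state, with no clock-based conflict resolution needed.&lt;/p&gt;

```javascript
// Calvin-style determinism in miniature: transactions are first appended to a
// globally agreed log; every replica then applies the log in that fixed order.
function applyLog(log) {
  // Apply transactions strictly in log order; the last write to a key wins.
  const store = {};
  for (const txn of log) {
    for (const { key, value } of txn.ops) store[key] = value;
  }
  return store;
}

// Two concurrent writes to the same key: position in the log decides the
// winner, identically on every replica.
const log = [
  { id: 1, ops: [{ key: "balance", value: 100 }] },
  { id: 2, ops: [{ key: "balance", value: 75 }] },
];

const replicaA = applyLog(log);
const replicaB = applyLog(log);
// Both replicas agree on balance = 75, because txn 2 is ordered after txn 1.
```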

&lt;h3&gt;
  
  
  Streaming
&lt;/h3&gt;

&lt;p&gt;Real-time applications require an endpoint that enables persistent connectivity, allowing the server to push information to the client requesting it. &lt;/p&gt;

&lt;p&gt;With DynamoDB, you’ll need to combine several services in order to implement streaming. The first part is setting up a DynamoDB Stream: DynamoDB integrates with AWS Lambda, where you set up triggers that respond to changes (DynamoDB Stream events) in your table(s). Thus, there’s a bit of configuration, and some coding involved in writing the Lambda. From there, you need to implement the persistent endpoint. AWS can accomplish this in a number of ways: an EC2 instance, an API Gateway WebSocket API, or AppSync (via GraphQL subscriptions). Regardless, these are additional components that need to be implemented and/or configured and maintained, on top of the costs involved in running the additional services.&lt;/p&gt;

&lt;p&gt;In contrast, Fauna’s API endpoints support streaming right out of the box. To consume a stream, all you need is a few lines of code on the client side of your application, using a driver in your language of choice. For example, here are the instructions for instantiating a stream client in &lt;a href="https://docs.fauna.com/fauna/current/drivers/javascript?lang=shell#streaming?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;JavaScript&lt;/a&gt;. Fauna offers two kinds of streaming: document streaming and set streaming. With document streaming, an event notification is sent to the subscriber whenever a document is created, updated, or deleted. With set streaming, events are sent whenever a create, update, or delete causes one or more documents to enter or exit the set.&lt;/p&gt;
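&lt;p&gt;The difference between the two streaming styles can be illustrated with a small in-memory sketch. The &lt;code&gt;StreamHub&lt;/code&gt; class and its method names are invented for illustration only; they are not the Fauna driver API.&lt;/p&gt;

```javascript
// Minimal in-memory sketch of the two streaming styles: a document stream
// fires on changes to one specific document, while a set stream fires when
// a change causes a document to match (enter) the subscribed set.
class StreamHub {
  constructor() { this.subs = []; }
  onDocument(id, cb) { this.subs.push({ kind: "doc", id, cb }); }        // document stream
  onSet(predicate, cb) { this.subs.push({ kind: "set", predicate, cb }); } // set stream
  publish(event) {
    // event: { id, action, doc }
    for (const s of this.subs) {
      if (s.kind === "doc" && s.id === event.id) s.cb(event);
      if (s.kind === "set" && s.predicate(event.doc)) s.cb(event);
    }
  }
}

const hub = new StreamHub();
const seen = [];
hub.onDocument("order-1", (e) => seen.push("doc:" + e.action));
hub.onSet((d) => d.status === "shipped", (e) => seen.push("set:" + e.id));

// One update triggers both subscriptions: the document changed, and it
// entered the "shipped" set.
hub.publish({ id: "order-1", action: "update", doc: { status: "shipped" } });
```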

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;Because they are accessed as APIs, great serverless databases must provide robust security models, so that developers do not have to build their own authentication and authorization. &lt;/p&gt;

&lt;p&gt;Access to DynamoDB is based on AWS’s highly robust Identity and Access Management (IAM), in which granular permissions are applicable to DynamoDB resources (tables, indexes, and streams). You can also specify IAM policies that grant specific users access to perform specific actions (e.g., read, add, update, batch update) on specific resources.&lt;/p&gt;

&lt;p&gt;Fauna supports both API keys and identity-based access tokens, and integrates with external identity providers that support the OAuth flow (such as Auth0, Okta, and OneLogin). Keys and tokens inherit the permissions of the roles to which they’re assigned. Roles are configured with granular permissions for every resource in the database, including collections (tables), indexes, user-defined functions, other roles, and schema. Fauna also provides attribute-based access control (ABAC), allowing permissions to be assigned dynamically based on the attributes of a token’s underlying identity. With ABAC, you can define custom business logic, creating dynamic rules that control access to all resources, all the way down to specific documents in a collection.&lt;/p&gt;
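&lt;p&gt;As a sketch of the ABAC idea, the snippet below models a role predicate that runs against the caller’s identity attributes and the target document. The rule shape and names are illustrative, not Fauna’s actual role syntax.&lt;/p&gt;

```javascript
// ABAC in miniature: a role's predicate inspects the caller's identity
// attributes and the target document, and access is granted only if the
// predicate returns true.
const canReadInvoice = (identity, doc) =>
  identity.department === "finance" || doc.ownerId === identity.id;

function authorize(identity, doc, predicate) {
  return predicate(identity, doc) ? "allow" : "deny";
}

const alice = { id: "u1", department: "finance" };
const bob = { id: "u2", department: "sales" };
const invoice = { ownerId: "u3", amount: 1200 };

console.log(authorize(alice, invoice, canReadInvoice)); // finance dept: allowed
console.log(authorize(bob, invoice, canReadInvoice));   // neither owner nor finance: denied
```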

&lt;h3&gt;
  
  
  Automated functionalities
&lt;/h3&gt;

&lt;p&gt;In DynamoDB, a host of configuration options let you choose among capacity modes, single- vs. multi-region deployment, index types and index-partitioning strategies, logging, and more. The tradeoff for this flexibility is more manual work and time.&lt;/p&gt;

&lt;p&gt;Fauna, however, automates multiple functions for you, such as database provisioning, maintenance, sharding, scaling, and replication. The “zero ops” model of Fauna lets you focus on building the critical aspects of your applications without worrying about the complexity of a distributed architecture. This is especially useful if your business needs to carefully manage limited engineering resources while dealing with unpredictable workloads. &lt;/p&gt;

&lt;p&gt;DynamoDB includes an automated backup feature. It charges for backups separately, and you need to select between on-demand and continuous backups. Backups are optional and configured per table.&lt;/p&gt;

&lt;p&gt;In Fauna, you can schedule daily backups of any database in the form of snapshots of the entire data tree, and you can configure a retention period for them. Databases can be “restored” (overwritten) in place from any backup, or you can use backups to seed new databases. Fauna also supports temporality, allowing you to go back through history and query data at any arbitrary point in time. This is possible because Fauna stores data as snapshots across time. You can use this feature to implement point-in-time recovery and targeted data repairs in your database.&lt;/p&gt;
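&lt;p&gt;The snapshot-based temporality model can be sketched in a few lines of JavaScript: every write stores a new timestamped snapshot, and reading “at” a time returns the latest snapshot at or before it. This mirrors the point-in-time idea only; it is not Fauna’s query API.&lt;/p&gt;

```javascript
function writeVersion(history, ts, data) {
  // Each write appends an immutable, timestamped snapshot.
  return [...history, { ts, data }].sort((a, b) => a.ts - b.ts);
}

function readAt(history, ts) {
  // Reading "at" a time returns the latest snapshot at or before it.
  const visible = history.filter((v) => v.ts <= ts);
  return visible.length ? visible[visible.length - 1].data : null;
}

let doc = [];
doc = writeVersion(doc, 100, { plan: "free" });
doc = writeVersion(doc, 200, { plan: "pro" });

console.log(readAt(doc, 150)); // the state before the upgrade: { plan: "free" }
console.log(readAt(doc, 250)); // the current state: { plan: "pro" }
```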

&lt;h3&gt;
  
  
  GraphQL API support
&lt;/h3&gt;

&lt;p&gt;GraphQL is designed to make API development faster and more flexible for software engineers. With GraphQL, API developers define a schema that describes all the data clients can query through a single endpoint. Unlike most other serverless databases, such as DynamoDB and Upstash, Fauna provides a native GraphQL API for data access in addition to its query language, FQL.&lt;/p&gt;

&lt;p&gt;If you are looking to launch your product in less time and expect a lot of changes on your API endpoint, then using a serverless database that supports GraphQL APIs will be the better choice for you.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Check out our GraphQL workshop that introduces you to Fauna. In two hours, you’ll build an application in either Next.js or SvelteKit that has authentication, user-defined functions, custom resolvers, and more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Scalability and pricing
&lt;/h3&gt;

&lt;p&gt;With some serverless databases like DynamoDB, automatic scaling is an option but isn’t enabled by default. DynamoDB offers three capacity modes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The default – called provisioned capacity –&lt;/strong&gt; lets you set the volume and scale that you need up front. You then either manually monitor and adjust these parameters or get automatically throttled once load hits your predefined capacities. This mode is ideal when you have steady, predictable loads and need to stay within a narrow budget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provisioned capacity with auto-scaling&lt;/strong&gt; is the same as provisioned capacity but also dynamically adjusts throughput capacity based on actual traffic, allowing your application to handle sudden bursts without being throttled. Pricing is a factor of the baseline capacity you’ve configured and the actual load (above that baseline) that your application experiences. It is important to note that because provisioned capacity (with or without auto-scaling) involves setting a baseline desired capacity, you are always paying for that capacity, even when your traffic is zero.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-demand capacity&lt;/strong&gt; is priced significantly higher (on a per-unit read/write/etc. basis) than provisioned capacity, but is truly serverless and will scale to zero (where you pay nothing) if no traffic is observed or as high as needed when bursts are experienced.&lt;/p&gt;
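&lt;p&gt;The practical difference between the two billing shapes is easy to see with back-of-the-envelope arithmetic. The prices below are illustrative placeholders, not current AWS or Fauna list prices; the point is the shape of the curves: provisioned bills for idle capacity, while on-demand scales to zero.&lt;/p&gt;

```javascript
function provisionedCost(hours, unitsProvisioned, pricePerUnitHour) {
  // The provisioned baseline is billed whether or not traffic arrives.
  return hours * unitsProvisioned * pricePerUnitHour;
}

function onDemandCost(requests, pricePerRequest) {
  // Purely per-request billing: zero traffic costs nothing.
  return requests * pricePerRequest;
}

// A week (168 hours) of zero traffic, with illustrative placeholder prices:
// provisioned still bills the baseline, on-demand bills nothing.
const idleWeekProvisioned = provisionedCost(168, 100, 0.0005);
const idleWeekOnDemand = onDemandCost(0, 0.000002);

console.log(idleWeekProvisioned > 0); // true: paying for idle capacity
console.log(idleWeekOnDemand);        // 0: scaled to zero
```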

&lt;p&gt;The one and only mode in Fauna is on-demand. It autoscales without any intervention, and you don’t need to manage throughput settings or monitor the database to keep it from being saturated. As such, pricing is much more straightforward: it is only a factor of how much data you store, how many transactions occurred, and how large (complex) the transactions/queries are. Comparing pricing between Fauna and DynamoDB depends heavily on context and use case, especially how you plan to configure DynamoDB given its flexibility. We cover these topics in &lt;a href="https://fauna.com/blog/comparing-fauna-and-dynamodb-pricing-features?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;“Comparing Fauna and DynamoDB: Pricing and features.”&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Serverless databases offer numerous benefits, including lower costs, less operational workload, faster time to market, and high scalability. When choosing a serverless database, be sure to select one that offers you maximum consistency, high performance, and low latency.&lt;/p&gt;

&lt;p&gt;While there are many serverless databases to choose from, &lt;a href="https://fauna.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;Fauna&lt;/a&gt; offers the largest range of features for your organization’s needs. Automated scaling, multi-region ACID-compliant transactions, and GraphQL support will help you optimize your workload and improve your product for users. Your developers can focus on improving your applications and core services while worrying less about managing the database or infrastructure. This leads to a shorter development time and faster application delivery.&lt;/p&gt;

&lt;p&gt;Interested in learning more about Fauna? &lt;a href="https://go.fauna.com/contact-us?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;Reach out to our team&lt;/a&gt; or &lt;a href="https://dashboard.fauna.com/accounts/register?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=fauna-vs-dynamo"&gt;sign up&lt;/a&gt; and start using Fauna for free.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>fauna</category>
      <category>aws</category>
      <category>database</category>
    </item>
    <item>
      <title>Multi-region scaling with Fauna</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Thu, 21 Jul 2022 18:30:06 +0000</pubDate>
      <link>https://forem.com/fauna/multi-region-scaling-with-fauna-35k</link>
      <guid>https://forem.com/fauna/multi-region-scaling-with-fauna-35k</guid>
      <description>&lt;p&gt;Because the digital era has grown so competitive — providing users with everything from banking systems to online food delivery — it’s critical that applications never go down for any reason. In fact, users expect that the applications they access will remain available and highly responsive with zero downtime from any part of the world, 24/7.&lt;/p&gt;

&lt;p&gt;To provide highly available and fault-tolerant services, platforms must design their architecture to ensure that their applications, and especially their database systems, are always available. They achieve this by replicating apps and databases in geographically isolated parts of the world. With this redundancy, applications remain highly available in different locations; if a data center suffers an outage, the data isn’t lost and the application can keep serving end users from another location. This is known as multi-region scaling.&lt;/p&gt;

&lt;p&gt;Multi-region scaling is important because even an entire region can experience downtime. For social media, e-commerce, and other business-critical applications, this downtime can result in revenue losses of millions of dollars. Using a multi-region architecture ensures that applications will keep serving users from different regions.&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn more about multi-region scaling, including why it’s so vital and what key issues it solves. You’ll also learn how &lt;a href="https://fauna.com/"&gt;Fauna&lt;/a&gt; can help solve these issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need multi-region scaling?
&lt;/h2&gt;

&lt;p&gt;Multi-region scaling involves the use of multiple regions around the world. A region is a collection of data centers, or availability zones, that are connected via high-speed network infrastructure but are physically isolated from one another, usually in different cities within a country. An application deployed in multiple availability zones within a region is considered highly available. Cloud providers have regions available worldwide, allowing an application to be deployed into more than one region.&lt;/p&gt;

&lt;p&gt;This type of scaling becomes necessary as businesses grow to serve millions of users. To keep up with such growth, the underlying tech infrastructure must be designed to be resilient and fault-tolerant while providing a global footprint. Enterprises use multi-region scaling to provide a seamless experience to users by reducing latency as well as ensuring there is a disaster recovery mechanism in place.&lt;/p&gt;

&lt;p&gt;For a global application, the database system, and not just the application backend, needs to be available in a global context, because a global application with a central database system still has a single point of failure. If the database goes down, the application is of no use to end users. A multi-region setup not only ensures application availability but also ensures that data is secure and not lost.&lt;/p&gt;

&lt;h2&gt;
  
  
  How multi-region scaling works
&lt;/h2&gt;

&lt;p&gt;As noted earlier, in multi-region scaling, the infrastructure is deployed into multiple regions. If one region goes down, the service or application remains accessible from a different region, though users in other regions may experience increased latency.&lt;/p&gt;

&lt;p&gt;A multi-region setup involves the following elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use of CDNs:&lt;/strong&gt; Static content like images, videos, and documents are distributed and cached to a content delivery network (CDN) such as Amazon CloudFront and made local within regions serving users from the closest caching server, instead of bringing the data directly from the source for each request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Active/active distributed databases:&lt;/strong&gt; Distributed databases are usually deployed in an active/passive fashion, in which the primary instance serves both read and write requests while the secondary instances are only available for reads. The active/active deployment strategy allows all database instances to serve both read and write requests. The data is asynchronously replicated, so there may be replica lag: under eventual consistency, changes take time to propagate to all instances of the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stateless architecture:&lt;/strong&gt; Multi-region scaling encourages stateless architectures for applications, meaning they should serve all users with the same responses independent of prior requests and not store or use any local session information. This allows for great horizontal scalability since any available resource can handle the inbound requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use of DNS routing mechanisms:&lt;/strong&gt; With a multi-region architecture, the DNS routing doesn’t just direct traffic to the primary instance but also to an instance that provides the least latency for users. For example, AWS routes traffic based on the closest location or proximity to resources.&lt;/li&gt;
&lt;/ul&gt;
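&lt;p&gt;The replica-lag behavior described above can be sketched with a toy active/active model: a write lands in the local region at once, while replication to peer regions is deferred, creating a brief window in which remote reads see stale data. The class and function names are invented for illustration.&lt;/p&gt;

```javascript
// Toy model of active/active replication with replica lag.
class Region {
  constructor(name) { this.name = name; this.data = {}; }
}

function write(origin, peers, key, value) {
  origin.data[key] = value; // applied locally right away
  // Replication to peers is deferred; until it runs, reads there are stale.
  return peers.map((p) => () => { p.data[key] = value; });
}

const us = new Region("us");
const eu = new Region("eu");
const pendingReplication = write(us, [eu], "cart", ["book"]);

const staleRead = eu.data.cart;                 // undefined: the replica lag window
pendingReplication.forEach((apply) => apply()); // replication catches up
// Now both regions agree on the value of "cart" (eventual consistency).
```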

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/qLZ1FMKwfjhL6O0zSlJWk/0cfe63067ea41966b73f3e57146452b1/A_multi-region_application.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/qLZ1FMKwfjhL6O0zSlJWk/0cfe63067ea41966b73f3e57146452b1/A_multi-region_application.png" alt="A multi-region application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is multi-region scaling important?
&lt;/h2&gt;

&lt;p&gt;There are several advantages to using multi-region scaling.&lt;/p&gt;

&lt;h3&gt;
  
  
  Catering to a wider audience
&lt;/h3&gt;

&lt;p&gt;Digital businesses increasingly serve users from around the globe, which can present a challenge in providing a consistent experience. To serve users in multiple countries and continents, applications must be accessible from all parts of the world. The redundant system architectures of multi-region scaling make this possible, allowing businesses to launch into new markets and continue to grow their user base.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lowering data latency for global users
&lt;/h3&gt;

&lt;p&gt;When organizations serve a global audience, the latency that users experience can affect how they view the service. Applications must have low latency to provide an instantaneous experience to users. Multi-region architectures make this possible.&lt;/p&gt;

&lt;p&gt;A low-latency multi-region architecture must include these factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bringing the content close to end users:&lt;/strong&gt; The goal is to serve static content to end users from servers closest to them. CDNs are the key to achieving this. Content is replicated and cached in each region, and users are served locally from the closest point within their region.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploying components into multiple regions:&lt;/strong&gt; Application backends and databases are deployed into multiple regions in order to serve users right from the regions where they live.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Increasing fault tolerance and aiding disaster recovery
&lt;/h3&gt;

&lt;p&gt;Multi-region scaling at its core is about designing a system in a redundant fashion. A redundant architecture is inherently resistant to failures because two identical infrastructures are running at the same time. Even if one part of the architecture fails, the other remains available with an intelligent routing strategy that directs requests from users to their closest resources.&lt;/p&gt;

&lt;p&gt;A multi-region design also allows for disaster recovery, because when the application goes down in one region, it can be restored to its original state in one of the other regions. Since data is already being replicated across regions, once a stable infrastructure is deployed the data and all other content can easily be restored from backups.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ensuring compliance with privacy laws and regulations
&lt;/h3&gt;

&lt;p&gt;Laws and regulations have been enacted in various regions worldwide to mandate data privacy and protect consumer rights. A multi-region strategy can help ensure that specific data is only accessible to users in a certain geographical area. For example, many financial and banking institutions require that consumer data remain in the country or region of origin. This can easily be handled with a multi-region setup that restricts data storage and access to the region of origin.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing multi-region scaling with Fauna
&lt;/h2&gt;

&lt;p&gt;Deploying a multi-region infrastructure requires a lot of work: deploying databases and application backends in multiple regions; setting up routing mechanisms to direct the traffic properly; setting up replication strategies for data consistency between all databases; and setting up a backup and restoration strategy in case of problems. This can be a nightmare for teams to not only set up but also manage and monitor correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fauna.com/"&gt;Fauna&lt;/a&gt; can help you address these challenges. The distributed database, delivered as an API, is designed to be developer-friendly. It provides both great extensibility and a rich set of features, including the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Document-relational database:&lt;/strong&gt; Fauna stores data in the form of documents (like JSON objects) but also allows for relationships between documents via foreign keys, combining the ease of working with JSON with the querying power of traditional relational databases. It also provides strongly consistent, distributed, and guaranteed &lt;a href="https://en.wikipedia.org/wiki/ACID"&gt;ACID-compliant&lt;/a&gt; transactions, making it an ideal choice for all types of data storage needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless:&lt;/strong&gt; Fauna is delivered as an API running on a serverless architecture, which means there’s no infrastructure to manage. Developers can focus on developing the application while Fauna handles the infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Globally distributed:&lt;/strong&gt; Fauna is distributed globally and multi-region by default. It easily addresses GDPR compliance with its &lt;em&gt;Region Groups&lt;/em&gt; feature, which allows control over where data physically resides. Data can be easily split between regions with totally separate compute, databases, and storage. And since Fauna is cloud-agnostic, it works with all popular cloud providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managed data and infrastructure operations:&lt;/strong&gt; Fauna takes care of all the operations related to data and infrastructure management, including data sharding, replication, and capacity planning, so your team can focus on improving your application.&lt;/li&gt;
&lt;/ul&gt;
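&lt;p&gt;To make the document-relational idea concrete, here is a minimal sketch (plain Python, with hypothetical collection and field names) of JSON-like documents that reference one another by ID and are resolved with a join-style lookup, much as Fauna resolves references between documents:&lt;/p&gt;

```python
# Minimal sketch of document-relational modeling: JSON-like documents
# that reference each other by ID, resolved with a join-style lookup.
# Collection names, IDs, and fields here are hypothetical illustrations.

authors = {
    "author-1": {"name": "Ada", "country": "UK"},
}

posts = {
    "post-1": {"title": "Hello, Fauna", "author_ref": "author-1"},
}

def resolve_post(post_id: str) -> dict:
    """Return a post with its referenced author document embedded."""
    post = posts[post_id]
    return {**post, "author": authors[post["author_ref"]]}

resolved = resolve_post("post-1")
print(resolved["author"]["name"])
```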

&lt;p&gt;Other features include &lt;a href="https://docs.fauna.com/fauna/current/build/fql/udfs"&gt;user-defined functions (UDFs)&lt;/a&gt;, &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/streaming"&gt;event streaming&lt;/a&gt;, and support for popular programming languages like &lt;a href="https://docs.fauna.com/fauna/current/drivers/python"&gt;Python&lt;/a&gt;, &lt;a href="https://docs.fauna.com/fauna/current/drivers/javascript?lang=javascript"&gt;JavaScript&lt;/a&gt;, &lt;a href="https://docs.fauna.com/fauna/current/drivers/jvm"&gt;Java&lt;/a&gt;, &lt;a href="https://docs.fauna.com/fauna/current/drivers/jvm"&gt;Scala&lt;/a&gt;, &lt;a href="https://docs.fauna.com/fauna/current/drivers/csharp"&gt;C#&lt;/a&gt;, and &lt;a href="https://docs.fauna.com/fauna/current/drivers/go"&gt;Go&lt;/a&gt;. Fauna allows you to migrate data from existing sources, and its security features include attribute-based access control (ABAC), integration with Auth0 and other IdPs, and user identity and access management.&lt;/p&gt;

&lt;p&gt;With Fauna, you can easily address common issues like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Disaster recovery:&lt;/strong&gt; The inherently distributed nature of Fauna makes disaster recovery simple. Fauna takes care of monotonous tasks like database backups and globally distributing data, so it’s easier to recover and resume business operations from a down region.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User latency:&lt;/strong&gt; Fauna helps keep user latency down at scale thanks to its distributed multi-region setup: requests are automatically routed to the replica closest to each user, so data is served from the nearest region.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance with regulations:&lt;/strong&gt; Fauna was created with security and compliance in mind, and all its features are designed according to the AWS Well-Architected Framework and ISO 27000 controls. Fauna’s security controls are developed according to AICPA’s Trust Services Criteria, are SOC 2 certified, and comply with GDPR and HIPAA. This allows for scalability across regions while complying with major laws and regulations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Multi-region scaling can make a major difference in an organization’s ability to serve a global audience. Using it can minimize or eliminate service disruptions and data loss while ensuring high availability and low latency for end users. A multi-region-enabled architecture helps businesses continue to expand their global reach with confidence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fauna.com/"&gt;Fauna&lt;/a&gt; is a key part of a multi-region scaling strategy. It enables better infrastructure management, recovery options, regulatory compliance, and database design. &lt;a href="https://go.fauna.com/contact-us"&gt;Get a demo&lt;/a&gt; or &lt;a href="https://dashboard.fauna.com/accounts/register"&gt;sign up for free&lt;/a&gt; to see how Fauna can help you optimize your applications and improve your business.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Furqan Butt is a software developer and part of the AWS Community Builders Program for data and analytics.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fauna</category>
      <category>serverless</category>
      <category>database</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Overcoming database scaling issues with Fauna’s serverless offering</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Thu, 07 Jul 2022 18:43:05 +0000</pubDate>
      <link>https://forem.com/fauna/overcoming-database-scaling-issues-with-faunas-serverless-offering-1402</link>
      <guid>https://forem.com/fauna/overcoming-database-scaling-issues-with-faunas-serverless-offering-1402</guid>
      <description>&lt;p&gt;With the amount of data generated every second, scaling high-volume applications is hard. Scaling databases is harder, even if you’re hosting your database in the cloud. You need to choose the right database for your application, but what if you need data modeled in several different ways? Spinning up MongoDB as your document database, MySQL as your relational database, and a few additional databases might be an option, but that could prevent you from scaling your application.&lt;/p&gt;

&lt;p&gt;One solution is &lt;a href="https://docs.fauna.com/fauna/current/learn/introduction/what_is_fauna"&gt;Fauna&lt;/a&gt;, a &lt;a href="https://fauna.com/serverless"&gt;serverless document-relational database&lt;/a&gt; that gives you the freedom of a document-oriented database along with the safety of using a relational database. Although many databases now come with a serverless offering, Fauna is the only one designed and built as a serverless database to enable schema flexibility, easy integration, and reliable scaling.&lt;/p&gt;

&lt;p&gt;This article will take you through some of the prominent features of Fauna and demonstrate why it’s a better option than MongoDB or CockroachDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why you should choose Fauna
&lt;/h2&gt;

&lt;p&gt;Fauna stands out compared to competitors like MongoDB, CockroachDB, and DynamoDB. Because it’s natively serverless, it scales quickly and efficiently. It’s one of the few databases that solves the problem of globally distributed ACID transactions, which is an essential feature that neither MongoDB nor CockroachDB offers.&lt;/p&gt;

&lt;p&gt;The following is a closer look at what Fauna offers and why it’s superior to other databases.&lt;/p&gt;

&lt;h3&gt;
  
  
  It’s fully serverless
&lt;/h3&gt;

&lt;p&gt;There are a lot of databases that come with a serverless offering, such as &lt;a href="https://www.mongodb.com/cloud/atlas/serverless"&gt;MongoDB Atlas&lt;/a&gt;. However, MongoDB Atlas has essentially been repurposed to work as a serverless database and wasn’t designed for that. Meanwhile, Fauna is &lt;a href="https://fauna.com/client-serverless#why-fauna"&gt;only serverless&lt;/a&gt;. It’s designed to help you concentrate on making your product better rather than on managing database infrastructure just to keep your application running.&lt;/p&gt;

&lt;p&gt;Because it’s serverless, you don’t have to take care of time-consuming routine database operations (DBOps) tasks such as sharding, cluster management, replication, and version upgrades. You also don’t have to manage connection pools, because Fauna works on a serverless invocation model. Additionally, it doesn’t have a cold start problem.&lt;/p&gt;

&lt;p&gt;All of this makes Fauna much more developer-friendly than any other multipurpose database.&lt;/p&gt;

&lt;h3&gt;
  
  
  It scales automagically
&lt;/h3&gt;

&lt;p&gt;MongoDB was built to scale horizontally at the expense of ACID transactions, meaning that instead of immediate consistency, you get eventual consistency. This won’t work for many use cases. Moreover, sharding in MongoDB requires maintenance and downtime, and you must set a custom shard key per collection. As for resiliency, MongoDB can be vulnerable to failures caused by network partitions or cluster node outages.&lt;/p&gt;

&lt;p&gt;Fauna, however, was built from the get-go to be a globally distributed, multi-data-center, active-active, serverless database. It achieves high scalability because its Calvin-based architecture allows for both &lt;a href="https://codingcat.dev/podcast/2-5-scaling-transactional-data-globally-with-fauna"&gt;horizontal and vertical scaling&lt;/a&gt; and, unlike many other distributed databases, ensures that every node in a Fauna cluster performs the same roles as a &lt;a href="https://fauna-assets.s3.amazonaws.com/public/FaunaDB-Technical-Whitepaper.pdf"&gt;query coordinator, data replica, and log replica&lt;/a&gt;. The beauty of this architecture is that the provisioning and management of cluster nodes happen entirely behind the scenes without bothering you.&lt;/p&gt;

&lt;h3&gt;
  
  
  It offers global low latency
&lt;/h3&gt;

&lt;p&gt;Database calls are costly. To improve database performance and enhance the user experience by lowering response latency, you need to reduce the number of calls to your database as much as possible. Fauna does precisely that by &lt;a href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf"&gt;batching the transactions&lt;/a&gt; and applying them across geographical regions, so that you can update multiple collections in one query.&lt;/p&gt;

&lt;p&gt;Fauna still depends on external network latency, of course, but keeping latency low &lt;a href="https://fauna.com/blog/real-world-database-latency"&gt;lays the foundation&lt;/a&gt; for strong, immediate consistency. Many use cases, such as high-frequency trading, banking, or high-volume booking systems, require strong consistency to work. If you attempt to use MongoDB or CockroachDB for these use cases, you might not be able to scale them efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  It provides distributed ACID transactions
&lt;/h3&gt;

&lt;p&gt;Guaranteeing ACID transactions in a globally distributed database was a complex problem to solve, but Yale University’s &lt;a href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf"&gt;Calvin&lt;/a&gt; paper changed everything. &lt;a href="https://fauna.com/blog/serializability-vs-strict-serializability-the-dirty-secret-of-database-isolation-levels"&gt;FaunaDB implemented a system&lt;/a&gt; based on Calvin in which, according to Fauna, “Every read issued by an application is guaranteed to reflect the writes of transactions that have been completed before when the read was issued.”&lt;/p&gt;

&lt;p&gt;Databases such as CockroachDB don’t offer the same level of serializability. They have traditionally needed Network Time Protocol (NTP) clocks to be synchronized across nodes to ensure that serializability isn’t violated by time drift.&lt;/p&gt;

&lt;p&gt;In MongoDB, dirty reads and lost transactions are possible if and when the database fails. Because MongoDB doesn’t have read-side coordination for transactions spanning multiple shards, consistency can be violated: the &lt;a href="https://fauna.com/blog/comparison-of-transaction-models-in-document-databases"&gt;default read consistency level&lt;/a&gt; allows that to happen.&lt;/p&gt;

&lt;p&gt;Only Fauna has implemented a Calvin-style protocol that guarantees the highest level of transaction serializability without using a clock and without increasing latency. Fauna uses strict serializability to enable consistency across all nodes in all geographical regions and prevent stale reads. It achieves this by maintaining a global transaction log whose positions increase monotonically. To preserve this transactional ordering, each of Fauna’s data centers uses a synchronization scheme to share the log position with all query coordinators. Fauna also &lt;a href="https://status.fauna.com/"&gt;shares systems information publicly&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  It provides security natively
&lt;/h3&gt;

&lt;p&gt;Fauna has a host of database security features that you usually have to build separately for other databases. In addition to standard fine-grained &lt;a href="https://docs.fauna.com/fauna/current/security/abac"&gt;attribute-based access control&lt;/a&gt;, Fauna offers OAuth 2.0 integration out of the box. You can use an identity provider to authenticate your users to Fauna.&lt;/p&gt;

&lt;p&gt;Fauna’s key-based API is great at authenticating connections with particular permissions down to the level of database objects. Even after authentication, queries use further token-based authentication on a case-by-case basis. With another database, you’d probably need a token lease manager like &lt;a href="https://www.hashicorp.com/products/vault"&gt;HashiCorp Vault&lt;/a&gt; to do this. This feature isn’t available in either MongoDB or CockroachDB.&lt;/p&gt;

&lt;p&gt;Having said that, the security implementation of both CockroachDB and MongoDB is quite advanced, as both databases provide basic authentication, encryption, and authorization—but not all features are available in all the plans. Encryption at rest in CockroachDB, for instance, is only available in the Enterprise version, while MongoDB doesn’t have that restriction. Fauna, on the other hand, provides all of these features in all its plans.&lt;/p&gt;

&lt;h3&gt;
  
  
  It enables more powerful queries
&lt;/h3&gt;

&lt;p&gt;MongoDB and CockroachDB both have minimal support for custom or user-defined functions. Fauna uses an API-first approach to reading and writing data, meaning you can query the data using several interfaces (such as GraphQL) by exposing its API. Fauna comes with its own query language, FQL, which is an &lt;a href="https://docs.fauna.com/fauna/current/api/fql/"&gt;expression-oriented query language&lt;/a&gt;. You can also write user-defined functions (UDFs). In contrast, CockroachDB doesn’t support GraphQL by default, but you can find ways to work around that. MongoDB provides support for GraphQL via the Atlas App Services API.&lt;/p&gt;

&lt;p&gt;In FQL, every expression returns a value. This enables you to use complex control structures and conditionals to process and compute your data into the shape you want. You can also mix these interfaces when developing your applications. Whether you’re querying documents, relational data, graph data, or a combination of all three, you should be able to do it easily with Fauna.&lt;/p&gt;

&lt;p&gt;FQL also allows you to query different types of data. For instance, you can use Fauna’s &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/temporality"&gt;temporality&lt;/a&gt; feature to issue point-in-time (PIT) queries. This is especially useful when dealing with time series data or &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/streaming"&gt;streaming data&lt;/a&gt;, both of which Fauna supports. While MongoDB &lt;a href="https://www.mongodb.com/docs/manual/changeStreams/"&gt;natively supports streaming data&lt;/a&gt;, CockroachDB has &lt;a href="https://www.cockroachlabs.com/blog/from-batch-to-streaming-data-real-time-monitoring-with-snowflake-looker-and-cockroachdb/"&gt;limited support for streaming&lt;/a&gt;, which you can extend by building on top of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are multiple reasons why Fauna is a better choice for building highly scalable, blazing-fast applications. Its Calvin-based architecture makes Fauna one of a kind, providing globally distributed transactions with the highest level of ACID compliance possible.&lt;/p&gt;

&lt;p&gt;With its support for GraphQL and FQL, Fauna also proves to be highly developer-friendly. It gives you the option to use document-oriented database storage and retrieval while providing a relational-database-like experience.&lt;/p&gt;

&lt;p&gt;Fauna offers fully managed provisioning, maintenance, scaling, sharding, replication, and correctness. Using Fauna, all you have to do is focus on creating great applications.&lt;/p&gt;

&lt;p&gt;To see how Fauna can help you, &lt;a href="https://go.fauna.com/contact-us"&gt;request a demo&lt;/a&gt; or &lt;a href="https://dashboard.fauna.com/accounts/register"&gt;sign up&lt;/a&gt; and start using Fauna for free.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Kovid Rathee is a data and infrastructure engineer working as a senior consultant at Servian in Melbourne, Australia.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fauna</category>
      <category>database</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Serverless patterns reference architectures</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Wed, 15 Jun 2022 19:31:56 +0000</pubDate>
      <link>https://forem.com/fauna/serverless-patterns-reference-architectures-i8k</link>
      <guid>https://forem.com/fauna/serverless-patterns-reference-architectures-i8k</guid>
      <description>&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Systems_design" rel="noopener noreferrer"&gt;Systems design&lt;/a&gt; is one of the toughest parts of the software development lifecycle (SDLC). Software architects frequently rely on reference architectures because such tools help them more easily develop and maintain serverless and other complex systems.&lt;/p&gt;

&lt;p&gt;Reference architectures for &lt;a href="https://fauna.com/blog/serverless-architecture" rel="noopener noreferrer"&gt;serverless applications&lt;/a&gt; are a high-level overview of how to solve a particular problem. By following them, you’re building on the knowledge of other teams and utilizing industry best practices. Because many of them were created by companies like Google, Microsoft, and Amazon, they also work well at a large scale.&lt;/p&gt;

&lt;p&gt;Different &lt;a href="https://aws.amazon.com/lambda/resources/reference-architectures/" rel="noopener noreferrer"&gt;reference architectures are available for&lt;/a&gt; different types of systems, such as a mobile or IoT backend, real-time file-processing function, or web application. Reviewing your options helps you to choose the best one for your use case. The added efficiency such architectures provide can also lower the total cost of ownership (TCO) for your organization. To further optimize your architectures, you can use &lt;a href="https://fauna.com/" rel="noopener noreferrer"&gt;Fauna&lt;/a&gt;, a distributed document-relational database that's ACID by default.&lt;/p&gt;

&lt;p&gt;This article will break down serverless reference architectures and examine some common examples while also demonstrating how Fauna makes it easier to implement them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need serverless patterns reference architectures?
&lt;/h2&gt;

&lt;p&gt;Serverless app development can benefit from organized architectures and well-tested development techniques. Following are some of the major reasons why you should consider using serverless patterns reference architectures:&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimized resource consumption
&lt;/h3&gt;

&lt;p&gt;While serverless already cuts resource consumption and cost, the right design can take that even further. Well-designed serverless apps automatically cut down on wasted execution cycles and give you the best performance-to-cost ratio.&lt;/p&gt;

&lt;p&gt;If you’re trying to build a simple static web app, going with an n-tier architecture might not make sense; it might even add overhead to your development process. Similarly, relying on a simple web app architecture for an app that scales to hundreds of thousands of users might not be your best choice. This is why you need the right reference architecture: it lets you optimize your app by relying on a tested pattern.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event streaming and processing
&lt;/h3&gt;

&lt;p&gt;The serverless model relies on invocations and execution cycles. When you choose to build your app using one of the popular serverless reference patterns, you get this feature built into your design. Each of your system’s components communicates with the others using events and messages. This enables them to work asynchronously and increases their resilience. If an invocation fails due to some bad values, the component doesn’t crash; it just records a failure and waits for the next event.&lt;/p&gt;

&lt;p&gt;Event-based execution also offers powerful queuing capabilities. If any of your components receives an unusually high number of requests, it can queue them up and work according to its own capacity. No requests are lost, and your system won’t blow up. Patterns like &lt;a href="https://www.jeremydaly.com/the-scalable-webhook/" rel="noopener noreferrer"&gt;the scalable webhook&lt;/a&gt; are excellent examples of how the right design can help you solve major problems with ease.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trigger-based processing
&lt;/h3&gt;

&lt;p&gt;Apps built using serverless patterns can take advantage of trigger-based processing techniques. Instead of a live application running on a server waiting for user requests to come in, your app becomes a function that’s triggered when a request comes in. This request could be for anything from image compression to serving a full-fledged web app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless patterns reference architectures with Fauna
&lt;/h2&gt;

&lt;p&gt;Now that you’ve seen how serverless patterns reference architectures can benefit you, here are five important architectures you should know. Additionally, you’ll see how these architectures are reinforced by using Fauna instead of traditional databases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a basic web application
&lt;/h2&gt;

&lt;p&gt;Building a basic web application that serves content to users as well as enables them to persist data into databases is simple with serverless. This is known as the &lt;a href="https://www.jeremydaly.com/simple-web-service/" rel="noopener noreferrer"&gt;simple web service&lt;/a&gt; pattern. Here’s how the architecture of your app would look:&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/4wKymsLMVhJPyE70UbIvRg/387dcaf4fe8dd33f3ab3494440761757/Simple_web_service_pattern_with_Fauna.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/4wKymsLMVhJPyE70UbIvRg/387dcaf4fe8dd33f3ab3494440761757/Simple_web_service_pattern_with_Fauna.png" alt="Simple web service pattern with Fauna"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow here is also simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client sends a request to the app, such as viewing the home page or changing user profile details.&lt;/li&gt;
&lt;li&gt;The request is received by the API gateway and forwarded to the serverless function.&lt;/li&gt;
&lt;li&gt;The serverless function interacts with the data store if necessary.&lt;/li&gt;
&lt;li&gt;The data store returns any required data.&lt;/li&gt;
&lt;li&gt;The function sends the response to the API gateway.&lt;/li&gt;
&lt;li&gt;The API gateway responds to the client with the response.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is easy to implement. As with any serverless app, the serverless function replaces a live 24-7 web app server waiting to receive requests. An important thing to realize here is that talking to a traditional database over TCP/IP would have required a 24-7 web server to hold the connection; Lambda functions can’t do that, because they spin down as soon as they reach the idle state.&lt;/p&gt;
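&lt;p&gt;As a rough sketch of steps 2–5, a function in this pattern is just a handler that receives an event from the API gateway, touches the data store if needed, and returns a response. The event shape and field names below are hypothetical, and an in-memory dict stands in for Fauna:&lt;/p&gt;

```python
# Sketch of a simple-web-service handler. An in-memory dict stands in
# for the real data store; the event shape and fields are hypothetical.

data_store = {"user-42": {"name": "Alice"}}

def handler(event: dict) -> dict:
    """Serverless function invoked by the API gateway once per request."""
    user_id = event.get("user_id")
    user = data_store.get(user_id)
    if user is None:
        return {"status": 404, "body": "not found"}
    return {"status": 200, "body": user}

response = handler({"user_id": "user-42"})
```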

&lt;p&gt;Using Fauna instead of traditional databases provides you with a quick and ready-to-go database setup. If you choose to host your app on a platform like &lt;a href="https://docs.fauna.com/fauna/current/build/integrations/vercel" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt; or &lt;a href="https://docs.fauna.com/fauna/current/build/integrations/netlify" rel="noopener noreferrer"&gt;Netlify&lt;/a&gt;, Fauna offers powerful integration abilities with them and can provide you with an enriched experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating legacy applications
&lt;/h2&gt;

&lt;p&gt;A common issue with application maintenance is migrating legacy apps to newer technologies. The hard part is deciding how the migration will be carried out. In many cases, it’s not possible to decommission the old app until the new app is ready. A popular serverless pattern known as &lt;a href="https://www.jeremydaly.com/the-strangler-pattern/" rel="noopener noreferrer"&gt;the Strangler&lt;/a&gt; can help you in such cases.&lt;/p&gt;

&lt;p&gt;The idea behind this pattern is to allow the developers to migrate parts of the legacy app to the new serverless tech stack. Meanwhile, the old application runs in parallel with the new growing app. Here’s how that looks during the migration process:&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/4m1ayUXAekyUFm7adMRDpN/3dd03b961fbd51f992b86a0db7d3f661/Strangler_pattern_with_Fauna.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/4m1ayUXAekyUFm7adMRDpN/3dd03b961fbd51f992b86a0db7d3f661/Strangler_pattern_with_Fauna.png" alt="Strangler pattern with Fauna"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can break down your legacy app into multiple functions while migrating it to the new tech stack, as shown by the two functions above. When your API gateway receives a new request, it knows whether to route that request to one of your new functions or to your legacy app.&lt;/p&gt;
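&lt;p&gt;The routing decision described above can be sketched as a simple lookup table at the gateway layer (routes and handler names are hypothetical):&lt;/p&gt;

```python
# Sketch of Strangler-pattern routing: requests for already-migrated
# routes go to new serverless functions; everything else still hits the
# legacy app. Route names and handlers are hypothetical.

def new_profile_function(request: dict) -> str:
    return f"new: profile for {request['user']}"

def legacy_app(request: dict) -> str:
    return f"legacy: {request['path']}"

# Grows one entry at a time as pieces of the legacy app are migrated.
MIGRATED_ROUTES = {"/profile": new_profile_function}

def gateway(request: dict) -> str:
    handler = MIGRATED_ROUTES.get(request["path"], legacy_app)
    return handler(request)

print(gateway({"path": "/profile", "user": "ada"}))  # handled by a new function
print(gateway({"path": "/orders", "user": "ada"}))   # still handled by the legacy app
```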

&lt;p&gt;The data stores can be managed independently of the migration process. Once you’re done migrating your app, you can migrate your existing database using the &lt;a href="https://fauna.com/blog/evolving-the-structure-of-your-fauna-database" rel="noopener noreferrer"&gt;in-depth process&lt;/a&gt; laid out by Fauna, which not only migrates your data but also enriches it on the way.&lt;/p&gt;

&lt;p&gt;Once your data is migrated to Fauna, it will be highly available to each of your functions, and it will reduce the infrastructural load from your traditional database management system (DBMS).&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling unpredictable workloads
&lt;/h2&gt;

&lt;p&gt;For many apps, user traffic varies greatly. When you’re building with traditional technologies, accounting for major spikes in traffic and workload can be difficult. However, serverless was built with scalability in mind. The &lt;a href="https://www.jeremydaly.com/the-scalable-webhook/" rel="noopener noreferrer"&gt;scalable webhook&lt;/a&gt; pattern serves this use case perfectly. It’s meant mostly for high-traffic webhooks, and it relies on queueing and throttling to reduce the load on the main handler (the serverless function).&lt;/p&gt;

&lt;p&gt;Here’s what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/fF758guHFdVsQyGh8OgLB/0f80797c3e24356648d0822781c13f3b/Scalable_webhook_with_Fauna.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/fF758guHFdVsQyGh8OgLB/0f80797c3e24356648d0822781c13f3b/Scalable_webhook_with_Fauna.png" alt="Scalable webhook with Fauna"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s how the flow goes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client sends a request to the API gateway.&lt;/li&gt;
&lt;li&gt;The API gateway sends the request to the first serverless function.&lt;/li&gt;
&lt;li&gt;The serverless function adds the request to the queue and sends an acknowledgment back to the API gateway.&lt;/li&gt;
&lt;li&gt;The API gateway sends the confirmation back to the client.&lt;/li&gt;
&lt;li&gt;The second serverless function is subscribed to the queue and picks up on any incoming requests.&lt;/li&gt;
&lt;li&gt;The function accesses the data store if necessary, performs the required operation, and marks the request as completed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can have another queue component that tracks requests marked as failed by the second serverless function. Since the second serverless function is decoupled from the primary stream of incoming requests, it can moderate the rate at which it picks up tasks, also known as &lt;em&gt;throttling&lt;/em&gt;. You can also have more than one serverless function picking up requests from the queue if there is more than one independent operation associated with them.&lt;/p&gt;
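&lt;p&gt;The six steps above can be sketched with an in-process queue standing in for the real message queue (all names are hypothetical, and throttling is reduced to a per-invocation batch size):&lt;/p&gt;

```python
import queue

# Sketch of the scalable-webhook flow: the first function only enqueues
# and acknowledges; a second, throttled function drains the queue.
# An in-process queue.Queue stands in for a real message queue.

request_queue: "queue.Queue[dict]" = queue.Queue()
completed = []

def ingest_function(request: dict) -> dict:
    """First serverless function: enqueue and acknowledge immediately."""
    request_queue.put(request)
    return {"status": "accepted"}

def worker_function(batch_size: int = 2) -> None:
    """Second serverless function: pick up at most batch_size requests
    per invocation (throttling), process them, mark them completed."""
    for _ in range(batch_size):
        if request_queue.empty():
            break
        completed.append(request_queue.get())

for i in range(3):
    ingest_function({"id": i})

worker_function()  # drains up to 2 requests this invocation
worker_function()  # drains the remainder
```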

&lt;p&gt;Fauna can help with this use case by providing a single source of truth for all of your worker functions, tying them together to achieve higher working bandwidth for your service. Moreover, Fauna works better than other serverless databases like DynamoDB for a number of reasons, such as ACID compliance and consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tracking failed API requests
&lt;/h2&gt;

&lt;p&gt;Another common use case for a web service is to serve user requests while tracking those that an external API failed to handle. This data can be used to judge whether the next calls should be sent to the external API or whether the client should just get an automated failure response. This helps reduce unnecessary calls to the external API, which would still result in a failed response but would add to your costs.&lt;/p&gt;

&lt;p&gt;Here’s what this architecture looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/po4qc9xpmpuh/3LGMOhTHYiQV5sOjNUCltX/273f6df9521b2894318a2a355d6223b3/Tracking_failed_requests_with_Fauna.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/po4qc9xpmpuh/3LGMOhTHYiQV5sOjNUCltX/273f6df9521b2894318a2a355d6223b3/Tracking_failed_requests_with_Fauna.png" alt="Tracking failed requests with Fauna"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client sends the request to the API gateway, which forwards it to the right serverless function.&lt;/li&gt;
&lt;li&gt;Before calling the external API, the function checks the data store to see if the API has been reporting failures lately. A Boolean flag is flipped to false in the data store once the number of failure responses from the external API crosses a certain threshold.&lt;/li&gt;
&lt;li&gt;The false state means that the serverless function isn’t allowed to call the external API anymore. It returns with a failure response to the API gateway and subsequently to the client.&lt;/li&gt;
&lt;li&gt;Periodically, a request is forwarded to the external API just to check whether it’s available again. If it is, the flag is set back to true, and incoming requests are once again allowed to go to the external API.&lt;/li&gt;
&lt;/ol&gt;
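&lt;p&gt;This flow is essentially the circuit-breaker pattern. Here is a minimal in-memory sketch; the threshold and names are hypothetical, and in the real architecture the flag and failure count would live in the shared data store so that every function instance sees the same state:&lt;/p&gt;

```python
# Circuit-breaker sketch for the flow above. The availability flag and
# failure count would live in the shared data store (e.g. Fauna) so all
# function instances agree on them; here a dict stands in.

THRESHOLD = 3
state = {"failures": 0, "api_available": True}

def call_external_api(succeed: bool) -> str:
    """Stand-in for the external API; `succeed` simulates its health."""
    if not succeed:
        raise RuntimeError("external API failure")
    return "ok"

def handle_request(api_healthy: bool) -> str:
    if not state["api_available"]:
        return "failure: circuit open"      # skip the call entirely
    try:
        result = call_external_api(api_healthy)
        state["failures"] = 0
        return result
    except RuntimeError:
        state["failures"] += 1
        if state["failures"] >= THRESHOLD:  # too many failures: flip the flag
            state["api_available"] = False
        return "failure: api error"

def periodic_health_check(api_healthy: bool) -> None:
    """Periodically probe the API and reopen traffic if it has recovered."""
    if api_healthy:
        state.update(failures=0, api_available=True)
```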

&lt;p&gt;Fauna improves this flow by providing a fast and easy-to-set-up data API that you can use to implement this architecture. If your clients are distributed, Fauna gives you an edge over an eventually consistent database, since Fauna is distributed and consistent by default. Otherwise, a client in one region might think that the API is active when it has already been marked unresponsive by a status check made from a different region.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storing data before processing
&lt;/h2&gt;

&lt;p&gt;In some cases, you might want to store user data in your database or data store before you process it. Instead of making the client wait for your serverless functions to finish operating on the data, you can persist it first and let the functions run afterward. This can sometimes be necessary to reduce latency in your endpoints.&lt;/p&gt;

&lt;p&gt;Here’s what this architecture looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgur.com%2FTIspEHF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgur.com%2FTIspEHF.png" alt="Storage-first service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client sends a request to the API gateway, which forwards it to the first serverless function.&lt;/li&gt;
&lt;li&gt;The function persists the data in the data store and returns a confirmation to the client.&lt;/li&gt;
&lt;li&gt;The second serverless function picks up requests and their data from the data store, processes them, and then stores them back in.&lt;/li&gt;
&lt;/ol&gt;
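&lt;p&gt;The storage-first flow can be sketched as follows, with a dict standing in for the data store and hypothetical field names:&lt;/p&gt;

```python
# Storage-first sketch: the first function persists the raw request and
# confirms immediately; a second function later processes pending records
# and stores the results back. A dict stands in for the data store.

store: dict[int, dict] = {}

def ingest(request_id: int, payload: str) -> dict:
    """First function: persist the raw request, confirm right away."""
    store[request_id] = {"payload": payload, "status": "pending"}
    return {"status": "stored", "id": request_id}

def process_pending() -> int:
    """Second function: process pending records, store results back in."""
    processed = 0
    for record in store.values():
        if record["status"] == "pending":
            record["result"] = record["payload"].upper()  # stand-in work
            record["status"] = "done"
            processed += 1
    return processed

ingest(1, "hello")
ingest(2, "world")
count = process_pending()
```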

&lt;p&gt;Some might consider using a different data store for storing incoming requests and processed results, but that would just bring you closer to the scalable webhook pattern, in which the data store for storing incoming requests is a dedicated queue.&lt;/p&gt;

&lt;p&gt;Fauna provides a highly available, fast, managed data-storage alternative that makes it easy for you to use it as the queue store as well as the results store.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Serverless reference architectures give you guidelines to follow when creating your own serverless projects, saving you both time and money as well as the headache of solving problems that others in the industry have already addressed. You can build on the expertise of others to achieve more success with your code.&lt;/p&gt;

&lt;p&gt;As you’ve seen throughout this article, &lt;a href="https://fauna.com/" rel="noopener noreferrer"&gt;Fauna&lt;/a&gt; can be an invaluable tool in this process. The serverless transactional database and data API help you create new applications or migrate and improve on existing projects while enabling easy scalability and auth integrations. Fauna offers event streaming, data imports, and distributed compute and storage, as well as other features to help with serverless reference architectures.&lt;/p&gt;

&lt;p&gt;To see more of what Fauna can do for you, &lt;a href="https://go.fauna.com/contact-us" rel="noopener noreferrer"&gt;sign up for a demo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Kealan Parr is a senior software engineer and technical writer who works with clients on technical reviewing and software development. He is a member of the Unicode Consortium.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>fauna</category>
      <category>webdev</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Comparing DynamoDB and Fauna for multi-region data stores</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Fri, 20 May 2022 16:37:19 +0000</pubDate>
      <link>https://forem.com/fauna/comparing-dynamodb-and-fauna-for-multi-region-data-stores-96p</link>
      <guid>https://forem.com/fauna/comparing-dynamodb-and-fauna-for-multi-region-data-stores-96p</guid>
      <description>&lt;p&gt;One of the critical architectural decisions you must make when designing modern applications is selecting the right data storage technology. Not only can this decision be expensive to change later, but it also deeply affects your application’s availability, security, and performance. &lt;/p&gt;

&lt;p&gt;Depending on the size and reach of your organization, you might need to replicate your data to multiple geographical locations to serve a global base of users. Multi-region storage can reduce access latency, speed up recovery, and help you manage legal compliance for your data.&lt;/p&gt;

&lt;p&gt;There are many services available to provide multi-region storage, including Amazon DynamoDB and Fauna. &lt;a href="https://aws.amazon.com/dynamodb/"&gt;Amazon DynamoDB&lt;/a&gt; is an AWS-managed service that provides a serverless key-value NoSQL database. &lt;a href="https://fauna.com/"&gt;Fauna&lt;/a&gt; is a multi-region, serverless, document-relational database. Because these two services use different data storage methods, their &lt;a href="https://fauna.com/blog/compare-fauna-vs-dynamodb"&gt;terminology and features&lt;/a&gt; will vary.&lt;/p&gt;

&lt;p&gt;This article will compare the behavior of these two NoSQL databases in multi-region deployments. You’ll learn more about how they are configured for multi-region deployments, how their distributed transaction behaviors differ, and how they select the closest replica, so that you can decide which database would be a better choice for your projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring multi-region deployments
&lt;/h2&gt;

&lt;p&gt;Running multi-region data stores comes with its own challenges. Due to the geographically distributed nature of these systems, network latency and interruptions can cause major performance and reliability issues.&lt;/p&gt;

&lt;p&gt;By default, DynamoDB tables are deployed within a single AWS region. To achieve multi-region deployment, a &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html"&gt;global DynamoDB table&lt;/a&gt; has to be created instead.&lt;/p&gt;

&lt;p&gt;A global DynamoDB table is composed of multiple regional replicas. Each one of these replicas should be created and managed independently (such as in their capacity provisioning and storage class) and should be named identically. Queries can be run on any replica, and AWS will ensure the data is &lt;a href="https://en.wikipedia.org/wiki/Eventual_consistency"&gt;eventually consistent&lt;/a&gt; across all the replicas.&lt;/p&gt;

&lt;p&gt;Fauna provides a different level of abstraction. Instead of configuring each regional replica independently, each database is configured to a specific &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/region_groups"&gt;region group&lt;/a&gt;. Replicas can run in the United States, Europe, or across both. Fauna uses a &lt;a href="https://fauna.com/blog/consistency-without-clocks-faunadb-transaction-protocol"&gt;Calvin-inspired&lt;/a&gt; transaction engine to provide &lt;a href="https://fauna.com/blog/distributed-consistency-at-scale-spanner-vs-calvin"&gt;distributed consistency&lt;/a&gt; and a high-performance system.&lt;/p&gt;

&lt;p&gt;While global DynamoDB tables are available in more regions than Fauna (in Asia, for instance), additional setup and maintenance are necessary to run them. Fauna is more straightforward to set up across multiple regions, but it currently supports only the United States, Europe, or both as available region groups. This means your best choice depends on which regions your organization serves. If you only do business in the US and Europe, Fauna will work better for you. If you have users in other regions, you may prefer DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using database transactions
&lt;/h2&gt;

&lt;p&gt;Database transactions atomically perform database operations that affect multiple data elements like tables and fields. A database transaction’s atomicity ensures &lt;a href="https://fauna.com/blog/using-acid-transactions-to-combine-queries-and-ensure-integrity"&gt;data consistency&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With DynamoDB, it’s possible to bundle independent queries via &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html"&gt;transactions&lt;/a&gt;. However, the best practice is to use regular, non-transactional queries wherever possible and use transactions only when necessary. &lt;a href="https://en.wikipedia.org/wiki/ACID"&gt;ACID compliance&lt;/a&gt; is only guaranteed in DynamoDB transactions when performed within the same region. It’s not available for global tables.&lt;/p&gt;

&lt;p&gt;Transactions can target up to twenty-five items in DynamoDB tables in the same region within the same AWS account. Multiple tables can be included in the same transaction. Note that read and write queries can’t be mixed in the same transaction—for example, multiple write operations can be bundled into a transaction, but a read operation will have to use a separate transaction. The aggregated size of all items in a transaction can’t exceed 4 MB.&lt;/p&gt;
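&lt;p&gt;These limits can be checked client-side before a batch is submitted. The sketch below is illustrative only; in particular, the size estimate is much cruder than DynamoDB’s actual item-size accounting:&lt;/p&gt;

```python
MAX_ITEMS = 25            # per-transaction item limit (same region, same account)
MAX_BYTES = 4 * 1024**2   # 4 MB aggregate size limit

def validate_transaction(operations):
    """operations: list of (kind, item) pairs, where kind is "read" or "write".
    Raises ValueError if the batch violates the documented limits."""
    if len(operations) > MAX_ITEMS:
        raise ValueError(f"transaction targets {len(operations)} items; max is {MAX_ITEMS}")
    kinds = {kind for kind, _ in operations}
    if kinds == {"read", "write"}:
        raise ValueError("read and write operations cannot share a transaction")
    # Crude stand-in for DynamoDB's item-size calculation:
    total = sum(len(repr(item).encode()) for _, item in operations)
    if total > MAX_BYTES:
        raise ValueError("aggregate item size exceeds 4 MB")
    return True
```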

&lt;p&gt;Due to the nature of DynamoDB, serializability between transactions and other queries is provided only in some cases. Developers need to ensure they understand the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html#transaction-isolation"&gt;documentation&lt;/a&gt;, because caveats might apply to their systems.&lt;/p&gt;

&lt;p&gt;On the other hand, Fauna was designed to be a fully transactional database, treating transactions as a first-class concept. Fauna’s database engine was built to provide strictly serializable transactions and guarantee low latencies even between geographically distributed replicas. As a result, transactions can include any number of documents and can be up to &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/limits"&gt;16 MB in size&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While DynamoDB is designed to run a high number of small, independent queries, Fauna is designed to support transactions that keep the data in a consistent state at all times. DynamoDB best practices prescribe using transactions sparingly, whereas in Fauna, all queries are transactions. If your application depends heavily on transactions and can’t tolerate incomplete or inconsistent data, Fauna may be the better choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using transactions in multi-region deployments
&lt;/h2&gt;

&lt;p&gt;As noted above, there are transaction limitations that apply to both single-region and multi-region data stores. Because of the distributed nature of multi-region deployments, though, there are several other caveats when using transactions.&lt;/p&gt;

&lt;p&gt;DynamoDB transactions, as previously noted, apply only to the region where the transaction was initiated. After a transaction completes successfully in the source region, the changes are propagated to all replicas. Because transactions are region-specific by design, there’s no ACID compliance across regions.&lt;/p&gt;

&lt;p&gt;This means that you can potentially see partially completed transactions in different replicas until DynamoDB finishes the replication. The application needs to be architected to handle such data anomalies and possible &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html#transaction-conflict-handling"&gt;transaction conflicts&lt;/a&gt;. Your application needs to treat all data read from a replica as if there were no concept of a transaction, but rather just individual, independent queries. This also needs to be addressed if you are caching the data, since you might have cached inconsistent data.&lt;/p&gt;

&lt;p&gt;With Fauna, transaction guarantees are fully supported in all configurations, including multi-region setups. Fauna’s engine will automatically ensure that no replica is exposed to transaction anomalies like &lt;a href="https://en.wikipedia.org/wiki/Isolation_(database_systems)"&gt;dirty or phantom reads&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If your application can’t tolerate inconsistent data, or if ACID compliance is required from all replicas, Fauna will be the better choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Routing connections to the nearest replica
&lt;/h2&gt;

&lt;p&gt;Application architectures often use multi-region data stores to reduce latency for users. This ensures that the data is stored closest to users’ geographical locations.&lt;/p&gt;

&lt;p&gt;With DynamoDB, your application must provide a specific endpoint to choose the region for accessing the nearest replica table. Global DynamoDB tables ensure data is synchronized between replicas, but your application still needs to decide which replica to use and how to access it. That means a few more coding and configuration changes are necessary, and your application must identify the closest region and change the AWS region accordingly. For example, an EC2 instance in &lt;code&gt;eu-west-1&lt;/code&gt; should reach a DynamoDB replica in &lt;code&gt;eu-west-2&lt;/code&gt; instead of &lt;code&gt;us-east-1&lt;/code&gt; if those are the two regions the global DynamoDB table is deployed to.&lt;/p&gt;
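&lt;p&gt;As an illustration, that region-selection logic might be reduced to a small helper like the one below. The geographic-prefix heuristic is an assumption made for this sketch; real deployments often use a static region mapping or latency measurements instead:&lt;/p&gt;

```python
# Illustrative only: pick which global-table replica region a client running
# in `app_region` should target. Prefers a replica in the same geography
# (same region-name prefix, e.g. "eu" or "us"), falling back to the first
# configured replica otherwise.

def nearest_replica(app_region, replica_regions):
    prefix = app_region.split("-")[0]
    same_geo = [r for r in replica_regions if r.startswith(prefix + "-")]
    return same_geo[0] if same_geo else replica_regions[0]
```

&lt;p&gt;For the example above, an application in &lt;code&gt;eu-west-1&lt;/code&gt; would be routed to the &lt;code&gt;eu-west-2&lt;/code&gt; replica rather than &lt;code&gt;us-east-1&lt;/code&gt;.&lt;/p&gt;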

&lt;p&gt;With Fauna, no extra code or configuration is necessary to select the fastest route. Client requests &lt;a href="https://docs.fauna.com/fauna/current/learn/understanding/region_groups"&gt;will always be automatically routed&lt;/a&gt; to the closest replica every time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article demonstrated how DynamoDB and Fauna differ in multi-region setups. Though both services are strong choices, as you saw, they are each best for different use cases.&lt;/p&gt;

&lt;p&gt;If your application needs strong ACID compliance across multi-region replicas, Fauna may be a better choice. It eliminates data anomalies in replicas by serializing transactions. Fauna offers fewer regions than DynamoDB, but it requires less configuration tweaking and maintenance. Additionally, it handles multi-region configuration and connection routing transparently.&lt;/p&gt;

&lt;p&gt;If you’re interested in learning more about Fauna, its developer-friendly &lt;a href="https://fauna.com/features"&gt;features&lt;/a&gt;, and data API, you can &lt;a href="https://dashboard.fauna.com/accounts/register"&gt;sign up for free&lt;/a&gt; to give it a try.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cintia Del Rio helps companies improve their infrastructure in the cloud. An engineering manager at Envato, she has been working with infrastructure and DevOps for more than ten years, and before that she was a developer. She has been the lead in infrastructure for the OpenMRS open source community for the past seven years.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fauna</category>
      <category>serverless</category>
      <category>aws</category>
    </item>
    <item>
      <title>Introduction to serverless databases</title>
      <dc:creator>Fauna</dc:creator>
      <pubDate>Thu, 12 May 2022 18:51:15 +0000</pubDate>
      <link>https://forem.com/fauna/introduction-to-serverless-databases-2c25</link>
      <guid>https://forem.com/fauna/introduction-to-serverless-databases-2c25</guid>
      <description>&lt;p&gt;Serverless is a new paradigm in which servers operate on a more automated level, freeing developers from the time and effort of managing them. This is an advantage when it comes to development because developers and engineers don’t need to handle as much in terms of infrastructure, which can be time-consuming and expensive if you don't have the necessary in-house expertise.&lt;/p&gt;

&lt;p&gt;Similarly, a serverless database takes the features of a traditional database and combines them with the values and flexibility of a serverless architecture. Working with a serverless database reduces much of the complexity of a database into a simple cloud-based API. It can provide an organization with more automated scaling, stronger resilience, and reduced time to market.&lt;/p&gt;

&lt;p&gt;In this guide, you’ll learn what a serverless database is and how it works, as well as more about the benefits it can offer to your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a serverless database?
&lt;/h2&gt;

&lt;p&gt;A serverless database is any database that scales automatically to meet the changing demands of an application and handles unexpected workloads that cannot be predicted or scheduled. &lt;/p&gt;

&lt;p&gt;The benefits of serverless computing include only paying for the resources you use, scaling up and down to match demand, eliminating the need to manage servers, and lowering costs. If you use a non-serverless database in a serverless computing architecture, you lose these advantages. The major feature of a serverless database is its ability to adjust capacity based on its workload.&lt;/p&gt;

&lt;p&gt;A serverless database works when and wherever it is needed. A service provider will manage the database for you, including the provisioning of instances or clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should you use a serverless database?
&lt;/h2&gt;

&lt;p&gt;A serverless database enables developers to work on projects without needing a specialized hardware platform or needing to worry about getting necessary resources for their applications.&lt;/p&gt;

&lt;p&gt;When you use a serverless database, you avoid several problems associated with a traditional setup. You save on costs because you only pay for what you use, and you save time because you don’t have to spend it patching, provisioning, and managing servers. A serverless database also improves security by ensuring that all applications interacting with the same data set pass the same access control, thus reducing the attack surface.&lt;/p&gt;

&lt;p&gt;A serverless database can be used for prototype testing, automatic website scaling, and continuous integration/continuous deployment (CI/CD) practices.&lt;/p&gt;

&lt;p&gt;There are several key features of a serverless database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic elastic scaling and geographical distribution:&lt;/strong&gt; You don’t have to worry about sharding or whether your database can handle a sudden spike in traffic, because a serverless database scales up and down automatically to meet demand. This means scaling all the way down to zero when the database isn’t being utilized, while promptly responding when a query arrives. A serverless database can also scale geographically, moving and storing data dynamically around the globe to minimize latency and give users worldwide a consistently fast experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Native resilience and fault tolerance:&lt;/strong&gt; You also won’t have to worry about faulty storage nodes or zone disruptions bringing your services down. A serverless database can survive node and zone outages, as well as implement software updates and online schema changes, without any planned downtime required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplicity and familiarity:&lt;/strong&gt; Working with your database is as simple and intuitive as using an API. With features such as self-service starts, completely managed operations, and the ability to create clusters with the press of a button or a single command, a serverless database makes life simpler for everyone who works with it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consumption-based billing:&lt;/strong&gt; You don’t pay for storage and compute resources you’re not using. A serverless database only bills you for resources you used, and you can set spending limits so that you don’t overrun your budget.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ACID:&lt;/strong&gt; You don’t have to sacrifice consistency for scale. Some serverless databases like Fauna provide the needed atomicity, consistency, isolation, and durability (ACID) properties to your transactions without sacrificing speed, no matter what scale you’re operating at.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Serverless databases are ideal for smaller businesses because they don’t require much maintenance, infrastructure support, or labor time. They work well for the following use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prototyping and testing:&lt;/strong&gt; With increasing competition across all industries, it is critical to produce prototypes and gather meaningful feedback from users as soon as possible. Since serverless database features such as self-serve, speed, and pay-per-use billing are both economical and relatively fast to implement, you can focus on writing code and responding to user feedback in the testing environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced time to market:&lt;/strong&gt; Serverless databases can help you drastically reduce time to market. Instead of performing complex deployment procedures to push out bug patches and new features, you can add and alter code as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic scaling:&lt;/strong&gt; A serverless database can be utilized with existing or new applications that have unpredictably high database demand; it automatically scales up when needed and down when not. You don’t need to spend time configuring an autoscaling policy for your database system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managing resource demands:&lt;/strong&gt; With a serverless database, you don’t have to worry about allocating enough resources to fulfill inconsistent resource requests. A serverless database will expand automatically when needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Examples of serverless databases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/rds/aurora/serverless/"&gt;Amazon Aurora Serverless&lt;/a&gt; is a proprietary service from AWS that’s compatible with Postgres and MySQL, which means you can connect to your Aurora database as if you’re connecting to Postgres or MySQL. It is also AWS cloud optimized. Aurora storage automatically grows in increments of 10 GB, up to 64 TB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/firestore"&gt;Google Firestore&lt;/a&gt; is a serverless document database that provides direct database access for web, IoT, and mobile app development. It’s highly scalable with no maintenance window and zero downtime. Firestore enables offline data access for web and mobile SDKs, enables ACID-compliant transactions, supports multiple server-side development libraries and programming languages, enables data validation and identity-based security access controls, and offers real-time data synchronization with offline data access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/dynamodb/"&gt;Amazon DynamoDB&lt;/a&gt; is a NoSQL database service with single-digit millisecond response times. AWS manages everything, allowing you to store as much data as you need while also handling unpredictable demands. It's also a fully managed NoSQL database service with built-in scalability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.cockroachlabs.com/lp/serverless-22/"&gt;CockroachDB&lt;/a&gt;, an SQL relational serverless database, is considered one of the most evolved serverless databases. It offers a completely elastic and robust data architecture, distributed globally to help developers rapidly develop apps at a low cost. It is a single Postgres instance in many aspects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://fauna.com/serverless"&gt;Fauna&lt;/a&gt; is a versatile transactional database that’s supplied as a secure, scalable cloud API with native GraphQL. Fauna blends the flexibility of NoSQL systems with SQL databases’ relational querying and transactional capabilities. It supports drivers for languages including Python, Java, Scala, and GraphQL, and promises full ACID compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How does a serverless database work?
&lt;/h2&gt;

&lt;p&gt;For an example of a serverless database, look at Fauna. It offers on-demand autoscaling, which means that the database starts up, grows capacity based on the demand of your application, and shuts down when not in use.&lt;/p&gt;

&lt;p&gt;Fauna is a multi-model database with a data API for client-serverless applications. Its semi-structured data model supports relational, document, object-oriented, and graph data. &lt;/p&gt;

&lt;p&gt;Fauna is also a NoSQL database, which means you can’t use SQL to access it. Instead, your primary interface is the Fauna Query Language (FQL). FQL is not a general-purpose programming language, but it allows for advanced data processing and retrieval from Fauna. The language is expression-oriented: all functions, control structures, and literals return values. This makes it simple to combine results into an array or object, or to map over a collection and calculate a result for each of its members.&lt;/p&gt;

&lt;p&gt;You run a query by sending it to a Fauna cluster, which computes and provides the results. Query execution is transactional, which means that if anything goes wrong, no changes are committed. If a query fails, you get an error rather than a result.&lt;/p&gt;
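&lt;p&gt;This all-or-nothing behavior can be illustrated with a toy in-memory model (not Fauna’s actual engine): operations run against a working copy of the data, and changes are committed only if every operation succeeds:&lt;/p&gt;

```python
import copy

def run_transaction(db, operations):
    """Apply `operations` (callables that mutate a dict) atomically to `db`.
    If any operation raises, the exception propagates and `db` is untouched."""
    working = copy.deepcopy(db)
    for op in operations:
        op(working)        # any failure raises here, before the commit below
    db.clear()
    db.update(working)     # success: all changes become visible at once
```

&lt;p&gt;Either every operation’s effect lands in the database, or none does, which mirrors the error-instead-of-result behavior of a failed Fauna query.&lt;/p&gt;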

&lt;p&gt;Fauna’s serverless database design is multi-tenant, unlike the single-tenant design of a traditional database. In a single-tenant architecture, you pay for the server’s entire storage and processing capacity even if you only utilize a tiny piece of it. In a multi-tenant architecture, you share that server with other users and only pay for the database storage you use, which helps to decrease costs. Many database systems enable multi-tenancy, but Fauna goes a step further by allowing every database to have numerous child databases. This means you can administer a single large Fauna cluster, build some top-level databases, and grant teams complete administrative access to those databases. You can establish as many databases as you need without requiring an operator’s assistance. &lt;/p&gt;

&lt;p&gt;Data protection and security safeguards are a crucial aspect of serverless databases. In Fauna, security is implemented at the API level with access keys to authenticate connections. This access key mechanism applies to both administrator and server-level connections, as well as to object and user-level connections.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the benefits of serverless databases?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost-effectiveness and high scalability:&lt;/strong&gt; You will save time and money by using a serverless database, which eliminates the need for a license, installation, cabling, configuration, maintenance, and support. Compute scaling does not need to be configured manually by operators or developers. Furthermore, you can allocate resources to build apps in the most efficient way possible based on your needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better resilience and availability:&lt;/strong&gt; A serverless database is replicated across multiple regions, so if a storage node fails, load balancers divert all queries to other nodes until the node recovers, making the system resilient and highly available. In general, a serverless system is more adaptable than a traditional one. Serverless databases also reduce latency, since event-driven functions read data from the replica closest to the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increased productivity and better developer experience (DX):&lt;/strong&gt; Developers don't have to worry about provisioning, configuring, or managing database infrastructure while using a serverless database. Developers just need to concentrate on creating applications, which increases productivity and provides a better DX.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Serverless databases can be a real asset to developer teams. They can increase your compute speed and resilience while decreasing the amount of time and money you spend on resources and scaling. Implementing a serverless database can vastly improve the DX at your organization.&lt;/p&gt;

&lt;p&gt;If you’re looking at options for a serverless database, consider &lt;a href="https://fauna.com/"&gt;Fauna&lt;/a&gt;. The transactional database uses a cloud API to provide simple, intuitive access to your data. It supports real-time streaming and GraphQL, and you can scale it globally as needed. To see how Fauna can help you, &lt;a href="https://dashboard.fauna.com/accounts/register"&gt;sign up for a free account&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Segun Saka-Aiyedun. Segun is a cloud architect, DevOps enthusiast, and Manchester United fan.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>database</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
