<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hasura</title>
    <description>The latest articles on Forem by Hasura (@hasurahq_staff).</description>
    <link>https://forem.com/hasurahq_staff</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F148148%2Fe46f8c9d-4810-49e5-b085-900bc07527e0.png</url>
      <title>Forem: Hasura</title>
      <link>https://forem.com/hasurahq_staff</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hasurahq_staff"/>
    <language>en</language>
    <item>
      <title>Scaling frontend app teams using Relay</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Mon, 04 Sep 2023 06:13:00 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/scaling-frontend-app-teams-using-relay-3gf</link>
      <guid>https://forem.com/hasurahq_staff/scaling-frontend-app-teams-using-relay-3gf</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L6WrQUET--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L6WrQUET--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-1.png" alt="Scaling frontend app teams using Relay" width="800" height="295"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The UI is decomposed into multiple React components.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sHMX8SBw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/relay-og.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sHMX8SBw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/relay-og.png" alt="Scaling frontend app teams using Relay" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The basic idea behind scaling the frontend, much like scaling any other part of the stack, is factoring it into multiple components that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can be owned by independent, specialised teams&lt;/li&gt;
&lt;li&gt;Are loosely coupled to one another&lt;/li&gt;
&lt;li&gt;Have clearly specified interfaces between them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's look at how this strategy plays out, and how to use modern technologies and ideas like Relay and backend-for-frontend (BFF) to scale out frontend applications and teams.&lt;/p&gt;

&lt;h2&gt;Independent isolated teams own different components&lt;/h2&gt;

&lt;p&gt;This is a useful baseline from which to iterate. It represents the naive scenario in which teams independently develop different components in a larger app but in isolation from each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C1tetON_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C1tetON_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-2.png" alt="Scaling frontend app teams using Relay" width="800" height="423"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Independently developed components with independent data fetching. The circles are different teams.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Problem: Independent data fetching&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Poor performance and UX jank&lt;/li&gt;
&lt;li&gt;A component may make multiple network requests&lt;/li&gt;
&lt;li&gt;A component's data request may depend on ancestor data, leading to waterfall-style requests&lt;/li&gt;
&lt;li&gt;Components may fetch redundant data&lt;/li&gt;
&lt;/ul&gt;
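&lt;p&gt;A minimal Python sketch (resource names invented for illustration) makes the waterfall problem concrete: dependent fetches pay one network round trip per level of the tree, while a batched fetch pays only one in total.&lt;/p&gt;

```python
import asyncio
import time

LATENCY = 0.05  # pretend per-request network latency, in seconds

async def fetch(resource: str) -> dict:
    # Simulate one network request to the backend.
    await asyncio.sleep(LATENCY)
    return {"resource": resource}

async def waterfall() -> float:
    # Each component waits for its ancestor's data before fetching its own.
    start = time.monotonic()
    user = await fetch("user")          # root component
    posts = await fetch("posts")        # child: needs the user id first
    comments = await fetch("comments")  # grandchild: needs the post ids first
    return time.monotonic() - start     # roughly 3 x LATENCY

async def batched() -> float:
    # All data requirements are known up front, so requests run in parallel.
    start = time.monotonic()
    await asyncio.gather(fetch("user"), fetch("posts"), fetch("comments"))
    return time.monotonic() - start     # roughly 1 x LATENCY
```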

&lt;h3&gt;Problem: Independent state management&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Data and UI inconsistency&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Batching of network requests&lt;/h2&gt;

&lt;p&gt;Teams can manually coordinate data fetching:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Batch all data requests at the root level&lt;/li&gt;
&lt;li&gt;Distribute data through the component tree via props&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--njQX6FG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--njQX6FG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-3.png" alt="Scaling frontend app teams using Relay" width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Teams manually batch data requests through a shared root level query.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Problem: Coupling at root query&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Adding a query: can duplicate other similar queries, and fetch redundant data&lt;/li&gt;
&lt;li&gt;Removing or editing a query:
&lt;ul&gt;
&lt;li&gt;Can break another component that (implicitly) depends on the same data&lt;/li&gt;
&lt;li&gt;Not removing unused data leads to over-fetch and cruft&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Problem: Poor developer ergonomics&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The data requirements for a component are no longer colocated with the component itself, which breaks encapsulation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;tRPC / React Query is not a complete solution&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Batches parallel network queries into single network requests&lt;/li&gt;
&lt;li&gt;Cannot batch all queries needed for a page&lt;/li&gt;
&lt;li&gt;Cannot solve waterfall requests&lt;/li&gt;
&lt;li&gt;Queries are batched over the network layer, but still execute against the data layer as independent queries, i.e. batching cannot leverage the internal structure or relations of the queries&lt;/li&gt;
&lt;/ul&gt;
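&lt;p&gt;A conceptual sketch of that last limitation (this is not tRPC's actual code; the procedure names and data are invented): even when many calls travel in one HTTP request, the server still resolves each call as an isolated data-layer query, so duplicates cannot be deduplicated and related queries cannot be joined.&lt;/p&gt;

```python
# Conceptual sketch: HTTP-level batching collapses many procedure calls into
# one network request, but each call still hits the data layer independently.
DB_HITS = 0

DATABASE = {
    "user:1": {"id": 1, "name": "Ada"},
    "posts_by_user:1": [{"id": 10, "author_id": 1}],
}

def resolve(procedure: str):
    # Every batched call runs as its own, isolated data-layer query.
    global DB_HITS
    DB_HITS += 1
    return DATABASE[procedure]

def handle_batch(procedures):
    # One request in, N independent data-layer queries out.
    return [resolve(p) for p in procedures]

# Two components independently ask for the same user: the batch cannot
# deduplicate them, nor join "user" and "posts_by_user" into one query.
results = handle_batch(["user:1", "posts_by_user:1", "user:1"])
```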

&lt;h2&gt;Coordinate data access through a centralized cache store&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YTiFdiSz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YTiFdiSz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-4.png" alt="Scaling frontend app teams using Relay" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Teams manually coordinate shared state through a central store.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduces coupling between teams, with issues similar to those of batching queries at the root&lt;/li&gt;
&lt;li&gt;Lots of boilerplate to normalize data, update stores, and plumb data to components&lt;/li&gt;
&lt;/ul&gt;
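&lt;p&gt;A taste of that boilerplate (a hand-rolled sketch; the store layout and entity names are invented): every nested response has to be flattened into the shared store by id, and each team has to write and maintain plumbing like this.&lt;/p&gt;

```python
# A hand-rolled normalized store: nested API responses are flattened into a
# flat dict keyed by "type:id", with nested objects replaced by references.
store = {}

def normalize(entity_type: str, obj: dict) -> str:
    key = f"{entity_type}:{obj['id']}"
    flat = {}
    for field, value in obj.items():
        if isinstance(value, dict) and "id" in value:
            flat[field] = normalize(field, value)  # store a reference, not a copy
        else:
            flat[field] = value
    store[key] = {**store.get(key, {}), **flat}    # merge with existing data
    return key

normalize("user", {"id": 1, "name": "Ada", "team": {"id": 7, "name": "Core"}})
```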

&lt;h2&gt;Backend for frontend&lt;/h2&gt;

&lt;p&gt;BFF is useful for making lighter and more performant client applications by moving compute and data transformations to the server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LrgKNynw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LrgKNynw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-5.png" alt="Scaling frontend app teams using Relay" width="800" height="418"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Backend for frontend: Moves compute and data transformations to the server. Collectively owned by all frontend teams.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Owned by the client app team&lt;/li&gt;
&lt;li&gt;Doesn't solve any of the coupling problems, just moves them around&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;GraphQL&lt;/h2&gt;

&lt;p&gt;Batched queries organized around client pages, rather than around server functionality, couple the frontend and backend teams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---48FxxCt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---48FxxCt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-6.png" alt="Scaling frontend app teams using Relay" width="800" height="477"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Backend developers build APIs organized around Frontend pages/routes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Instead, a GraphQL API exposes all features of the backend at a single endpoint.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client app can craft queries that fetch exactly the data needed in one shot&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DYh6bAVD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DYh6bAVD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-7.png" alt="Scaling frontend app teams using Relay" width="800" height="316"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Backend GraphQL API organized around server capabilities allows for flexible frontend queries.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Even better, the query structure begins to mirror the component tree!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This means we can refactor a root level query into fragments&lt;/li&gt;
&lt;/ul&gt;
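&lt;p&gt;For example (against a hypothetical schema), a root query for a post page can be refactored into one fragment per component:&lt;/p&gt;

```graphql
# Hypothetical schema: each component contributes its own fragment.
query PostPageQuery {
  post(id: "1") {
    ...PostHeader_post
    ...CommentList_post
  }
}

fragment PostHeader_post on Post {
  title
  author {
    name
  }
}

fragment CommentList_post on Post {
  comments {
    body
  }
}
```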

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--80NVxrpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/image-7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--80NVxrpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/image-7.png" alt="Scaling frontend app teams using Relay" width="800" height="222"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Queries decompose into fragments.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Putting it all together with Relay&lt;/h2&gt;

&lt;p&gt;In Relay, every component defines its own data requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qgJDRq6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/image-9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qgJDRq6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/image-9.png" alt="Scaling frontend app teams using Relay" width="800" height="340"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Data dependencies colocated with components. Fragment structure mirrors the component tree.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Significantly, a component can only access the data it has explicitly requested. This is "data masking" and is enforced through the &lt;code&gt;useFragment&lt;/code&gt; hook.&lt;/li&gt;
&lt;li&gt;This means that teams can modify individual components with confidence, knowing that nothing will break because there are no implicit data dependencies&lt;/li&gt;
&lt;/ul&gt;
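&lt;p&gt;Conceptually, data masking behaves like the following sketch (this models the idea only, not Relay's implementation; the field names are invented): a component can read exactly the fields its own fragment declared, and nothing else.&lt;/p&gt;

```python
# Conceptual model of data masking: reads are limited to declared fields.
class MaskedRecord:
    def __init__(self, data: dict, declared_fields: set):
        self._data = data
        self._declared = declared_fields

    def __getitem__(self, field: str):
        if field not in self._declared:
            raise KeyError(f"field {field!r} was not declared in this fragment")
        return self._data[field]

post = {"title": "Hello", "body": "...", "viewCount": 42}

# The PostHeader component declared only `title` in its fragment, so it
# cannot silently grow a dependency on `viewCount`.
header_data = MaskedRecord(post, {"title"})
title = header_data["title"]
```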

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4gvYAs34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-8-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4gvYAs34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/scaling-fe-teams-8-1.png" alt="Scaling frontend app teams using Relay" width="800" height="344"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Independently declared data dependencies are compiled into a single root level query by the Relay compiler.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At build time, the Relay compiler builds an optimized set of top-level queries from all the fragments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The compiler can check for common errors, and run optimizations such as deduplication across the whole codebase&lt;/li&gt;
&lt;li&gt;You get the developer ergonomics of independently developed components, but with the efficiency of globally optimized and batched data fetching&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b2yUmc5D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/relay-can-use-types-and-relations-to-build-a-cache-store.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b2yUmc5D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/relay-can-use-types-and-relations-to-build-a-cache-store.png" alt="Scaling frontend app teams using Relay" width="800" height="391"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Relay can leverage the rich information present in the GraphQL schema. Incrementally adopt the global node id and connection spec.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Relay uses the rich type information in the GraphQL schema, and automatically builds a local cache of data from all queries.&lt;/p&gt;

&lt;p&gt;Further, by incrementally adopting features such as the node global id spec and the connection spec, you get advanced features such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reloading only a portion of a query via fragments&lt;/li&gt;
&lt;li&gt;Cursor based pagination&lt;/li&gt;
&lt;/ul&gt;
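&lt;p&gt;A connection-spec query looks like this (against a hypothetical schema): cursor-based pagination comes from the &lt;code&gt;first&lt;/code&gt;/&lt;code&gt;after&lt;/code&gt; arguments plus &lt;code&gt;pageInfo&lt;/code&gt;:&lt;/p&gt;

```graphql
# Cursor-based pagination per the Relay connection spec (hypothetical schema).
query CommentsQuery($count: Int!, $cursor: String) {
  post(id: "1") {
    comments(first: $count, after: $cursor) {
      edges {
        node {
          id
          body
        }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}
```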

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The JavaScript ecosystem has produced a variety of solutions for frontend development, data fetching, and state management, but these solutions often fall short for ambitious projects.&lt;/p&gt;

&lt;p&gt;GraphQL, Relay, and React were built to work together and have the huge benefit of being driven by Meta's (formerly Facebook) experience building and maintaining extremely large and complex applications developed by many teams.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;if you’re not composing GraphQL fragments from multiple components into one query (as Relay does), i think you’re missing 80% of the point of GraphQL.  &lt;/p&gt;

&lt;p&gt;which is ok but isn’t talked about enough &lt;a href="https://t.co/ooohCDV6ZO"&gt;https://t.co/ooohCDV6ZO&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— danabramov.bsky.social (&lt;a class="mentioned-user" href="https://dev.to/dan_abramov"&gt;@dan_abramov&lt;/a&gt;) &lt;a href="https://twitter.com/dan_abramov/status/1634959944733835265?ref_src=twsrc%5Etfw"&gt;March 12, 2023&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;It seems that GraphQL and React have taken over the world, but people are often put off by the new conventions espoused by the Relay library.&lt;/p&gt;

&lt;p&gt;The good news is that Relay can be incrementally adopted.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client library will work with any existing GraphQL API&lt;/li&gt;
&lt;li&gt;Adopting bits of the Relay spec unlocks additional features&lt;/li&gt;
&lt;li&gt;Relay can be adopted selectively by some components and not others, for an easy migration path&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even better, the teams that have adopted Relay seem to have been happy with it for quite a while.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How Coinbase is scaling their app with Relay&lt;br&gt;&lt;br&gt;
"Relay is unique among GraphQL client libraries in how it allows an application to scale to more contributors while remaining malleable and performant."&lt;br&gt;&lt;br&gt;
&lt;a href="https://t.co/D45yEid8l0"&gt;https://t.co/D45yEid8l0&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Relay (@RelayFramework) &lt;a href="https://twitter.com/RelayFramework/status/1522643637456150528?ref_src=twsrc%5Etfw"&gt;May 6, 2022&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;p&gt;Join us for a live Q&amp;amp;A session with Tanmai Gopal on the &lt;a href="https://twitter.com/GraphQL?ref_src=twsrc%5Etfw"&gt;@GraphQL&lt;/a&gt; Discord. 🎙️ Discover how to scale UI development with Relay &lt;a href="https://twitter.com/hashtag/GraphQL?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#GraphQL&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;⏰ July 12, 11AM PT&lt;br&gt;&lt;br&gt;
Join GraphQl Discord server ➡️ &lt;a href="https://t.co/MM3HN7DpbW"&gt;https://t.co/MM3HN7DpbW&lt;/a&gt; &lt;a href="https://t.co/hhKYYcnbKc"&gt;pic.twitter.com/hhKYYcnbKc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Hasura (@HasuraHQ) &lt;a href="https://twitter.com/HasuraHQ/status/1677362161503412242?ref_src=twsrc%5Etfw"&gt;July 7, 2023&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;p&gt;It still surprises me how Twitter embraces GraphQL / Relay (for RWeb) but keeping their Timeline model instead of adapting the Relay connection spec &lt;a href="https://t.co/skOcPvrmMv"&gt;https://t.co/skOcPvrmMv&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Jane Manchun Wong (&lt;a class="mentioned-user" href="https://dev.to/wongmjane"&gt;@wongmjane&lt;/a&gt;) &lt;a href="https://twitter.com/wongmjane/status/1593905033522733056?ref_src=twsrc%5Etfw"&gt;November 19, 2022&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;a href="https://twitter.com/hashtag/RelayJs?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#RelayJs&lt;/a&gt; feature I appreciate - Relay is optimized for performance by default. It “forces” you to break down the data requirements of your UI in small, reusable parts, like React does with components. It'll then subscribe to changes of only the data each component asks for 1/x&lt;/p&gt;

&lt;p&gt;— Gabriel Nordeborn (&lt;a class="mentioned-user" href="https://dev.to/___zth___"&gt;@___zth___&lt;/a&gt;) &lt;a href="https://twitter.com/%20___zth___%20/status/1510699572175450113?ref_src=twsrc%5Etfw"&gt;April 3, 2022&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;Use Hasura to easily generate a Relay compatible API with no code, across multiple data stores, with cross data store joins, filtering, and aggregations, and declaratively defined permissions.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;&lt;a href="https://cloud.hasura.io/signup"&gt;Sign up&lt;/a&gt; now for Hasura Cloud to get started!&lt;/strong&gt;&lt;/h3&gt;

</description>
      <category>relay</category>
      <category>graphql</category>
      <category>bff</category>
      <category>react</category>
    </item>
    <item>
      <title>Announcing Hasura Notebook: Prototype fast on your GenAI apps</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Thu, 31 Aug 2023 15:24:53 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/announcing-hasura-notebook-prototype-fast-on-your-genai-apps-4kf2</link>
      <guid>https://forem.com/hasurahq_staff/announcing-hasura-notebook-prototype-fast-on-your-genai-apps-4kf2</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IdDCgqMJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/notebook-og-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IdDCgqMJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/notebook-og-1.png" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hasura empowers you to rapidly create high-quality data APIs&lt;/strong&gt; for production purposes. As data sources continue to evolve, Hasura evolves as well, providing you with the capability to construct secure data APIs over versatile data stores, including vectorized data.&lt;/p&gt;

&lt;p&gt;If you are new to Hasura, we recommend &lt;a href="https://hasura.io/docs/latest/getting-started/overview/"&gt;getting started for free with Hasura Cloud&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;As part of Hasura’s evolution, today we’re introducing Hasura Notebook, a tool designed to facilitate swift prototyping of cutting-edge GenAI applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g9yjH9QZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/notebook-og.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g9yjH9QZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/notebook-og.png" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we’ll take you through this new tool to gain a comprehensive understanding of how it works and why you need it.🙂&lt;/p&gt;

&lt;h2&gt;GenAI revolution&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) are very large deep neural networks trained on vast amounts of text data mostly scraped from the internet.&lt;/p&gt;

&lt;p&gt;LLMs have taken the world by storm because these generalized language models are so good at understanding context, retrieving information, and generating content that can help us with numerous applications. And why not? Automation is the key to efficient applications.&lt;/p&gt;

&lt;p&gt;At its core, an LLM learns word probabilities to predict the most suitable next word in a sentence. If the sentence comes from a domain the LLM wasn’t exposed to during training, the model makes low-confidence predictions, known as hallucinations. We can deal with this problem by providing the LLM context in prompts and instructing it to use that context to complete the task. This process is called grounding.&lt;/p&gt;
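&lt;p&gt;Grounding is, at its simplest, prompt assembly. A minimal sketch (the prompt template and the example documents are invented for illustration):&lt;/p&gt;

```python
# Minimal grounding sketch: retrieved context is prepended to the prompt so
# the model answers from supplied facts instead of guessing.
def build_grounded_prompt(question: str, context_docs) -> str:
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

prompt = build_grounded_prompt(
    "What does the X200 weigh?",
    ["The X200 camera weighs 410 g.", "The X200 ships with a 50 mm lens."],
)
```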

&lt;p&gt;In order to maximize the potential of LLMs across a wide range of applications, it's essential to expand the scope of secure data sources. Given the fast-paced environment, it's equally important to avoid dedicating excessive time to constructing data APIs (a crucial yet somewhat mundane task 🤷‍♀️) when you could be focused on creating exciting and lucrative applications.&lt;/p&gt;

&lt;p&gt;Hasura to the rescue!&lt;/p&gt;

&lt;p&gt;Let’s learn how you can quickly build secure data APIs and prototypes with a product search demo.&lt;/p&gt;

&lt;h2&gt;What is Hasura Notebook?&lt;/h2&gt;

&lt;p&gt;Hasura Notebook is a remote Jupyter Notebook with ready-to-run examples and Jupyter Kernel Gateway baked-in to turn your code cells into endpoints.&lt;/p&gt;

&lt;p&gt;Hasura Notebook is great if you want to quickly prototype a GenAI application with Hasura or learn GenAI from our existing templated projects.&lt;/p&gt;

&lt;p&gt;We will be building on Hasura Notebook in this blog. Recommended reading before we proceed: &lt;a href="https://hasura.io/docs/latest/jupyter-notebooks/"&gt;Hasura Jupyter Python Notebook &amp;amp; API Server documentation&lt;/a&gt;.&lt;/p&gt;
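&lt;p&gt;For orientation, Jupyter Kernel Gateway's HTTP mode turns an annotated notebook cell into an endpoint: the route lives in a comment, and the incoming request arrives as a JSON string in a &lt;code&gt;REQUEST&lt;/code&gt; variable. The route and field names below are invented for illustration.&lt;/p&gt;

```python
import json

# GET /product_search
# In a live notebook the gateway injects REQUEST; we stub it here so the
# sketch is self-contained.
REQUEST = json.dumps({"args": {"query": ["red running shoes"]}})

req = json.loads(REQUEST)
query = req["args"]["query"][0]
print(json.dumps({"query": query, "results": []}))
```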

&lt;h2&gt;Use case: Get a contextual product search powered by OpenAI in under 10 minutes 🚀&lt;/h2&gt;

&lt;p&gt;As you might have experienced yourself, searches on most e-commerce websites are keyword-oriented. This results in a lot of false positives or irrelevant results and can lead to dissatisfied users. Contextual search is an answer to this problem: it matches the user query against product descriptions by meaning rather than by exact keywords.&lt;/p&gt;
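&lt;p&gt;The idea in miniature (the tiny three-dimensional "embeddings" below are hand-made stand-ins for a real embedding model's output): the query and the product descriptions are compared as vectors, so a match needs no shared keywords.&lt;/p&gt;

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy product "embeddings" (stand-ins for a real embedding model).
products = {
    "trail running shoes": [0.9, 0.1, 0.0],
    "espresso machine": [0.0, 0.2, 0.9],
}

# Embedding for the query "footwear for jogging": no keyword overlap with
# "trail running shoes", but the vectors point the same way.
query_vec = [0.8, 0.2, 0.1]

best = max(products, key=lambda name: cosine(query_vec, products[name]))
```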

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Hasura CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If this is your first time using Hasura CLI, you will need to install it. Follow the installation instructions &lt;a href="https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/"&gt;in this doc&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Hasura Notebook&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the steps in the documentation below to instantiate your Hasura Notebook.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FNuoO_Kv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/hasura-notebook-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FNuoO_Kv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/hasura-notebook-architecture.png" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="1105"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Architecture of Hasura + Jupyter Notebook + VectorDB&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You will know you are ready when you are able to access the &lt;a href="https://hasura.io/docs/latest/jupyter-notebooks/"&gt;Hasura Notebook&lt;/a&gt; with a landing page that looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5iNePShS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/hkxH2LThGp34LxaNGhI2YBluzlbIdlnS-sFvD1Xq3HbdFKy7U-DFfXiJzW7pK0wp_EOSQGCPk6CwEiNJ6p_VKi4zx4t1OQdCU-Ngs9Cmjq8WV474bU49LZxVwk0XbA3TaiS_r2G8khPJNxv9jPRgWrE" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5iNePShS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/hkxH2LThGp34LxaNGhI2YBluzlbIdlnS-sFvD1Xq3HbdFKy7U-DFfXiJzW7pK0wp_EOSQGCPk6CwEiNJ6p_VKi4zx4t1OQdCU-Ngs9Cmjq8WV474bU49LZxVwk0XbA3TaiS_r2G8khPJNxv9jPRgWrE" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find all the code under &lt;code&gt;https://&amp;lt;connector url&amp;gt;/jupyter/notebook/product_search&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s get building! 🚀&lt;/p&gt;
&lt;h2&gt;Set up your PostgreSQL database&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Start a new Neon Cloud PostgreSQL DB or link your existing DB.&lt;/p&gt;

&lt;p&gt;More details on Neon Cloud PostgreSQL DB here: &lt;a href="https://hasura.io/docs/latest/databases/postgres/neon/"&gt;https://hasura.io/docs/latest/databases/postgres/neon/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Create a new table called &lt;code&gt;base_products&lt;/code&gt; and track it from the data tab.&lt;br&gt;&lt;br&gt;
Tracking makes the table accessible through the GraphQL API.&lt;/p&gt;
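&lt;p&gt;Once tracked, Hasura auto-generates query fields for the table. For instance (the column names here are assumptions for illustration):&lt;/p&gt;

```graphql
# Querying the tracked table (column names are assumptions).
query {
  base_products {
    id
    name
    description
  }
}
```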

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RfcoGqip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/XNYDiJUZxRIFbC7AYmI6EbSK8WQypFYh1NYRjzIGf2sagdDsuFu5toByV0FtB-D8d1g0Tp2wJuvlNWvWOgIUSW9OFYkp__WyWpLWLuTZeCBTRF2UlJEWPYhF99_Uv7LtFWRv8lwbf-GRYruAVaTjBrI" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RfcoGqip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/XNYDiJUZxRIFbC7AYmI6EbSK8WQypFYh1NYRjzIGf2sagdDsuFu5toByV0FtB-D8d1g0Tp2wJuvlNWvWOgIUSW9OFYkp__WyWpLWLuTZeCBTRF2UlJEWPYhF99_Uv7LtFWRv8lwbf-GRYruAVaTjBrI" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Set up your vector database&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Create a free 14-day cluster on Weaviate.&lt;/p&gt;

&lt;p&gt;Head to &lt;a href="https://console.weaviate.cloud/"&gt;https://console.weaviate.cloud/&lt;/a&gt; and register for an account. After confirming via email, click + Create cluster and fill in a name before clicking Create. Once Weaviate has provisioned your sandbox cluster, proceed to the next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dNocURYT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/Od-Xl1LrXjq0HsHJyu8m1j0z7ksyLZ8FsRZcSuf-qjUZwWRNuBPGC4wiH2c1RMe14wFakxRGRC6EjsbjWqCjsQXGPObL-VKFqgCR4ZQyRbn08aXEC95c9P65eFHjmlB32aUe4dWiOSIt08S-2uamcDA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dNocURYT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/Od-Xl1LrXjq0HsHJyu8m1j0z7ksyLZ8FsRZcSuf-qjUZwWRNuBPGC4wiH2c1RMe14wFakxRGRC6EjsbjWqCjsQXGPObL-VKFqgCR4ZQyRbn08aXEC95c9P65eFHjmlB32aUe4dWiOSIt08S-2uamcDA" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Create the Product schema called &lt;code&gt;Product_vectors&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The Product schema consists of ID, Product Name, and Product Description. To create it, run &lt;code&gt;setup_weaviate.ipynb&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Add the Weaviate table to Hasura.&lt;/p&gt;

&lt;p&gt;Currently, the Weaviate connector is not natively available in Hasura. You can add it to Hasura in two simple steps using the Hasura CLI.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy the Weaviate connector using the Hasura CLI.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create connector using our weaviate repo
hasura connector create my_weaviate_connector:v1 --github-repo-url https://github.com/hasura/weaviate_gdc/tree/main/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# check deployment status to get the endpoint
hasura connector status my_weaviate_connector:v1
# you can also use list command
hasura connector list
# view logs at any point of time
hasura connector logs my_weaviate_connector:v1
# for more commands explore help section

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Add the Weaviate connector to Hasura.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Refer to this document for details &lt;a href="https://hasura.io/docs/latest/databases/vector-databases/weaviate/"&gt;https://hasura.io/docs/latest/databases/vector-databases/weaviate/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Track the table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7rl7xnpa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/qEfFxoTlij4_WW16hLTrUqfjBHMzrEEgVyYuGvNt-L5A3Iu4YLc1ttBovPVFDMGNwy4BsOaMnBdnIi6rpTV_5JukgsrJ05kaXffYR0VwlUhCUPJr-HHFptEBIAFjQ1BqpIvhL1mRalbOnLsA3xZz0mg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7rl7xnpa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/qEfFxoTlij4_WW16hLTrUqfjBHMzrEEgVyYuGvNt-L5A3Iu4YLc1ttBovPVFDMGNwy4BsOaMnBdnIi6rpTV_5JukgsrJ05kaXffYR0VwlUhCUPJr-HHFptEBIAFjQ1BqpIvhL1mRalbOnLsA3xZz0mg" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Add a remote relationship between PostgreSQL and Weaviate tables&lt;/h2&gt;

&lt;p&gt;Go to the &lt;code&gt;Product_vectors&lt;/code&gt; table and add a relationship to the PostgreSQL database as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vRQny046--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/0q1HnJZDPKjbuL2j0PJdd1yS_P0gkL-9QdQzQhdRZKo7LNPDGOnY6udfM0bUewwks60vKwRIcDiXI9AU2pFevXg_oTdZbh6p_499D0OusLf6omqQpdHOjDmAgudiWckjFXOxCA9D3vyDktEnFeRpTSo" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vRQny046--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/0q1HnJZDPKjbuL2j0PJdd1yS_P0gkL-9QdQzQhdRZKo7LNPDGOnY6udfM0bUewwks60vKwRIcDiXI9AU2pFevXg_oTdZbh6p_499D0OusLf6omqQpdHOjDmAgudiWckjFXOxCA9D3vyDktEnFeRpTSo" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Set up an Event Trigger on a PostgreSQL table
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Fetch your Hasura GraphQL API endpoint and Admin secret.&lt;br&gt;&lt;br&gt;
Move over to &lt;code&gt;cloud.hasura.io&lt;/code&gt; and click &lt;code&gt;Projects&lt;/code&gt;. From your project, click on the &lt;code&gt;Settings&lt;/code&gt; icon to access Project details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--plIotkXB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/zcoADxSrY7nEQOlpp1rbsZ9LATxuAQvYUVcYWxqiIoiyfLGCYDTlJHA1lAoO8_XrBPvIk1goprLXpHAdLpFDycY6Ht89Gvxh8xZ0Q9VyTzCxXQDBloYGdc4vvjY6tTfgPBpMJCd6y6UunbDYRSUm2ZE" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--plIotkXB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/zcoADxSrY7nEQOlpp1rbsZ9LATxuAQvYUVcYWxqiIoiyfLGCYDTlJHA1lAoO8_XrBPvIk1goprLXpHAdLpFDycY6Ht89Gvxh8xZ0Q9VyTzCxXQDBloYGdc4vvjY6tTfgPBpMJCd6y6UunbDYRSUm2ZE" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="673"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FDGNN3wO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/Or4p8Hr_3-hl0kcU7OmRnKVGCs5uBrzz-Ow9I5HnzhqzOdApHnPksZHZK-iI4MIOjmsZ2EqfO-Ab6uvvFr85vifSVx4HLT2jAwuAMsZoQJnaxHPYrU0xkP5zva64hmaHcF75D_2mizBFe5gjkiZYdXA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FDGNN3wO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/Or4p8Hr_3-hl0kcU7OmRnKVGCs5uBrzz-Ow9I5HnzhqzOdApHnPksZHZK-iI4MIOjmsZ2EqfO-Ab6uvvFr85vifSVx4HLT2jAwuAMsZoQJnaxHPYrU0xkP5zva64hmaHcF75D_2mizBFe5gjkiZYdXA" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Update Hasura GraphQL details in the notebook under &lt;code&gt;Template for event trigger to ETL data&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;server.ipynb&lt;/code&gt; and you will see multiple cells commented with &lt;code&gt;# POST /handle_event&lt;/code&gt;. These are the functions served on the &lt;code&gt;/handle_event&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;Update the details in the cell, and you are ready to go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dooEykZW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/Dr0GsL01NJsJ3Ww9Ai5jeHt0tAzzn7fYrg5BVNYPWdVMXRT0VWGGwnxQYMTWb6rbsN17AN_56z9Ib_M9GHOsQpw76XkUc5j9K028tbTBoJKGJ1reFwY9_gioF0tD2onHN_6H7gaznt3sCDw-w1Q0cGQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dooEykZW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/Dr0GsL01NJsJ3Ww9Ai5jeHt0tAzzn7fYrg5BVNYPWdVMXRT0VWGGwnxQYMTWb6rbsN17AN_56z9Ib_M9GHOsQpw76XkUc5j9K028tbTBoJKGJ1reFwY9_gioF0tD2onHN_6H7gaznt3sCDw-w1Q0cGQ" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="600" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don’t forget to restart the Jupyter gateway server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D8lu0y-a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/WGPJvwoe5tB7iR-AnewW7lDj6nnBvCTMHkaYX82rn2Cnn7BJuQfTVDhnlG84nR60QU6QA6orgPhjFwepca4Vrz6DevJ9OWv1I1vCrI-s_Bsjr_52Y0s16aTRAyMl2aqehAhFMNVXo1Nef3pAb-ijYyU" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D8lu0y-a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/WGPJvwoe5tB7iR-AnewW7lDj6nnBvCTMHkaYX82rn2Cnn7BJuQfTVDhnlG84nR60QU6QA6orgPhjFwepca4Vrz6DevJ9OWv1I1vCrI-s_Bsjr_52Y0s16aTRAyMl2aqehAhFMNVXo1Nef3pAb-ijYyU" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Add the Event Trigger in Hasura.&lt;/p&gt;

&lt;p&gt;Head back to the Hasura Console and click the Events tab to create a new Event Trigger. Fill in the required details.&lt;/p&gt;

&lt;p&gt;Your webhook URL is your notebook URL (it ends with &lt;code&gt;.app&lt;/code&gt;) followed by &lt;code&gt;/invoke/handle_event&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can also fetch your webhook URL by running the CLI command &lt;code&gt;hasura notebook status&lt;/code&gt; again.&lt;/p&gt;
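&lt;p&gt;Putting the two together, the webhook URL has this shape (the notebook hostname below is made up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# notebook URL reported by `hasura notebook status` (example value)
https://your-notebook.app
# webhook URL to configure on the Event Trigger
https://your-notebook.app/invoke/handle_event
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;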

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2bBB0MDf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/AeHEoSl4EUI-cOlTJxz45GgwhL0oaSrdvcdeHcItxKScO9fi4B6CPZDmPoeZ7FU-qI1gEmifVssf5N5DVslQdnsDNx7ENjTtcmb2R0dCbjPjzd5e4AH4e6E9apN7iN0baTTj4iXo1zsfJbbO2Yw9Cio" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2bBB0MDf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/AeHEoSl4EUI-cOlTJxz45GgwhL0oaSrdvcdeHcItxKScO9fi4B6CPZDmPoeZ7FU-qI1gEmifVssf5N5DVslQdnsDNx7ENjTtcmb2R0dCbjPjzd5e4AH4e6E9apN7iN0baTTj4iXo1zsfJbbO2Yw9Cio" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the authorization header required to reach the endpoint from the Hasura notebook.&lt;/p&gt;

&lt;p&gt;Generate the base64-encoded value by executing the following in a terminal of your choice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo -n "&amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;" | base64

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
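&lt;p&gt;For example, with the made-up credentials &lt;code&gt;hasura&lt;/code&gt; and &lt;code&gt;secretpass&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo -n "hasura:secretpass" | base64
# aGFzdXJhOnNlY3JldHBhc3M=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;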



&lt;p&gt;You can fetch the username and password by executing the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hasura notebook status

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then set the header on the Event Trigger as follows:&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Key = Authorization
Value = Basic &amp;lt;base64 encoded value here&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
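&lt;p&gt;You can sanity-check the header before wiring up the trigger by calling the endpoint directly. The notebook hostname and the encoded value below are placeholders; a correct header should get you past authentication, even if the empty payload is rejected:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST https://your-notebook.app/invoke/handle_event \
  -H "Authorization: Basic &amp;lt;base64 encoded value here&amp;gt;" \
  -H "Content-Type: application/json" \
  -d '{}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;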



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--upls1sqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/2JDANht9egMaeIIE6wKOszt295vKCRTH7E1OXYAUZXx-ME3aDnlpwdzQvYzmRAanuMtMsifJbnLXykOBvMCwswz1VA11r5LNJwddS4uQb2ZMY7byKZcW8KTjg_brp2GcB0RK4R5yk0tph0AEjId_Acw" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--upls1sqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/2JDANht9egMaeIIE6wKOszt295vKCRTH7E1OXYAUZXx-ME3aDnlpwdzQvYzmRAanuMtMsifJbnLXykOBvMCwswz1VA11r5LNJwddS4uQb2ZMY7byKZcW8KTjg_brp2GcB0RK4R5yk0tph0AEjId_Acw" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Insert data into PostgreSQL and watch Vector DB update automatically
&lt;/h2&gt;

&lt;p&gt;Now that we have integrated Weaviate with Hasura, we can use a mutation in Hasura to insert data into our Postgres table &lt;em&gt;and&lt;/em&gt; auto-vectorize and update our vector DB! You can do this by executing &lt;code&gt;insert_data.ipynb&lt;/code&gt;.&lt;/p&gt;
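&lt;p&gt;The notebook runs a mutation along these lines. The table and column names here are illustrative, not the notebook’s exact code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mutation {
  insert_Product(objects: [{
    name: "Trail running shoes",
    description: "Lightweight shoes with aggressive grip"
  }]) {
    affected_rows
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;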

&lt;p&gt;Voila! We have vectors with product name and description fields.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secure semantic search using Hasura
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d57tm1bt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/r1sNNdB8tor43WVJXQSgfC5mD2hC6-DiCJgBDmM1fEXIlUWweHeMuMKl1BGM0DYwiS2ih9hix62IFD2z6v_fXQswr5cAU33aRcerzjTlEzeDr6zWU5_N1Ev9Pmx00H_MWmYfBAjr-C-A8SEEjgssydA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d57tm1bt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/r1sNNdB8tor43WVJXQSgfC5mD2hC6-DiCJgBDmM1fEXIlUWweHeMuMKl1BGM0DYwiS2ih9hix62IFD2z6v_fXQswr5cAU33aRcerzjTlEzeDr6zWU5_N1Ev9Pmx00H_MWmYfBAjr-C-A8SEEjgssydA" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more on building secure APIs with Hasura, refer to this overview: &lt;a href="https://hasura.io/docs/latest/auth/overview/"&gt;https://hasura.io/docs/latest/auth/overview/&lt;/a&gt;&lt;/p&gt;
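&lt;p&gt;With permissions in place, a query against the tracked vector table can pull relational fields through the remote relationship. For example (type and field names assumed from the setup above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
  Product_vectors {
    # remote relationship resolved from PostgreSQL
    product {
      name
      description
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;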

&lt;p&gt;Let us now go one step further, with a complex query powered by an LLM.&lt;/p&gt;

&lt;h2&gt;
  
  
  LLM-powered product search
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Update Hasura GraphQL and OpenAI details in the notebook under &lt;code&gt;Template for event trigger to ETL data&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;Update the details, and you are ready to go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vopMeOMY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/0l14DqkMegvmx9_ylBSsXowc26qVmrS6XBz5EDMwArU_cvcvrIjh1xQjoasdFYQLRiN-T-BJ-XvZCbGzxIw7Nwa6kdXz-uYy_SP8COIDg2SwvKmRpk9yUDETm1Q7Uqf8cmPejEOFN0zoxVqZ4X9UN6g" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vopMeOMY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/0l14DqkMegvmx9_ylBSsXowc26qVmrS6XBz5EDMwArU_cvcvrIjh1xQjoasdFYQLRiN-T-BJ-XvZCbGzxIw7Nwa6kdXz-uYy_SP8COIDg2SwvKmRpk9yUDETm1Q7Uqf8cmPejEOFN0zoxVqZ4X9UN6g" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="514" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After updating any of these values, it’s important to restart your notebook. You can do this from the gateway generated earlier using the CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Create LLM prompt&lt;/p&gt;

&lt;p&gt;This step is available as a template for you. Feel free to tweak the prompt and play with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Create Hasura Action&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UaoDTPW2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/Sw5Nj3QQ7lihaQwHdfqjsUQn5WfuAmC20fuWAlwK_tvMRscwASg6B6hxgBZNM0YxV-S2MywJVpONypAIjW37wZiSn86FKyB9bzxoj1oGRq4BxN9Bh2x-GsZW-X7c_J3geVNTEfIKudvexkwveyfoUpw" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UaoDTPW2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/Sw5Nj3QQ7lihaQwHdfqjsUQn5WfuAmC20fuWAlwK_tvMRscwASg6B6hxgBZNM0YxV-S2MywJVpONypAIjW37wZiSn86FKyB9bzxoj1oGRq4BxN9Bh2x-GsZW-X7c_J3geVNTEfIKudvexkwveyfoUpw" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like before, you will need to add Authorization headers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; What you just created is a secure API over your LLM query. Integrate the API with your app, or just play around in the Hasura Console. You now have the power of Hasura by your side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BnrYOVdg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/nps1HPaBz5uEhc-J2O53oVuIaBK3bnk36oETfeqDeiQ_1MF5B1XlOjJsyCB6bKbVC3GuAxu74rOgbIsG7YSq-jaTy9nEpRxPvzzHhNDxbAaupTprav6INWmDxhPWyM2z3Co_XdB_YHdDSTp3SLFzpdg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BnrYOVdg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/nps1HPaBz5uEhc-J2O53oVuIaBK3bnk36oETfeqDeiQ_1MF5B1XlOjJsyCB6bKbVC3GuAxu74rOgbIsG7YSq-jaTy9nEpRxPvzzHhNDxbAaupTprav6INWmDxhPWyM2z3Co_XdB_YHdDSTp3SLFzpdg" alt="Announcing Hasura Notebook: Prototype fast on your GenAI apps" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;
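&lt;p&gt;Calling the Action from GraphQL looks something like this. The action and field names are hypothetical; use whatever you defined in the previous step:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
  product_search(query: "waterproof shoes for hiking") {
    name
    description
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;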

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By seamlessly deploying a Jupyter Notebook through the Hasura CLI, you've unlocked a world of possibilities. The ability to automatically vectorize relational data has not only accelerated your data manipulation processes but has also paved the way for more efficient and effective analyses.   &lt;/p&gt;

&lt;p&gt;Moreover, the integration of robust security measures into your LLM queries ensures that your data remains protected at all times. This remarkable journey, powered by Hasura, empowers you to harness the true potential of your data, all while simplifying the intricate processes involved!&lt;/p&gt;

&lt;p&gt;With Hasura by your side, what will &lt;strong&gt;you&lt;/strong&gt; build?&lt;/p&gt;

</description>
      <category>notebook</category>
      <category>generativeai</category>
      <category>largelanguagemodels</category>
      <category>ai</category>
    </item>
    <item>
      <title>Instant APIs on Snowflake using UDFs and Hasura</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Wed, 09 Aug 2023 13:32:08 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/instant-apis-on-snowflake-using-udfs-and-hasura-52fl</link>
      <guid>https://forem.com/hasurahq_staff/instant-apis-on-snowflake-using-udfs-and-hasura-52fl</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eFkwIwN4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/JvxZM-0b_Y4fBPn5YLtiSgSl9wLdSzU7OFaF8QbmJvTn8eGtgnbivfTviUuRj7xTeTJfwsLSHOemSsOxni_V01v-dVs_D2h_37EI8xzNd446Y5szRHuvMZrmxTMe1fAyp6K7wwnUpL5eGCkeBbvSItg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eFkwIwN4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/JvxZM-0b_Y4fBPn5YLtiSgSl9wLdSzU7OFaF8QbmJvTn8eGtgnbivfTviUuRj7xTeTJfwsLSHOemSsOxni_V01v-dVs_D2h_37EI8xzNd446Y5szRHuvMZrmxTMe1fAyp6K7wwnUpL5eGCkeBbvSItg" alt="Instant APIs on Snowflake using UDFs and Hasura" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Hasura Snowflake connector became&lt;/strong&gt; &lt;a href="https://hasura.io/blog/announcing-hasura-integration-snowflake-empowering-developers-effortless-data-apis/"&gt;generally available in June&lt;/a&gt;. Like all other Hasura connectors, the value of the Snowflake connector is the ability to generate APIs (GraphQL and REST) on Snowflake data with minimal coding.&lt;/p&gt;

&lt;p&gt;I wanted to put it to the test by recreating &lt;a href="https://medium.com/snowflake/build-a-data-api-for-your-snowflake-data-b2c82ab4bbf5"&gt;this blog&lt;/a&gt; from Snowflake’s team with Hasura.&lt;/p&gt;

&lt;p&gt;The original post showed how to create an analytics app using &lt;a href="https://app.snowflake.com/marketplace/listing/GZ1M7Z2MQ39"&gt;OAG: Global Airline Schedules&lt;/a&gt; data. The architecture uses AWS Lambda functions to read from Snowflake and return the data as RESTful APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hOdaH0lt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/YtOgmfccuqJLV0YnCPDh-tUSFdfG9HWC03iq7RuQG5AcfPE0CVzBIKxhHYYtaVAH4vsktBHVRvim7AgsBSIOeovyOM_jO0vX42sx1zVovRw5AefIMsbmnSXxAdFgY0KdoULLr9Bqr6z2-4f4IccWpDw" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hOdaH0lt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/YtOgmfccuqJLV0YnCPDh-tUSFdfG9HWC03iq7RuQG5AcfPE0CVzBIKxhHYYtaVAH4vsktBHVRvim7AgsBSIOeovyOM_jO0vX42sx1zVovRw5AefIMsbmnSXxAdFgY0KdoULLr9Bqr6z2-4f4IccWpDw" alt="Instant APIs on Snowflake using UDFs and Hasura" width="800" height="745"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Architecture in the original blog.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We will implement these data APIs over Snowflake using Hasura. Right off the bat, we can instantly replace all the different Amazon components with Hasura, greatly simplifying the architecture by reducing five components to one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lKpsEy9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/image5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lKpsEy9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/08/image5.png" alt="Instant APIs on Snowflake using UDFs and Hasura" width="616" height="157"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Simplified architecture with Hasura.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Importing sample data to Snowflake
&lt;/h2&gt;

&lt;p&gt;First and foremost, we must import the sample data to our Snowflake instance – the data can be found in the marketplace &lt;a href="https://app.snowflake.com/marketplace/listing/GZ1M7Z2MQ39"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing the APIs in Hasura
&lt;/h2&gt;

&lt;p&gt;This section will show how to instantly create the required APIs using Snowflake’s user-defined functions (UDF) and Hasura’s query engine.&lt;/p&gt;

&lt;p&gt;A UDF is a function you define so that it can be called from SQL. A UDF’s logic typically extends or enhances SQL with functionality that SQL doesn’t have or doesn’t do well. A UDF also allows you to encapsulate functionality and call it repeatedly from multiple places in code.&lt;/p&gt;

&lt;p&gt;We will implement the &lt;code&gt;busy_airports&lt;/code&gt; API from the original blog using UDFs and Hasura.&lt;/p&gt;

&lt;h3&gt;
  
  
  Busiest airports API
&lt;/h3&gt;

&lt;p&gt;This API endpoint finds the busiest airports in the data. We will use the Hasura admin secret as an API key. We want to filter the data based on &lt;code&gt;flight_date&lt;/code&gt; and restrict the number of rows returned from the API.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Tracking UDFs in Hasura
&lt;/h4&gt;

&lt;p&gt;First, we will create the following UDF in Snowflake’s SQL console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8tOonmPY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/B8PvO3bT5x0YAiTLrgbN6W_JYmtB0MSGZxaj2yf9IRBEUXet8pMy22EeWUdxGCrLSHYr4tJSdK-WXYTsZF_KxkWsJhDVOBcqflfvrerxLYzQFf9_0ljQBs5rtsGp_u_PkHtIp3fMAohHve-UwxqziUE" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8tOonmPY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/B8PvO3bT5x0YAiTLrgbN6W_JYmtB0MSGZxaj2yf9IRBEUXet8pMy22EeWUdxGCrLSHYr4tJSdK-WXYTsZF_KxkWsJhDVOBcqflfvrerxLYzQFf9_0ljQBs5rtsGp_u_PkHtIp3fMAohHve-UwxqziUE" alt="Instant APIs on Snowflake using UDFs and Hasura" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;
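&lt;p&gt;For reference, the UDF in the screenshot has roughly this shape: a table function grouping arrivals by airport. The source table name below is assumed; the &lt;code&gt;ARRAPT&lt;/code&gt; and &lt;code&gt;COUNT&lt;/code&gt; columns match the query results shown later in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- sketch only: FLIGHT_SCHEDULES is a placeholder for the marketplace table
CREATE OR REPLACE FUNCTION BUSY_AIRPORTS_2()
  RETURNS TABLE (ARRAPT VARCHAR, COUNT NUMBER)
  AS
  $$
    SELECT ARRAPT, COUNT(*) AS COUNT
    FROM FLIGHT_SCHEDULES
    GROUP BY ARRAPT
  $$;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;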

&lt;p&gt;Now, we will import the UDF to Hasura:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BX-1FO2S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/XzhJntVAvL9K7j_X9yCOQmQVqGfGNk1zVGoOqDf6Y7h1ZndvVD-7NzQzOzJKS6mMPxlyrp7C2yfl24F7aeAGAz7EfBaPTlc1uiYu9mnKoKenAd2_jcDAH_uUiwOCyrzLzHuPjDmNYJAQv5viULhKEDE" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BX-1FO2S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/XzhJntVAvL9K7j_X9yCOQmQVqGfGNk1zVGoOqDf6Y7h1ZndvVD-7NzQzOzJKS6mMPxlyrp7C2yfl24F7aeAGAz7EfBaPTlc1uiYu9mnKoKenAd2_jcDAH_uUiwOCyrzLzHuPjDmNYJAQv5viULhKEDE" alt="Instant APIs on Snowflake using UDFs and Hasura" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, we can query the UDF like any other table from Hasura:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ewprkAaz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/qPwmOm0IIxQYZQ-ZsXYvJ_Y8jb3HiWwD9NQD_uMk60lkrrCj5uI0fEjyQNuW8RfulySwD86px85f07ZxRCjYc8N7Oa2Wb5xHAX62YJvdc1VADCIV3YOU-JaIRa1-L8paFOEIThdjcuzHMgU7MYupO54" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ewprkAaz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/qPwmOm0IIxQYZQ-ZsXYvJ_Y8jb3HiWwD9NQD_uMk60lkrrCj5uI0fEjyQNuW8RfulySwD86px85f07ZxRCjYc8N7Oa2Wb5xHAX62YJvdc1VADCIV3YOU-JaIRa1-L8paFOEIThdjcuzHMgU7MYupO54" alt="Instant APIs on Snowflake using UDFs and Hasura" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Cache your Snowflake API
&lt;/h4&gt;

&lt;p&gt;You can add the &lt;code&gt;@cached&lt;/code&gt; directive to your query to decrease the response time. The default TTL is 60 seconds, reducing latency by roughly 100 milliseconds.&lt;/p&gt;

&lt;p&gt;Caching is useful when querying Snowflake, as latencies could be a bottleneck, primarily when serving customers on web/mobile applications. It can improve latencies and reduce the load on data warehouse resources by fetching the results from Redis.&lt;/p&gt;
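&lt;p&gt;Enabling the cache is a one-line change to the query itself, reusing the fields from the cURL example below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query BusyAirports @cached(ttl: 60) {
  BUSY_AIRPORTS_2(limit: 20, order_by: {COUNT: desc}) {
    ARRAPT
    COUNT
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;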

&lt;p&gt;You can also fetch the results using cURL:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url -X POST -H "x-hasura-admin-secret: admin-secret" -H "Content-Type: application/json" -d '{"query":"query MyQuery{BUSY_AIRPORTS_2(limit:20,order_by:{COUNT:desc}){ARRAPT COUNT}}"}' https://choice-platypus-97.hasura.app/v1/graphql
{"data":{"BUSY_AIRPORTS_2":[{"ARRAPT":"ATL","COUNT":131184},{"ARRAPT":"ORD","COUNT":120075},{"ARRAPT":"LHR","COUNT":106354},{"ARRAPT":"DFW","COUNT":105660},{"ARRAPT":"JFK","COUNT":98454},{"ARRAPT":"LAX","COUNT":91109},{"ARRAPT":"CDG","COUNT":87035},{"ARRAPT":"FRA","COUNT":86363},{"ARRAPT":"AMS","COUNT":85004},{"ARRAPT":"MAD","COUNT":61921},{"ARRAPT":"BOS","COUNT":58124},{"ARRAPT":"SEA","COUNT":56016},{"ARRAPT":"SFO","COUNT":55467},{"ARRAPT":"DEN","COUNT":53891},{"ARRAPT":"EWR","COUNT":49122},{"ARRAPT":"FCO","COUNT":48736},{"ARRAPT":"IAH","COUNT":47115},{"ARRAPT":"MIA","COUNT":46552},{"ARRAPT":"YYZ","COUNT":45225},{"ARRAPT":"CLT","COUNT":44266}]}}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, we can implement &lt;code&gt;airport_daily&lt;/code&gt; and &lt;code&gt;airport_daily_carriers&lt;/code&gt; following the same steps. Thus, we can access the data in minutes without writing a single line of code, just using UDFs in Snowflake and Hasura.&lt;/p&gt;

&lt;p&gt;The APIs are protected with API keys. Suppose you want a role-based authorization layer. In that case, this blog covers how to implement role-based access control on Snowflake: &lt;a href="https://hasura.io/blog/hasura-graphql-on-snowflake-using-rbac-a-secure-and-scalable-data-access-solution/"&gt;&lt;strong&gt;“Hasura GraphQL on Snowflake using RBAC: A secure and scalable data access solution”&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Within a few minutes, we replaced five different AWS components, implemented an API without writing a single line of code, added an admin secret for authentication, and added the ability to cache to provide low-latency data access.&lt;/p&gt;

&lt;p&gt;Hasura keeps you from getting entangled in the details and lets you focus on building the end product.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Get Started Today!
&lt;/h2&gt;

&lt;p&gt;We can't wait to see the amazing application you'll build using Hasura and Snowflake. Start by signing up for &lt;a href="https://cloud.hasura.io/signup"&gt;Hasura Cloud&lt;/a&gt; and connecting your Snowflake data warehouse.   &lt;/p&gt;

&lt;p&gt;If you have any questions or need assistance, please contact our team on &lt;a href="https://discord.com/invite/hasura"&gt;Discord&lt;/a&gt; or &lt;a href="https://github.com/hasura/graphql-engine/issues"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>snowflake</category>
      <category>tutorial</category>
      <category>userdefinedfunctions</category>
    </item>
    <item>
      <title>Supercharge your application development with Hasura Remote Joins and Data Federation</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Tue, 01 Aug 2023 13:36:10 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/supercharge-your-application-development-with-hasura-remote-joins-and-data-federation-15pi</link>
      <guid>https://forem.com/hasurahq_staff/supercharge-your-application-development-with-hasura-remote-joins-and-data-federation-15pi</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QXt5bQ3b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/supercharge-feature.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QXt5bQ3b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/supercharge-feature.png" alt="Supercharge your application development with Hasura Remote Joins and Data Federation" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We are thrilled to announce that Hasura now supports Remote Joins across all supported data sources&lt;/strong&gt;, including PostgreSQL, Snowflake, MySQL, SQL Server, BigQuery, Oracle, Athena, and Remote Schemas!&lt;/p&gt;

&lt;p&gt;In the world of modern application development, building complex data relationships across various sources has become a common requirement. As databases and data sources grow in complexity, developers often find themselves facing the challenge of integrating data from different databases, APIs, or remote GraphQL endpoints.   &lt;/p&gt;

&lt;p&gt;Thankfully, Hasura’s GraphQL engine provides a powerful solution through its remote relationships, also known as "Remote Joins."&lt;/p&gt;

&lt;p&gt;In this blog, we will explore the concept of remote relationships and how they can be leveraged to join data across tables and remote data sources, empowering developers to build sophisticated and data-rich applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding remote relationships
&lt;/h2&gt;

&lt;p&gt;At its core, a remote relationship in GraphQL allows you to connect data across different tables and remote data sources. These sources can include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Database to database relationships:&lt;/strong&gt; This type of remote relationship enables you to join data between two different database sources. For instance, you can link order information stored in one PostgreSQL database with user information stored in another PostgreSQL or even a SQL Server database. GraphQL acts as the glue that seamlessly integrates the data from both sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database to GraphQL services:&lt;/strong&gt; Here, you can merge data across tables with remote GraphQL APIs that we call &lt;a href="https://hasura.io/docs/latest/remote-schemas/overview/"&gt;Remote Schemas&lt;/a&gt;. For example, you might combine customer data from your database with account data from external services like Stripe, Spotify, or Auth0. This integration allows you to consolidate information from various sources into a single, cohesive GraphQL query.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GraphQL services to database relationships:&lt;/strong&gt; In this scenario, you can connect data from Remote Schemas (representing services like Stripe, Spotify, or Auth0) to customer data from your database. This bidirectional relationship unlocks a plethora of possibilities for building feature-rich applications that rely on data spanning across multiple services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom business logic to database relationships:&lt;/strong&gt; Actions in GraphQL often correspond to REST APIs. With this type of remote relationship, you can join data across tables and Actions, such as fetching user data from your database and combining it with the response from a &lt;em&gt;createUser&lt;/em&gt; action using the &lt;em&gt;id&lt;/em&gt; field.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How does Hasura perform Remote Joins?
&lt;/h2&gt;

&lt;p&gt;When performing a Remote Join between “table A” and “table B,” Hasura will first query “table A” for its rows. From these rows, Hasura will collect all the IDs involved in the join to “table B.” It will then query “table B” as a separate query, filtering the rows by the IDs collected from “table A.” Hasura will then stitch the rows returned from “table B” into the results from “table A” to provide the final response to the original GraphQL query.&lt;/p&gt;

&lt;p&gt;This approach avoids the usual N+1 query anti-pattern that can occur with naive implementations that join between two tables in different databases. The N+1 query anti-pattern is where you query “table B” for every row returned by the query to “table A,” which is inefficient.&lt;/p&gt;

&lt;p&gt;In Hasura, join queries are optimized to query each table once, limiting the number of separate queries issued to the smallest possible number.&lt;/p&gt;
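&lt;p&gt;Conceptually, a Remote Join between the two tables executes as two batched queries rather than N+1. The SQL below is illustrative, not Hasura’s literal output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- query 1, against database A: fetch the rows and collect the join IDs
SELECT id, user_id FROM orders;

-- query 2, against database B: one query for all collected IDs
SELECT id, name FROM users WHERE id IN (101, 102, 103);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;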

&lt;h2&gt;
  
  
  How do Remote Joins enhance application development?
&lt;/h2&gt;

&lt;p&gt;Businesses today are not confined to a single database. Data exists in a multitude of sources – on-premises, in the cloud, or distributed across multiple systems. To maintain a competitive edge, organizations must harness the power of these disparate data sources and seamlessly combine them to drive meaningful use cases and insights.&lt;/p&gt;

&lt;p&gt;In a matter of &lt;em&gt;minutes&lt;/em&gt;, developers can create a single GraphQL API using Hasura that accesses data from multiple sources, including GraphQL APIs (Remote Schemas) and databases, in real time, without the need for complex data integration processes. This means developers can create powerful applications that access data from different sources, without worrying about data silos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4hGR_45m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/jle2FIQP3M99Z7gq58nLWeu3lcTLUzQ4Tj6j_gkIpJshFBZCEU5q8PD1kjPwVe1wPt2X1HvzfUyYee24r1mqBNypJt0L0omgmdlhXlfqsMVIRT2UGbzn-pUU9CnudLrIpYUwtBYeRdT47bA8d_gpjHw" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4hGR_45m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/jle2FIQP3M99Z7gq58nLWeu3lcTLUzQ4Tj6j_gkIpJshFBZCEU5q8PD1kjPwVe1wPt2X1HvzfUyYee24r1mqBNypJt0L0omgmdlhXlfqsMVIRT2UGbzn-pUU9CnudLrIpYUwtBYeRdT47bA8d_gpjHw" alt="Supercharge your application development with Hasura Remote Joins and Data Federation" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In traditional SQL, joining tables from different databases requires maintaining, in one database, a duplicate copy of the data from the other database that is needed to perform the join. This process is not only time-consuming but also resource-intensive. With Hasura's remote joins, developers can perform joins across data sources without copying any data. This allows organizations to federate their data into a single self-service data access layer.&lt;/p&gt;

&lt;p&gt;By providing a unified view of data, Hasura's remote joins can significantly reduce the time it takes to develop data intensive applications. Developers can quickly prototype applications, test different use cases, and get feedback from stakeholders. This enables enterprises to bring applications and data products to market faster, giving them a competitive edge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Simplified data management
&lt;/h3&gt;

&lt;p&gt;Handling and managing data across multiple databases is a significant challenge for developers. It often involves writing complex queries and spending countless hours on data management tasks. However, with Hasura's Remote Joins, these challenges become a thing of the past. Remote Joins provide a way to access and unify data across various databases as if it were coming from a single data source.&lt;/p&gt;

&lt;h3&gt;
  
  
  Boosts development speed
&lt;/h3&gt;

&lt;p&gt;By offering a unified API across different databases, Hasura saves developers from the tiresome task of orchestrating data from various sources. This allows developers to focus more on the core functionality of the application, ultimately boosting the development speed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Streamlined workflow
&lt;/h3&gt;

&lt;p&gt;A GraphQL API that bridges different data sources simplifies your workflow. It not only helps you fetch data from multiple sources in a single request but also keeps the interface clean and organized, significantly streamlining the development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started with Remote Joins for databases
&lt;/h2&gt;

&lt;p&gt;Check out my colleague Vaishnavi’s demo from her talk at HasuraCon 2023 on joining data from Postgres to MySQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Add two database sources.&lt;/p&gt;

&lt;p&gt;Add a source database as described &lt;a href="https://hasura.io/docs/latest/databases/overview/"&gt;here&lt;/a&gt; and track the required tables. Then, repeat the process to add your target database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Define and create the relationship.&lt;/p&gt;

&lt;p&gt;A remote database relationship is defined alongside the source database table (that is, the source side of the join).&lt;/p&gt;

&lt;p&gt;The following fields can be defined for a remote database relationship:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Relationship type:&lt;/strong&gt; Either &lt;code&gt;object&lt;/code&gt; or &lt;code&gt;array&lt;/code&gt; – similar to normal relationships. Hasura supports both many-to-one (object) and one-to-many (array) relationships.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relationship name:&lt;/strong&gt; A name for the relationship.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reference source:&lt;/strong&gt; The name of the target database (that is, the target side of the join).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reference table:&lt;/strong&gt; The table in the target database source that should be joined with the source table.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Field mapping:&lt;/strong&gt; A mapping between fields in the source table and their corresponding fields in the target table, just as a foreign key relationship would be defined by such mapping within a single database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, say we have a table &lt;code&gt;Album(AlbumId int)&lt;/code&gt; in the source database and a table &lt;code&gt;Track(id int, name text, AlbumId int)&lt;/code&gt; in the target database.&lt;/p&gt;

&lt;p&gt;We can create an array remote database relationship joining the &lt;code&gt;Album&lt;/code&gt; table to the &lt;code&gt;Track&lt;/code&gt; table by mapping the &lt;code&gt;Album.AlbumId&lt;/code&gt; field to the &lt;code&gt;Track.AlbumId&lt;/code&gt; field.&lt;/p&gt;
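&lt;p&gt;Once the relationship is created (named, say, &lt;code&gt;Tracks&lt;/code&gt;; the relationship name is our choice), a single GraphQL query can traverse both databases. A sketch of what such a query could look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# "Tracks" is the hypothetical relationship name chosen in Step 2
query {
  Album {
    AlbumId
    Tracks {   # resolved from the Track table in the target database
      id
      name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Hasura queries each database for the fields it owns and joins the results on the configured field mapping.&lt;/p&gt;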

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6vgZsRtc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/Xm_T9V_2JDZJatUZ5rQtwaFoCbvrjY0qYhcKI_Kp-yryxDiG-v-ZOMMVxAV_q62zcxJ2E4hOdNw5wV7GP_6fvh8uWc1z1royCx0qD9tQatXg3kpGJc_hwrtb56h9_73uqIXNIQZda3EOQQgVtFkLcZo" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6vgZsRtc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/Xm_T9V_2JDZJatUZ5rQtwaFoCbvrjY0qYhcKI_Kp-yryxDiG-v-ZOMMVxAV_q62zcxJ2E4hOdNw5wV7GP_6fvh8uWc1z1royCx0qD9tQatXg3kpGJc_hwrtb56h9_73uqIXNIQZda3EOQQgVtFkLcZo" alt="Supercharge your application development with Hasura Remote Joins and Data Federation" width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When it comes to generating APIs for multiple data sources, Hasura offers a powerful and developer-friendly solution. Its ease of use, automation, and feature-rich capabilities provide a significant advantage over the DIY approach, which requires significant time and effort to design and implement custom solutions.&lt;/p&gt;

&lt;p&gt;With Hasura, developers can create a single GraphQL API that accesses data from multiple sources, allowing them to focus more on building innovative applications that leverage diverse data sources, without getting bogged down by the complexities of API generation.&lt;/p&gt;

&lt;p&gt;If you're looking to simplify the process of joining data from multiple sources, Hasura is the way to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 Documentation and resources
&lt;/h2&gt;

&lt;p&gt;To help you get started, we've prepared detailed documentation, guides, and examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://hasura.io/docs/latest/remote-schemas/remote-relationships/index/"&gt;Remote Relationships Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hasura.io/docs/latest/data-federation/overview/"&gt;GraphQL Federation Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hasura.io/blog/tagged/data-federation/"&gt;Other Hasura Data Federation Blogs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🚀 Get started today!
&lt;/h2&gt;

&lt;p&gt;We can't wait to see the amazing applications you'll build using Hasura and Remote Joins. Get started today by signing up for &lt;a href="https://cloud.hasura.io/signup"&gt;Hasura Cloud&lt;/a&gt; and connecting to one of the supported databases.&lt;/p&gt;

&lt;p&gt;If you have any questions or need assistance, feel free to reach out to our team on &lt;a href="https://discord.com/invite/hasura"&gt;Discord&lt;/a&gt; or &lt;a href="https://github.com/hasura/graphql-engine/issues"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>remotejoins</category>
      <category>remoteschemas</category>
      <category>datafederation</category>
    </item>
    <item>
      <title>Introducing Input Validation Permissions on Hasura: Enhancing data integrity and security</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Thu, 27 Jul 2023 13:40:48 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/introducing-input-validation-permissions-on-hasura-enhancing-data-integrity-and-security-mgm</link>
      <guid>https://forem.com/hasurahq_staff/introducing-input-validation-permissions-on-hasura-enhancing-data-integrity-and-security-mgm</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--17Dw3Aee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/security-feature-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--17Dw3Aee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/security-feature-1.png" alt="Introducing Input Validation Permissions on Hasura: Enhancing data integrity and security" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In today's data-driven world, building applications with strong data integrity and security is challenging.&lt;/strong&gt; Ensuring that the data being processed is accurate, valid, and aligned to predefined rules is a critical aspect of modern application development.&lt;/p&gt;

&lt;p&gt;To empower developers with more control over their data validation process, Hasura is thrilled to announce the launch of a powerful new feature – &lt;a href="https://hasura.io/docs/latest/schema/postgres/input-validations"&gt;&lt;strong&gt;Input Validations&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why data integrity and security matter&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Data integrity refers to the &lt;strong&gt;accuracy, consistency, and reliability of data&lt;/strong&gt; stored and processed within an application. Ensuring data integrity is essential for a number of reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Reliable decision-making:&lt;/strong&gt; Accurate data enables sound decision-making, leading to better business outcomes and customer experiences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data consistency:&lt;/strong&gt; Inconsistent data can cause errors, leading to confusion and incorrect results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data security:&lt;/strong&gt; Validating input data helps prevent security vulnerabilities, such as SQL injections, and safeguards sensitive information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance and regulations:&lt;/strong&gt; Many industries have strict data compliance and privacy regulations, making data integrity a legal requirement.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Maintaining data integrity and security can be a challenging task, especially in complex applications with numerous data mutations and interactions. Input Validations provide a robust solution to tackle these challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introducing Input Validations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Hasura's &lt;a href="https://hasura.io/docs/latest/schema/postgres/input-validations/"&gt;Input Validations&lt;/a&gt; allow developers to implement custom data validation logic for GraphQL mutations. This feature acts as a pre-mutation hook, offering developers the &lt;strong&gt;ability to validate input arguments before executing insert, update, or delete operations&lt;/strong&gt;. By defining rules and constraints in an HTTP service or serverless function that Hasura calls as the input validation endpoint, developers can ensure that only valid data is processed and stored in the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Input Validations work&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When a GraphQL mutation arrives, targeting specific tables and roles, the Input Validations feature comes into action. The mutation arguments are routed to the defined HTTP webhook, where custom validation logic is executed. If the validation is successful, the mutation proceeds, and the data is processed as intended. On the other hand, if the validation fails, the mutation is aborted, and appropriate error messages can be relayed back to the client.&lt;/p&gt;

&lt;p&gt;You can set up an Input Validation rule for a role directly from the Hasura Console, as easily as configuring a normal permission rule in Hasura.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let’s take the example of allowing user sign-ups only for users with age &amp;gt;18 years. I’m going to create a table as shown below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kUBcConI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/ohMUg9C48glrA7AeSqZ4iQOGk4O3j9W6CIpv1SZkQq-i1KSZBaKaCmEhcgEq1Dwtb5dEH8fKwTqCrrjluSBzknZsl8ZU4c2kgLX0khT9bAuIR1y5gIpA59uY7S6oMMhwWxSGnLz2baBlDftpo08Wl_A" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kUBcConI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/ohMUg9C48glrA7AeSqZ4iQOGk4O3j9W6CIpv1SZkQq-i1KSZBaKaCmEhcgEq1Dwtb5dEH8fKwTqCrrjluSBzknZsl8ZU4c2kgLX0khT9bAuIR1y5gIpA59uY7S6oMMhwWxSGnLz2baBlDftpo08Wl_A" alt="Introducing Input Validation Permissions on Hasura: Enhancing data integrity and security" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head to the table permissions page and you will see a new section, as shown below. Let’s add an input validation configuration as shown.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BfPSKONr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/fToD3LvIG8NcZCHz8wj5PJLEAcY-QnXazevSb1CRL_IXagf2aKfdC-aBWXyhnSymQqcnLo_N-FIV1MAdM1ZQSe287oW3FXblPHbQOlJqeghPDGPv4pioFxpGFZVjAhqm8X-O-Txz7ToN-QzFKnAr55U" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BfPSKONr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/fToD3LvIG8NcZCHz8wj5PJLEAcY-QnXazevSb1CRL_IXagf2aKfdC-aBWXyhnSymQqcnLo_N-FIV1MAdM1ZQSe287oW3FXblPHbQOlJqeghPDGPv4pioFxpGFZVjAhqm8X-O-Txz7ToN-QzFKnAr55U" alt="Introducing Input Validation Permissions on Hasura: Enhancing data integrity and security" width="800" height="791"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s create a simple web server to validate the data inputs. Authoring the webhook validator service is quite easy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;To approve an input:&lt;/strong&gt; Respond to the HTTP request with a 200 status.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;To reject an input:&lt;/strong&gt; Respond with a 400 status and a message payload; the message will be forwarded to the client as a GraphQL error message.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a simple Node.js server that handles the input from Hasura and approves it only if the user is more than 10 years old.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const bodyParser = require("body-parser");

const app = express();
app.use(bodyParser.json()); // to support JSON-encoded bodies
app.use(bodyParser.urlencoded({
  extended: true
}));

app.get("/", (req, res) =&amp;gt; {
  res.send("Server is running!");
});
app.post("/validateNewUserData", (req, res) =&amp;gt; {
  console.log("INPUT_", req?.body?.data?.input);
  const DOB = req?.body?.data?.input?.[0]?.DOB;
  const age = ~~((new Date() - new Date(DOB)) / 31557600000); // ms in an average year; ~~ truncates to whole years

  console.log("Age", age);
  if (age &amp;gt; 10) {
    res.send("SUCCESS");
  } else {
    res.status(400).json({
      message: "User should have a minimum age of 10"
    });
  }
  return;
});

app.listen(8080, function() {
  console.log("Server is running on 8080");
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
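&lt;p&gt;The age calculation above divides the elapsed milliseconds by 31557600000 (the number of milliseconds in an average, 365.25-day year) and truncates the result. Factoring the validation into pure functions (a sketch with names of our own choosing, not part of the demo repository) makes the rule easy to unit test in isolation:&lt;/p&gt;

```javascript
// Approximate age in whole years: elapsed milliseconds divided by the
// milliseconds in an average (365.25-day) year, truncated toward zero.
const MS_PER_YEAR = 31557600000;

function computeAge(dob, now = new Date()) {
  return Math.trunc((now - new Date(dob)) / MS_PER_YEAR);
}

// Mirrors the webhook's decision: approve only users older than 10.
function validateNewUser(input, now = new Date()) {
  const age = computeAge(input.DOB, now);
  return age > 10
    ? { ok: true }
    : { ok: false, message: "User should have a minimum age of 10" };
}
```

&lt;p&gt;The Express handler can then reduce to calling &lt;code&gt;validateNewUser&lt;/code&gt; and translating the result into a 200 or 400 response.&lt;/p&gt;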



&lt;ul&gt;
&lt;li&gt;Now let’s test this by making a mutation on Hasura
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RhNSS5eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://hasura.io/blog/content/images/2023/07/output-1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RhNSS5eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://hasura.io/blog/content/images/2023/07/output-1.gif" alt="Introducing Input Validation Permissions on Hasura: Enhancing data integrity and security" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find a sample repository with the express-js server data validator here: &lt;a href="https://github.com/soorajshankar/Hasura-Input-Validation-Demo"&gt;https://github.com/soorajshankar/Hasura-Input-Validation-Demo&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Read more about the configurations of Input Validations in detail from our official docs page &lt;a href="https://hasura.io/docs/latest/schema/postgres/input-validations/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Use cases and problems solved&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Input Validations can address a wide range of use cases and solve common challenges faced by developers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User registration and authentication:&lt;/strong&gt; Validate user registration details like email addresses and usernames, and enforce password complexity rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managing user-generated content:&lt;/strong&gt; Implement content guidelines and moderation for user-generated data, ensuring compliance and preventing inappropriate content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforcing restrictions on requests:&lt;/strong&gt; Ensure that an order cannot be placed for more than a certain quantity of an item.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hasura's Input Validations feature offers a powerful tool for developers to enhance data integrity and security in their applications. With it, developers can ensure that the data flowing through their systems is accurate, valid, and protected from security vulnerabilities.&lt;/p&gt;

&lt;p&gt;By configuring custom validation logic through HTTP webhooks, developers gain control over the entire validation process, enabling them to build more reliable and secure applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned for more in-depth insights into this exciting new feature! Sign up now to try Input Validations on your project.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://cloud.hasura.io/signup"&gt;Sign up&lt;/a&gt; to start your journey with Hasura Cloud.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>inputvalidations</category>
      <category>datasecurity</category>
      <category>dataintegrity</category>
    </item>
    <item>
      <title>Incremental DB migration with Hasura</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Mon, 24 Jul 2023 18:07:32 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/incremental-db-migration-with-hasura-2889</link>
      <guid>https://forem.com/hasurahq_staff/incremental-db-migration-with-hasura-2889</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vadOAb5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/incrementalDB-feature.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vadOAb5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/incrementalDB-feature.png" alt="Incremental DB migration with Hasura" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database migration is a complex, multiphase, multi-process activity.&lt;/strong&gt; The challenges scale up significantly when you move to a new database vendor or across different cloud providers or data centers.&lt;/p&gt;

&lt;p&gt;Despite the challenges, sometimes data migration is the only way. So how do we peel back the layers of ambiguity, keep the downtime minimal, and ensure our users' experience does not suffer when embarking on this journey?&lt;/p&gt;

&lt;p&gt;I faced a similar challenge in the past when I was at one of Asia/India’s largest e-commerce shops. Our database storage and compute capacity had maxed out: we were using ~90%+ of our compute resources, and we had about two months left before running out of storage space.&lt;/p&gt;

&lt;p&gt;We had to act fast and execute the entire process within that two-month period while ensuring our business continued to grow. It was a stressful period with some leap-of-faith calculations to make it happen.&lt;/p&gt;

&lt;p&gt;In this post, I want to revisit the migration process and see how, with &lt;a href="https://hasura.io/"&gt;Hasura&lt;/a&gt;, we could incrementally execute the migration, tackling the uncertainties but still delivering the results on time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;We want to migrate data from one SQL database to another SQL database vendor on the cloud, and we want to achieve this process with minimal disruption.&lt;/p&gt;

&lt;h2&gt;
  
  
  The challenges
&lt;/h2&gt;

&lt;p&gt;What are the typical challenges we must solve for when moving to a new database vendor?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database schema migration&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Usually, there are differences in data types, index types, and architectures between different vendors that need to be addressed before starting the migration.&lt;/li&gt;
&lt;li&gt;Some characteristics can only be verified through functional and performance testing.&lt;/li&gt;
&lt;li&gt;These differences arise even when moving to a different vendor for the same type of database (e.g., MySQL to PostgreSQL).&lt;/li&gt;
&lt;li&gt;However, they magnify further when moving from a SQL to a NoSQL store, where write and access patterns change considerably due to the schema/schema-less shift.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Data transformation&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Due to changes in schema and data types, one would need to transform the data before persisting.&lt;/li&gt;
&lt;li&gt;These ETL pipelines can become complex, as we need to reason schema by schema.&lt;/li&gt;
&lt;li&gt;It can be a mammoth task if the table count is large (20+) with a large amount of data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Data migration&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This step can be challenging if the databases are not co-located or are on different cloud providers.&lt;/li&gt;
&lt;li&gt;Inside the same data center, network bandwidth is usually plentiful enough to move hundreds of GBs of data in a few minutes to a few hours.&lt;/li&gt;
&lt;li&gt;However, the same cannot be said for migrating data over the internet: there are many failure scenarios to consider, and the transfer can take a couple of days.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Performance/functional testing&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If one is migrating all the data simultaneously, testing the new database's migration and functionality/performance is paramount.&lt;/li&gt;
&lt;li&gt;There could be a need to tweak data types, indexes, or queries from the older database that might not work on the new one.&lt;/li&gt;
&lt;li&gt;This step requires a thorough review of the data modeling and application code; specific queries and consistency guarantees may need to change when undertaking such migrations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;API migration&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When migrating data across databases, data access patterns with existing APIs become crucial.&lt;/li&gt;
&lt;li&gt;If the migration spans days or weeks, some requests must be served from the new database, while access to data that has not yet moved must be redirected to the old one.&lt;/li&gt;
&lt;li&gt;If a request needs data that now spans multiple databases, we must handle cross-DB queries/joins.&lt;/li&gt;
&lt;li&gt;For writes, moving partial data could be challenging, as new inserts have to be written in the new datastore, while updates for old rows have to be redirected to the old DB for consistency.&lt;/li&gt;
&lt;li&gt;However, if one moves entire tables to the new database, the data access layer can handle many such challenges.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  An approach to solving DB migration challenges
&lt;/h2&gt;

&lt;p&gt;Given the challenges above, how do we go about this problem statement?&lt;/p&gt;

&lt;p&gt;First, the data migration problem is also an API migration problem. If we can provide the flexibility of querying multiple data sources while running existing business logic, we can support migrations using an incremental approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here are the steps, using Hasura, that help address the uncertainties:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Introduce Hasura and connect it to the old database.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QrOkvsDO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/8LNNM3WWqHYDF2lY7sF0zb9zE_eh7hQ-9EMLsQ74gi9Cogesg6NFAgHdv4SIfQD5dWMh0AlJbhQlxfEJKDN12Pba7ifSb0NF6_4wbm-zr07aY26-XuNwfTnNNblaHpPADTiRm5vkX3_6qxGNcrCT-u0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QrOkvsDO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/8LNNM3WWqHYDF2lY7sF0zb9zE_eh7hQ-9EMLsQ74gi9Cogesg6NFAgHdv4SIfQD5dWMh0AlJbhQlxfEJKDN12Pba7ifSb0NF6_4wbm-zr07aY26-XuNwfTnNNblaHpPADTiRm5vkX3_6qxGNcrCT-u0" alt="Incremental DB migration with Hasura" width="800" height="767"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Connect existing API service with Hasura.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The existing APIs will become a source of business logic and data validation for Hasura, and with that, we can start migrating some of the client API calls to Hasura.&lt;/li&gt;
&lt;li&gt;For GraphQL, we can use remote schemas, and for REST APIs, we can use actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rqV_rhHA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/J4RNK4jX5gx-1nAJtRy7dQ8-oziyQztJNxJP_mtAYyy0ArzWpP-yzAxSfCutx4HQ-fFJ3_bETdRfO0Ie-Ekwtp70EQ53gJut18IvkEQQnWCPvELMqs3A--4M3R8jzsABYk-CJCqG7tROdI5UliTOVXM" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rqV_rhHA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/J4RNK4jX5gx-1nAJtRy7dQ8-oziyQztJNxJP_mtAYyy0ArzWpP-yzAxSfCutx4HQ-fFJ3_bETdRfO0Ie-Ekwtp70EQ53gJut18IvkEQQnWCPvELMqs3A--4M3R8jzsABYk-CJCqG7tROdI5UliTOVXM" alt="Incremental DB migration with Hasura" width="800" height="666"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once we've migrated some client API calls, we can slowly move all clients to Hasura. Post that, we can start replicating some table schemas to the new database vendor. There are many ways to replicate data across different data sources.&lt;/li&gt;
&lt;li&gt;Once those tables have been replicated successfully and after conducting functional and performance tests over the new data source, we can connect Hasura with the new database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Connect Hasura to the new database.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;With this, queries to migrated tables will be directed to the new data source, while the rest of the queries will continue to be served by the existing database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Make remote joins across databases.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;For joins between tables in two different databases, we can use Hasura’s Remote Joins feature, which allows us to fetch data from multiple data sources in a performant manner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CXxC1rYs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/F3Pxr3gQlbYzJlbPMziz8HO9RADg7SLNCsyRbcEXOYdOeDgvM3zwyJdy_fY0OrV_LQdMJ3Wh9PokiyTKh-h707QTCo28X_BmlQF2dJ4w2KS4PT4EY9R6h8O628p006lMFbS3zla2Ryr7j2OEqBd6DOk" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CXxC1rYs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/F3Pxr3gQlbYzJlbPMziz8HO9RADg7SLNCsyRbcEXOYdOeDgvM3zwyJdy_fY0OrV_LQdMJ3Wh9PokiyTKh-h707QTCo28X_BmlQF2dJ4w2KS4PT4EY9R6h8O628p006lMFbS3zla2Ryr7j2OEqBd6DOk" alt="Incremental DB migration with Hasura" width="800" height="667"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With this, we have achieved a framework for incrementally migrating data to the new vendor. By having a strategy where we can move a few tables at a time, we can take our time to squash any issues that arise due to changes in the database vendor, slowly increasing our data footprint over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Remove old data sources from Hasura.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Once we have moved all our data, we can discard and disconnect the old data source from Hasura.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6ojF5JkD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/RfJaPKIS6TZyF6oA3zHgS08hT2yjv0wcEpcHrd9jPdvPX1udhTVglsvy3r0rRS8PYmTskz1clhYuy6XaBUutYSl52BGUqAigwD83q1qSKD5Td1P3_Bm2dsSmkFXKNEKIDsv6Wx8_j6hLMzkD5Oo0CnE" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6ojF5JkD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/RfJaPKIS6TZyF6oA3zHgS08hT2yjv0wcEpcHrd9jPdvPX1udhTVglsvy3r0rRS8PYmTskz1clhYuy6XaBUutYSl52BGUqAigwD83q1qSKD5Td1P3_Bm2dsSmkFXKNEKIDsv6Wx8_j6hLMzkD5Oo0CnE" alt="Incremental DB migration with Hasura" width="800" height="710"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We saw how, using Hasura, we can follow a clear API migration strategy that lets us move data to the new vendor incrementally while causing minimal client disruption.&lt;/p&gt;

&lt;p&gt;Using this approach, we can easily guard against any uncertainties that may arise when moving to a new database vendor while providing a fantastic experience to our API consumers.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://cloud.hasura.io/signup"&gt;Sign up&lt;/a&gt; now for Hasura Cloud and get started for free!&lt;/strong&gt;
&lt;/h3&gt;

</description>
      <category>databasemigration</category>
      <category>apimigration</category>
    </item>
    <item>
      <title>The why of GraphQL Client Side Nullability in Examples</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Wed, 19 Jul 2023 21:18:07 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/the-why-of-graphql-client-side-nullability-in-examples-ado</link>
      <guid>https://forem.com/hasurahq_staff/the-why-of-graphql-client-side-nullability-in-examples-ado</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RTV8260---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/relay-blog-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RTV8260---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/relay-blog-1.png" alt="The why of GraphQL Client Side Nullability in Examples" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A terse exposition of how client side nullability can inform client component design through comprehensive examples.&lt;/p&gt;

&lt;p&gt;A nullable field can represent a value that may or may not exist.&lt;/p&gt;

&lt;p&gt;Client side nullability can be used to solve common issues when defining the data fetch for client side components by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validating that all the fields the component expects are available in one shot&lt;/li&gt;
&lt;li&gt;Simplifying the types on fetched data from nullable types to non nullable types (especially elegant when using statically typed languages where you have to unwrap the value from an optional type).&lt;/li&gt;
&lt;li&gt;Modifying how errors or null values affect the returned fields based on bubble-up logic&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Bubbling on the server (a recap)
&lt;/h2&gt;

&lt;p&gt;Let's set up an example server API to use in the next sections on client side component data fetches.&lt;/p&gt;

&lt;p&gt;On the server, every field is nullable by default; the GraphQL spec allows the server to return a "partial response." A resilient API can resolve all the fields it is able to and return errors on the side. Errors can include system failures (network/database/code) or even authorization errors. Nullable by default also eases API evolution by ensuring that clients are responsible for validating fields in the response data.&lt;/p&gt;

&lt;p&gt;A null value in a field that is not nullable bubbles up to the nearest nullable field.&lt;/p&gt;

&lt;p&gt;For example, with the following query and types:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sample query 1 BASIC
query {
  user {
    email
    profile { # nullable
      picture
      address { # not nullable
        street # not nullable
        country
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# server schema
type User {
  email: String
  profile: Profile # fields in graphql are nullable by default
}
type Profile {
  picture: String
  address: Address! # not nullable
}
type Address {
  street: String! # not nullable
  country: String
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If something went wrong while fetching the non-nullable "street" field, then the server would bubble up that error to the nearest nullable field "profile", and you would get a null profile field. For this query, either you get a profile with a street, or no profile at all. This is true even if "picture" was a valid value.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// return when non-nullable street not resolved
{
    user: {
        email: ...
        profile: null // null bubbled up to first nullable field
    }
}

// return if street value is resolved
{
    user: {
        email: ...
        profile: {
            picture: ...
            address: {
                street: ... // street has a non null value
                country: ...
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
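&lt;p&gt;The bubble-up rule can also be sketched as a small simulation. The following is an illustrative Python sketch, not actual server code; the field names and the set of non-nullable paths are assumptions mirroring sample query 1.&lt;/p&gt;

```python
# Illustrative sketch of the GraphQL error-bubbling rule: a null in a
# non-nullable field propagates up to the nearest nullable ancestor.
# (Not actual server code; the paths below mirror "sample query 1".)

def bubble_nulls(value, non_null_paths, path=()):
    """Return value with nulls in non-nullable fields bubbled upward.

    Returning None signals that the enclosing subtree must become null.
    """
    if not isinstance(value, dict):
        return value
    result = {}
    for field, child in value.items():
        resolved = bubble_nulls(child, non_null_paths, path + (field,))
        if resolved is None and path + (field,) in non_null_paths:
            return None  # null in a non-nullable field: bubble up one level
        result[field] = resolved
    return result

# address and street are non-nullable; profile is nullable
NON_NULL = {("user", "profile", "address"),
            ("user", "profile", "address", "street")}

response = {"user": {"email": "a@b.co",
                     "profile": {"picture": "p.png",
                                 "address": {"street": None, "country": "US"}}}}

# street is null, so the null bubbles past non-nullable address to profile
print(bubble_nulls(response, NON_NULL))
# {'user': {'email': 'a@b.co', 'profile': None}}
```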



&lt;h2&gt;
  
  
  Moving Control to the Client With Nullability Designators
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Client side nullability introduces two new operators, &lt;code&gt;!&lt;/code&gt; named "required" and &lt;code&gt;?&lt;/code&gt; named "optional", which the client can use to specify how the server should bubble up null errors.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution 1: Narrow optional data with smaller error boundary in a profile widget
&lt;/h3&gt;

&lt;p&gt;The client can force the server to return some data while marking other data as optional.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sample query 2 PROFILE WIDGET, smaller boundary
query {
  user {
    email
    profile { # nullable
      picture
      address? { # not nullable, optional
        street! # not nullable, required
        country
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case of an error or null value while resolving the required field "street", the null value propagates up to the closest optional field "address", not the closest nullable field. This allows a client-side developer to override overzealous nullability requirements specified by the server and get this response:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// sample response 2 PROFILE WIDGET
{
    user: {
        email: ...
        profile: {
            picture: ...
            address: null // closest optional field is null
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The profile widget can now display a profile picture even when the address is invalid.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution 2: Simplify data validation with larger error boundary in a location widget
&lt;/h3&gt;

&lt;p&gt;The client can indicate that either all or no data should be returned to simplify validation of the returned data.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sample query 3 USER LOCATION WIDGET, larger boundary
query {
  user? { # optional
    email
    profile { # nullable
      picture
      address { # not nullable
        street! # not nullable, required
        country
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A null value for the required street field still propagates to the closest optional field, so we get this result:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// sample response 3 USER LOCATION WIDGET
{
    user: null
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that the user location widget doesn't need to dig into the internals of the returned user data to validate that the fields were returned as expected; the null at the optional field already captures the bubbled-up error.&lt;/p&gt;
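&lt;p&gt;Sketched as hypothetical client code (the widget and field names are illustrative, not from a real API), the validation collapses to a single null check:&lt;/p&gt;

```python
# Hypothetical rendering logic for the location widget. With `user?` as the
# error boundary, one null check replaces field-by-field validation of the
# nested response. (Names here are illustrative assumptions.)

def render_location(response):
    user = response.get("user")
    if user is None:  # a required field failed; the bubbled null captures it
        return "Location unavailable"
    address = user["profile"]["address"]  # safe: the whole shape resolved
    return f'{address["street"]}, {address["country"]}'

# sample response 3: the required street failed to resolve, so user is null
print(render_location({"user": None}))  # Location unavailable
```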

&lt;h2&gt;
  
  
  Solution With Relay
&lt;/h2&gt;

&lt;p&gt;The Relay library has supported this for some time through its @required directive, which is specific to Relay and distinct from the client side nullability GraphQL spec.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
  user {
    email
    ... profileFragment
  }
}

# Relay's @required takes a mandatory action argument (NONE | LOG | THROW)
fragment profileFragment on User {
  profile {
    picture
    address @required(action: LOG) {
      street @required(action: LOG)
      country
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If "street", an @required field, is missing, the null value will bubble up to the first field that is not @required, but only within the same fragment, i.e. the "profile" field.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    user: {
        email: ...
        profile: null
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is because Relay uses fragments to locally scope data fetching requirements with data masking. An advantage of the Relay approach is that the directives are implemented on the client: the server does not need to support the @required directive, unlike with the client side nullability spec.&lt;/p&gt;

&lt;p&gt;Relay is also looking to push the client side nullability spec further with Fragment Response Keys which define fragment composition boundaries for the client side nullability designators (&lt;a href="https://youtu.be/EjJ4oDfCpi4?t=853"&gt;GraphQL Fails Modularity on YouTube&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Using this today
&lt;/h2&gt;

&lt;p&gt;The client side nullability spec is currently in the RFC stage, and Hasura does not support it today.&lt;/p&gt;

&lt;p&gt;You can however use the @required directive with Relay today, and Hasura can generate a Relay API. Check it out if the idea of colocating data requirements with your components using fragments appeals to you. It also makes for a phenomenal developer experience.&lt;/p&gt;

&lt;p&gt;If you're using Hasura backed by a single PostgreSQL database, you don't really have to worry about the server returning unexpected null values.&lt;/p&gt;

&lt;p&gt;In a federated setup, with Hasura backed by multiple data sources across the network, something like client side nullability could help clients account for partial failures when resolving data. Hasura v3 will handle these scenarios more elegantly by allowing partial fetches even with some of the backing data sources down.&lt;/p&gt;

&lt;p&gt;From a client perspective, a developer probably wants to focus on nailing down data requirements for every component regardless of the status of the backing data sources.&lt;/p&gt;

&lt;p&gt;The GraphQL spec continues to evolve to solve real problems that developers encounter.&lt;/p&gt;

&lt;p&gt;Here are some active discussions for GraphQL features that are still in the works! &lt;a href="https://github.com/graphql/graphql-wg/tree/main/rfcs"&gt;GraphQL Working Group RFCs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/graphql/graphql-wg/blob/main/rfcs/ClientControlledNullability.md"&gt;https://github.com/graphql/graphql-wg/blob/main/rfcs/ClientControlledNullability.md&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="http://spec.graphql.org/October2021/#sec-Handling-Field-Errors"&gt;http://spec.graphql.org/October2021/#sec-Handling-Field-Errors&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://relay.dev/docs/next/guides/required-directive/"&gt;https://relay.dev/docs/next/guides/required-directive/&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://hasura.io/blog/graphql-nulls-cheatsheet/"&gt;https://hasura.io/blog/graphql-nulls-cheatsheet/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other discussions:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/graphql/graphql-wg/discussions/1009"&gt;https://github.com/graphql/graphql-wg/discussions/1009&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/graphql/graphql-wg/discussions/994"&gt;https://github.com/graphql/graphql-wg/discussions/994&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=SVx4HG2bhII"&gt;https://www.youtube.com/watch?v=SVx4HG2bhII&lt;/a&gt;&lt;/p&gt;

</description>
      <category>clientsidenullabilit</category>
      <category>graphql</category>
      <category>engineering</category>
    </item>
    <item>
      <title>Harnessing the power of MuleSoft and Hasura</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Wed, 19 Jul 2023 19:17:25 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/harnessing-the-power-of-mulesoft-and-hasura-1cfl</link>
      <guid>https://forem.com/hasurahq_staff/harnessing-the-power-of-mulesoft-and-hasura-1cfl</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W-v2AvSB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/mulesoft-feature-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W-v2AvSB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/mulesoft-feature-1.png" alt="Harnessing the power of MuleSoft and Hasura" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In the rapidly evolving world of API development and integration&lt;/strong&gt;, organizations often encounter complex challenges that require a combination of powerful tools and technologies. Two such platforms, MuleSoft and Hasura, offer unique capabilities that can be harnessed together to &lt;a href="https://hasura.io/blog/elevating-your-api-strategy-with-hasura/"&gt;create a comprehensive and efficient API ecosystem.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we will take a deep dive into MuleSoft and Hasura, exploring their individual merits, discussing real-world use cases, and providing architectural insights on how these platforms can complement each other to meet the diverse needs of modern enterprises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding MuleSoft
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.mulesoft.com/"&gt;MuleSoft&lt;/a&gt;, acquired by Salesforce, is a leading integration platform that enables organizations to seamlessly connect disparate systems, applications, and data sources. With its extensive library of connectors, MuleSoft simplifies integration challenges by providing out-of-the-box connectivity to various systems, including enterprise applications, databases, cloud services, and more. It offers a visual integration development environment that empowers developers to design, build, and manage APIs, ensuring efficient data flow across the enterprise. MuleSoft's robust capabilities include data transformation, API orchestration, message routing, and comprehensive monitoring and &lt;a href="https://www.mulesoft.com/platform/api-management"&gt;management tools&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;MuleSoft shines as an API gateway layer, enabling organizations to manage, secure, and control their APIs. Its versatile workflow capabilities streamline the integration process, ensuring seamless orchestration of data and processes. Beyond its API gateway functionality, MuleSoft's specialized &lt;a href="https://www.mulesoft.com/integration-solutions/b2b-edi-platform"&gt;B2B/EDI integration platform&lt;/a&gt; deserves a special mention, catering to the specific requirements of B2B and EDI integration use-cases. With MuleSoft, organizations can effortlessly connect systems, enhance API governance, and drive efficient workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deep dive into MuleSoft
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Message transformation:&lt;/strong&gt; MuleSoft provides powerful data transformation capabilities, allowing you to easily convert data between different formats, protocols, and systems. With its graphical mapping editor and built-in data mapping functions, you can efficiently handle complex data transformation scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orchestration and workflow:&lt;/strong&gt; MuleSoft's visual flow designer enables the creation of complex integration workflows and orchestration processes. It allows you to define conditional routing, error handling, and exception management, making it suitable for implementing sophisticated integration scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring Hasura
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://hasura.io/"&gt;Hasura&lt;/a&gt;, on the other hand, is an open source data platform that provides &lt;a href="https://hasura.io/products/instant-api"&gt;instant GraphQL APIs&lt;/a&gt; over existing databases and existing REST and GraphQL endpoints. With Hasura, developers can rapidly build &lt;a href="https://hasura.io/products/api-security"&gt;secure&lt;/a&gt; and &lt;a href="https://hasura.io/products/performance"&gt;performant&lt;/a&gt; APIs without writing complex boilerplate code. It acts as a powerful data access layer, automatically generating GraphQL APIs based on the database schema. Hasura's real-time capabilities, powered by GraphQL subscriptions, enable developers to build interactive and collaborative applications with instant updates. It also offers &lt;a href="https://hasura.io/products/authorization"&gt;granular access control and authorization mechanisms&lt;/a&gt;, allowing developers to define fine-grained permissions and ensure data security and compliance.&lt;/p&gt;

&lt;p&gt;Hasura takes the spotlight as a data connectivity layer, revolutionizing API development and transformation. By seamlessly integrating with OLTP and OLAP databases, Hasura provides an automated solution for generating GraphQL APIs. With its powerful API integration capabilities, developers can quickly build robust APIs and effortlessly handle data connectivity challenges. Hasura simplifies the complex task of API integration and transformation, empowering developers to accelerate their development cycles and deliver exceptional user experiences.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deep dive into Hasura
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Rapid API development:&lt;/strong&gt; Hasura's strength lies in its ability to generate &lt;a href="https://hasura.io/blog/why-graphql-api-is-the-perfect-data-layer-for-your-backend/"&gt;GraphQL APIs&lt;/a&gt; from existing databases rapidly. With its intuitive UI and CLI tools, you can easily design, build, and deploy APIs in minutes, significantly accelerating your development cycles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-time capabilities:&lt;/strong&gt; Hasura offers real-time GraphQL subscriptions, enabling instant updates and real-time data synchronization. This feature is particularly useful for applications requiring live updates, collaborative features, or notifications.&lt;/p&gt;
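&lt;p&gt;For example, a hedged sketch of a subscription against a Hasura-generated API (the &lt;code&gt;orders&lt;/code&gt; table and its fields are hypothetical) that pushes updates to connected clients whenever matching rows change:&lt;/p&gt;

```graphql
# Hypothetical table and columns; Hasura generates the subscription root
# from the database schema.
subscription PendingOrders {
  orders(where: { status: { _eq: "pending" } }) {
    id
    total
    updated_at
  }
}
```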

&lt;p&gt;&lt;strong&gt;Fine-grained data access control:&lt;/strong&gt; Hasura provides granular access control and authorization mechanisms, allowing you to define permissions and policies at the API layer. This ensures that only &lt;a href="https://hasura.io/blog/hasura-graphql-on-snowflake-using-rbac-a-secure-and-scalable-data-access-solution/"&gt;authorized users have access to specific data&lt;/a&gt;, enhancing security and compliance.&lt;/p&gt;
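&lt;p&gt;As a hedged sketch, a select permission in Hasura metadata might look like the following (the &lt;code&gt;accounts&lt;/code&gt; table, columns, and role are hypothetical, and the exact metadata layout depends on your Hasura version):&lt;/p&gt;

```yaml
# Hypothetical tables.yaml entry: the "user" role may select only its own
# rows and a subset of columns.
- table:
    schema: public
    name: accounts
  select_permissions:
    - role: user
      permission:
        columns:
          - id
          - email
          - balance
        filter:
          user_id:
            _eq: X-Hasura-User-Id
```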

&lt;h2&gt;
  
  
  Real-world use cases and architecture
&lt;/h2&gt;

&lt;p&gt;Let's explore three real-world use cases where the combination of MuleSoft and Hasura can be particularly powerful:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.  E-commerce integration:&lt;/strong&gt; MuleSoft can connect e-commerce platforms, inventory systems, payment gateways, and shipping providers, ensuring a seamless customer experience. Hasura can be used to provide instant GraphQL APIs over the product catalog and inventory databases, enabling real-time inventory updates and personalized product recommendations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.  Healthcare data integration:&lt;/strong&gt; MuleSoft can integrate patient management systems, electronic health records (EHR), and billing systems, streamlining data exchange and ensuring compliance with healthcare standards. Hasura can serve as the data access layer, allowing healthcare providers to query patient data through GraphQL APIs securely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.  Financial services integration:&lt;/strong&gt; MuleSoft can connect banking systems, payment processors, and customer relationship management (CRM) platforms, facilitating secure and efficient financial transactions. Hasura can be used to provide real-time insights into customer transactions and account balances, enabling personalized financial dashboards or fraud detection systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b6iGNwg8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/ySALLeUlGvKxyof8MmqroV3UAeIHCthVoQfXDENocLM-zoGxFzj3Tsx5fXm2gY-La6ibzzBgSkzpoCnXfL0Z7pBaPr5CkSdk32ylVN_Hor7N5xOApOOCeAMT_-6Ye975366NdmFHAKozBrSi7ixBkUU" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b6iGNwg8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/ySALLeUlGvKxyof8MmqroV3UAeIHCthVoQfXDENocLM-zoGxFzj3Tsx5fXm2gY-La6ibzzBgSkzpoCnXfL0Z7pBaPr5CkSdk32ylVN_Hor7N5xOApOOCeAMT_-6Ye975366NdmFHAKozBrSi7ixBkUU" alt="Harnessing the power of MuleSoft and Hasura" width="800" height="839"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above architecture, the client and the application layer interact directly with MuleSoft, which serves as the API gateway/layer and workflow platform. MuleSoft handles the routing, transformation, and management of API requests, providing a centralized point for API governance and control. MuleSoft also integrates the APIs that Hasura doesn’t work with or that MuleSoft handles best, including workflow management and B2B EDI integration APIs.&lt;/p&gt;

&lt;p&gt;Hasura, on the other hand, is positioned as the data connectivity layer. It seamlessly connects to &lt;a href="https://hasura.io/learn/database/mysql/core-concepts/2-olap-vs-oltp/"&gt;OLTP and OLAP databases&lt;/a&gt; and provides the ability to perform API integration and transformation. Hasura simplifies API development by auto-generating a GraphQL API layer from existing databases, &lt;a href="https://hasura.io/docs/latest/getting-started/use-case/data-api/"&gt;enabling efficient data access&lt;/a&gt; and manipulation.&lt;/p&gt;

&lt;p&gt;The databases, represented in the architecture, can be OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing) databases. Hasura can handle both types, allowing developers to connect to and query the databases using GraphQL APIs.&lt;/p&gt;

&lt;p&gt;This architecture highlights the clear division of responsibilities between MuleSoft and Hasura, with MuleSoft focused on API gateway functionality and workflow management, while Hasura excels in data connectivity, API integration, and handling OLTP and OLAP databases.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use Hasura with MuleSoft
&lt;/h2&gt;

&lt;p&gt;Let’s look at the main scenarios where it makes sense to use Hasura with MuleSoft:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.  Simplified Data Access:&lt;/strong&gt; When your primary focus is on efficient data access and manipulation, especially with real-time requirements, Hasura's GraphQL engine offers a powerful solution. It is perfect for applications that necessitate real-time updates, collaborative editing, or interactive data querying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.  Hybrid Approaches:&lt;/strong&gt; In certain cases, a combination of MuleSoft and Hasura can be beneficial. For instance, you can leverage MuleSoft's integration capabilities to connect disparate systems and use Hasura as a data access layer, providing a unified GraphQL API over the integrated data sources. This approach combines the best of both platforms to deliver a seamless and efficient solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Combining MuleSoft and Hasura
&lt;/h3&gt;

&lt;p&gt;When considering the combination of MuleSoft and Hasura, it's essential to understand their complementary strengths and architectural implications.&lt;/p&gt;

&lt;p&gt;Here are some key scenarios where their combined usage can prove beneficial:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.  Hybrid Integration Scenarios:&lt;/strong&gt; MuleSoft excels in integrating heterogeneous systems, enabling seamless communication between various enterprise applications, cloud services, and databases. In scenarios where real-time data access and manipulation are required, Hasura can be used as a data access layer for instant GraphQL APIs over the existing databases. This combination allows for efficient data integration and real-time capabilities while leveraging MuleSoft's comprehensive connectivity and transformation features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.  Microservices Architecture:&lt;/strong&gt; In a microservices architecture, MuleSoft can serve as the integration layer, orchestrating communication between different microservices and providing a unified API gateway. Hasura can be used within the microservices themselves, generating GraphQL APIs to access and manipulate the respective microservice's data. This approach allows for decoupled microservices with their own GraphQL APIs, while MuleSoft ensures seamless communication and integration between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.  Accelerating API Development:&lt;/strong&gt; Hasura's automatic API generation capabilities provide developers with a rapid development experience. By integrating Hasura with MuleSoft, developers can leverage MuleSoft's powerful transformation and connectivity features to enrich the data returned by Hasura's GraphQL APIs. This combination allows for accelerated API development while ensuring data consistency, enrichment, and adherence to enterprise standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.  Real-time Data Synchronization:&lt;/strong&gt; Hasura's real-time capabilities make it ideal for scenarios where real-time data synchronization is required, such as collaborative applications or real-time dashboards. MuleSoft can integrate with Hasura to fetch and transform data from various sources, and Hasura's real-time subscriptions can enable instant updates and real-time notifications to connected clients.&lt;/p&gt;

&lt;h3&gt;
  
  
  Comparing MuleSoft and Hasura
&lt;/h3&gt;

&lt;p&gt;To better understand why combining Hasura and MuleSoft is so powerful, let’s compare the two systems. The contrast shows us how complementary they really are:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Features&lt;/th&gt;
&lt;th&gt;MuleSoft&lt;/th&gt;
&lt;th&gt;Hasura&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Integration type&lt;/td&gt;
&lt;td&gt;Enterprise Integration Platform&lt;/td&gt;
&lt;td&gt;Data Platform with auto-generated APIs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Connectivity&lt;/td&gt;
&lt;td&gt;Extensive connectors for various systems and protocols&lt;/td&gt;
&lt;td&gt;Connects to existing databases for API generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Message transformation&lt;/td&gt;
&lt;td&gt;Powerful data transformation capabilities&lt;/td&gt;
&lt;td&gt;Directly maps database schema to GraphQL API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Orchestration&lt;/td&gt;
&lt;td&gt;Robust visual flow designer for complex integration workflows&lt;/td&gt;
&lt;td&gt;N/A (focuses on API generation and real-time features)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Real-time capabilities&lt;/td&gt;
&lt;td&gt;Limited support for real-time updates and notifications&lt;/td&gt;
&lt;td&gt;GraphQL subscriptions for real-time data synchronization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rapid API development&lt;/td&gt;
&lt;td&gt;Requires configuration and development effort&lt;/td&gt;
&lt;td&gt;Auto-generates GraphQL APIs with minimal development effort&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data access control&lt;/td&gt;
&lt;td&gt;Provides fine-grained access control and authorization mechanisms&lt;/td&gt;
&lt;td&gt;Offers granular control over API permissions and policies with ABAC and authorization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;Designed for complex enterprise integrations&lt;/td&gt;
&lt;td&gt;Suitable for projects of all sizes, including startups and smaller applications&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning curve&lt;/td&gt;
&lt;td&gt;Steeper learning curve due to its extensive feature set&lt;/td&gt;
&lt;td&gt;Relatively easier to learn and get started with&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use cases&lt;/td&gt;
&lt;td&gt;Large-scale enterprise integrations involving diverse systems&lt;/td&gt;
&lt;td&gt;Modern applications requiring real-time updates and collaborative features&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When comparing MuleSoft and Hasura, it's essential to understand their differences in focus and functionality. MuleSoft shines in enterprise-level integrations, offering extensive connectors, data mapping, and transformation capabilities. It excels in large-scale deployments, legacy system modernization, and hybrid cloud scenarios. Hasura, on the other hand, is tailored for efficient data access and manipulation through GraphQL. It provides real-time capabilities, automatic schema generation, and authorization mechanisms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;MuleSoft and Hasura, with their distinct capabilities, can be harnessed together to create a comprehensive and efficient API ecosystem. MuleSoft excels in complex integrations, enterprise-level security, and large-scale deployments. Hasura simplifies data access and manipulation through GraphQL, providing real-time capabilities.&lt;/p&gt;

&lt;p&gt;By combining MuleSoft's powerful integration capabilities with Hasura's rapid API development and real-time capabilities, organizations can achieve enhanced agility, seamless data flow, and improved user experiences. By architecting solutions with MuleSoft and Hasura and adding their complementary strengths to align them with your specific use cases, you can unlock new possibilities in API development and integration, empowering your organization to innovate and thrive in the digital era.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://cloud.hasura.io/signup"&gt;Sign up&lt;/a&gt; now for Hasura Cloud to get started!
&lt;/h3&gt;

</description>
      <category>mulesoft</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Breaking up monolith into microservices with Hasura</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Tue, 18 Jul 2023 14:55:36 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/breaking-up-monolith-into-microservices-with-hasura-4fkd</link>
      <guid>https://forem.com/hasurahq_staff/breaking-up-monolith-into-microservices-with-hasura-4fkd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VzVsdaOi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/image1-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VzVsdaOi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/image1-1.png" alt="Breaking up monolith into microservices with Hasura" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Unleashing scalability and agility&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As applications grow in size and complexity, the monolithic architecture that once served them well can become a bottleneck.&lt;/strong&gt; Monolithic applications are difficult to scale, maintain, and deploy. Microservices, on the other hand, offer a more flexible and scalable architecture that allows for independent development and deployment of smaller, focused components.&lt;/p&gt;

&lt;p&gt;In this blog post, we will explore how Hasura can help break up a monolithic application into microservices, enabling a more &lt;a href="https://hasura.io/blog/unlocking-advanced-api-strategies-with-hasura/"&gt;efficient and scalable architecture&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding monoliths and microservices
&lt;/h2&gt;

&lt;p&gt;Before diving into the migration process, it is important to understand the key differences between monolithic and microservices architectures.&lt;/p&gt;

&lt;p&gt;In a monolithic architecture, the entire application is built as a single, tightly-coupled unit. This makes it difficult to isolate and scale individual components independently. On the other hand, microservices architecture breaks down the application into small, autonomous services that can be developed, deployed, and scaled independently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cdnFQ465--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/NxcT3hBmeQca4-6oMk0Q3RA6Ff3iL30FgvcLM-N7H5si-HCOJmEBF0ivBDI6_5aT5EVbAObOGSjdgUU5I_LkIIVmFbtVfMGr7o2FFGPP0p13yLI9J7Qr5JKd41Kt1j4-nuHwQ7mC_8vH3R7K4j5qedY" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cdnFQ465--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/NxcT3hBmeQca4-6oMk0Q3RA6Ff3iL30FgvcLM-N7H5si-HCOJmEBF0ivBDI6_5aT5EVbAObOGSjdgUU5I_LkIIVmFbtVfMGr7o2FFGPP0p13yLI9J7Qr5JKd41Kt1j4-nuHwQ7mC_8vH3R7K4j5qedY" alt="Breaking up monolith into microservices with Hasura" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s look at the detailed process of breaking a monolithic application into microservices step-by-step:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Identify microservice boundaries
&lt;/h3&gt;

&lt;p&gt;The first step in breaking up a monolith is to identify the boundaries of your microservices. This involves analyzing the existing monolithic application to identify cohesive and loosely coupled modules that can be decoupled and deployed as independent services. Each microservice should have its own bounded context and serve a specific business capability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create GraphQL APIs with Hasura
&lt;/h3&gt;

&lt;p&gt;Hasura provides a powerful GraphQL engine that can act as a gateway to your microservices. Instead of exposing each microservice's API directly to clients, you can create a unified GraphQL API using Hasura. Hasura simplifies this process by automatically generating the GraphQL schema based on the underlying data sources. You can leverage Hasura's console or configuration files to define relationships, access control rules, and &lt;a href="https://hasura.io/blog/introducing-actions/"&gt;custom business logic.&lt;/a&gt;&lt;/p&gt;
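&lt;p&gt;As a minimal sketch of what that unified API looks like from the client side (the endpoint URL and the &lt;code&gt;orders&lt;/code&gt; root field here are hypothetical), each microservice's data is reached through one Hasura GraphQL endpoint and one request shape:&lt;/p&gt;

```javascript
// Sketch: querying the unified Hasura GraphQL endpoint from Node.
// The endpoint URL and the `orders` field are illustrative assumptions.
const HASURA_ENDPOINT = 'https://my-hasura.example.com/v1/graphql';

// Build the JSON payload a GraphQL endpoint expects: query + variables.
function buildGraphQLRequest(query, variables = {}) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  };
}

const request = buildGraphQLRequest(
  `query Orders($limit: Int!) {
     orders(limit: $limit) { id order_total }
   }`,
  { limit: 10 }
);

// fetch(HASURA_ENDPOINT, request) would execute it; here we only
// inspect the payload that would be sent.
console.log(JSON.parse(request.body).variables.limit); // 10
```

&lt;p&gt;The same payload shape works no matter which underlying microservice ultimately serves the &lt;code&gt;orders&lt;/code&gt; field.&lt;/p&gt;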

&lt;h3&gt;
  
  
  Step 3: Extract microservices from the monolith
&lt;/h3&gt;

&lt;p&gt;With the &lt;a href="https://hasura.io/blog/graphql-microservices-with-hasura/"&gt;microservice boundaries&lt;/a&gt; defined and the GraphQL API set up, you can start extracting individual microservices from the monolithic application. This can be done incrementally by identifying a well-defined module or functionality within the monolith and moving it into its own service. Hasura's GraphQL API acts as a bridge between the existing monolith and the newly created microservices, allowing for seamless integration and coexistence.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Refactor and redesign
&lt;/h3&gt;

&lt;p&gt;During the extraction process, you may need to &lt;a href="https://hasura.io/blog/how-hasura-works/"&gt;refactor and redesign&lt;/a&gt; certain components to ensure they align with microservices principles. This may involve restructuring databases, revisiting domain models, and optimizing communication patterns between microservices. Hasura's powerful data modeling capabilities, such as handling relationships and custom resolvers, can assist in this process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Deployment and scalability
&lt;/h3&gt;

&lt;p&gt;Once the microservices are extracted and properly designed, you can deploy them individually using your preferred deployment strategy. Hasura integrates well with popular containerization technologies like Docker and orchestration platforms like Kubernetes, making it easier to deploy and scale microservices. Each microservice can have its own independent deployment pipeline, enabling continuous delivery and reducing the risk associated with deploying changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Monitor and maintain
&lt;/h3&gt;

&lt;p&gt;Monitoring and maintaining microservices is crucial to ensure their health and performance. Hasura provides monitoring and logging capabilities that can be leveraged to gain insights into the usage, performance, and error handling of your microservices. By utilizing these tools, you can proactively identify and address any issues that may arise, ensuring the smooth operation of your distributed application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Breaking up a monolithic application into microservices can be a complex undertaking, but with the help of Hasura, it becomes more manageable. Hasura's GraphQL engine simplifies the creation of a unified API layer for your microservices, enabling seamless integration and coexistence with the existing monolith.   &lt;/p&gt;

&lt;p&gt;By following the steps outlined in this blog post, you can transform your monolithic application into a scalable and flexible microservices architecture, unlocking the benefits of independent development, deployment, and scalability.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://cloud.hasura.io/signup"&gt;&lt;strong&gt;Sign up&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;now for Hasura Cloud to get started for free!&lt;/strong&gt;
&lt;/h3&gt;

</description>
      <category>microservices</category>
      <category>graphqlapis</category>
    </item>
    <item>
      <title>The complexity of building a GraphQL API permissions layer and how Hasura solves this</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Wed, 05 Jul 2023 09:17:26 +0000</pubDate>
      <link>https://forem.com/hasurahq/the-complexity-of-building-a-graphql-api-permissions-layer-and-how-hasura-solves-this-2j3l</link>
      <guid>https://forem.com/hasurahq/the-complexity-of-building-a-graphql-api-permissions-layer-and-how-hasura-solves-this-2j3l</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1ToPDh1z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/authz-permissions.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1ToPDh1z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/07/authz-permissions.png" alt="The complexity of building a GraphQL API permissions layer and how Hasura solves this" width="800" height="946"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;API security breaches are on the rise. &lt;a href="https://www.gartner.com/en/documents/4009103"&gt;Gartner&lt;/a&gt; predicts that by 2025, insecure APIs will account for more than 50% of data theft incidents. As enterprises continue to embrace an API-driven approach to software development (for all the benefits it brings), arming their developers with the tools to build secure APIs with proper data access and authorization logic needs to be a priority.&lt;/p&gt;

&lt;p&gt;Building an authorization layer involves many factors. In GraphQL, authorization belongs in the business logic layer and not typically inside resolvers. It is more complex to write an authorization layer for GraphQL APIs than REST APIs. In this blog, we will see what it takes to build authorization logic for a DIY GraphQL server to understand how it is more complex. We will also compare it with Hasura and how it solves this complexity by being declarative and leveraging predicate pushdown.&lt;/p&gt;

&lt;p&gt;Here's a quick TL;DR summary before diving deep into this post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iX2uwwFj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4psljrxrpxedl2ggtmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iX2uwwFj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4psljrxrpxedl2ggtmo.png" alt="Image description" width="626" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key items to consider
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Data modeling
&lt;/h3&gt;

&lt;p&gt;Data models capture key information about the types of data in your application and the relationships between them. This information determines the kind of authorization system that needs to be built.&lt;/p&gt;

&lt;p&gt;For example, suppose users of the application can view the public data of all users but only their own private data. The data model then contains the table columns and relationships relevant to that private data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Roles and attributes
&lt;/h3&gt;

&lt;p&gt;The common ways to model an authorization system are RBAC (role-based access control) and ABAC (attribute-based access control).&lt;/p&gt;

&lt;p&gt;You could start by defining roles for the application and listing out the use cases and privileges for each. RBAC can be flat, hierarchical, constrained, or symmetrical. Authorization can also be modeled on attributes: the user who logs in to the application has certain attributes (the type of user, the type of resource, and the environment in which they are accessing it) that can be checked before allowing access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nested rules
&lt;/h3&gt;

&lt;p&gt;The GraphQL schema may or may not be a direct map to the data models. In some cases, the data model extends to multiple data sources. Even within the same data source, the GraphQL query could be nested with multiple fields spanning relationships. Applying authorization logic contextually to the nested query is vital.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
  users {
    id
    name
    orders { // apply permission rule here too
      id
      order_total
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;orders&lt;/code&gt; is nested inside the &lt;code&gt;users&lt;/code&gt; relationship. The authorization logic needs to apply contextually to &lt;code&gt;users&lt;/code&gt;, to &lt;code&gt;orders&lt;/code&gt;, and to any deeper level of nesting in the query.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;p&gt;Ideally, authorization checks shouldn’t add much overhead to response latency. In reality, when you write a custom GraphQL server yourself, checks run after the data-fetching operation, so a lot of unnecessary data is fetched. At the scale of millions of requests, this slows down the database and, in turn, API performance.&lt;/p&gt;

&lt;h4&gt;
  
  
  Predicate pushdown
&lt;/h4&gt;

&lt;p&gt;If the authorization logic is heavily dependent on the data being fetched, it is important to do a predicate pushdown of the authorization rules.&lt;/p&gt;

&lt;p&gt;For example: if you are fetching orders placed by a user on an e-commerce app, the rule should be applied in the query that goes to the database, so that only the orders placed by the requesting user are fetched. This is not only faster, but also the most secure way to apply authorization logic.&lt;/p&gt;
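&lt;p&gt;The difference can be sketched in a few lines (the table and column names are illustrative, not from any real schema): without pushdown you fetch everything and filter in application code; with pushdown the session user's ID becomes part of the SQL itself:&lt;/p&gt;

```javascript
// Sketch: predicate pushdown vs. post-fetch filtering.
// Table and column names are illustrative assumptions.

// Without pushdown: fetch all rows, then filter in the API layer.
function filterAfterFetch(allOrders, sessionUserId) {
  return allOrders.filter((o) => o.user_id === sessionUserId);
}

// With pushdown: the authorization predicate is part of the query,
// so the database only ever returns the permitted rows.
function buildPushedDownQuery(sessionUserId) {
  return {
    text: 'SELECT id, order_total FROM orders WHERE user_id = $1',
    values: [sessionUserId], // parameterized to avoid SQL injection
  };
}

const q = buildPushedDownQuery(42);
console.log(q.values[0]); // 42
```

&lt;p&gt;The first version moves every row over the wire before discarding most of them; the second never fetches rows the user isn't allowed to see.&lt;/p&gt;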

&lt;h2&gt;
  
  
  Building a DIY authorization layer for a GraphQL API
&lt;/h2&gt;

&lt;p&gt;You can build an AuthZ layer using middleware libraries. The maturity of AuthZ libraries in GraphQL depends on the language or framework you are using.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is building an authorization layer complex?
&lt;/h3&gt;

&lt;p&gt;When you are writing your own GraphQL server with custom authorization rules, there are a number of methods to write this logic, depending on the use case. You have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API-wide authorization&lt;/li&gt;
&lt;li&gt;Resolver-based authorization&lt;/li&gt;
&lt;li&gt;Schema / model-based authorization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Resolver-based authorization quickly balloons into a lot of boilerplate code if authorization rules are applied to every field. Even with repeated rules and patterns that can be factored out, securing your application can easily expand to thousands of lines of code, and this code becomes difficult to maintain.&lt;/p&gt;

&lt;p&gt;In schema-based authorization, the authorization logic depends on the GraphQL schema and is independent of the underlying database, data-fetching libraries, and ORMs.&lt;/p&gt;

&lt;p&gt;Authorization in GraphQL is typically built using the &lt;strong&gt;context&lt;/strong&gt; object that is available on every request. The context is a value that is provided to every resolver and is created at the start of a GraphQL server request. This means you can add authentication and authorization details to the context, such as user data.&lt;/p&gt;

&lt;p&gt;Here’s how a typical AuthZ context gets passed in a DIY GraphQL Server written in Node.js:&lt;/p&gt;

&lt;h3&gt;
  
  
  Parsing and validating JWT token
&lt;/h3&gt;

&lt;p&gt;The first step is to parse and validate the incoming JWT. Once the token is verified, you can pass its decoded claims to the context. We use JWT as the example here because it is universal and works across platforms.&lt;/p&gt;

&lt;p&gt;An example request:&lt;/p&gt;

&lt;p&gt;Endpoint: myapp.com/v1/graphql&lt;/p&gt;

&lt;p&gt;Headers: &lt;code&gt;Authorization: Bearer &amp;lt;token&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Authorization&lt;/code&gt; header is parsed, validated, and verified. The underlying authentication service used could be any solution that issues JWT. Here’s some example code that verifies a JWT token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;try {
    if (token) {
        return jwt.verify(token, YOUR_SECRET_KEY);
    }
    return null;
} catch (err) {
    return null;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example taken from a DIY GraphQL Server (Apollo) to parse the context:&lt;/p&gt;

&lt;p&gt;Create a new instance of Apollo Server by passing in the type definitions and resolvers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const server = new ApolloServer&amp;lt;MyContext&amp;gt;({
    typeDefs,
    resolvers,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a standalone server that retrieves the token from headers and returns the &lt;code&gt;user&lt;/code&gt; information as context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {
    url
} = await startStandaloneServer(server, {
    // Note: This example uses the `req` argument to access headers,
    // but the arguments received by `context` vary by integration.
    context: async ({
        req,
        res
    }) =&amp;gt; {
        // Get the user token from the headers.
        const token = req.headers.authorization || '';
        // Try to retrieve a user with the token
        const user = await getUser(token);
        // Add the user to the context
        return {
            user
        };
    },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Passing context
&lt;/h3&gt;

&lt;p&gt;Once the token is extracted, you need to pass the context object to every resolver that will get executed. Now all of your resolver code gets access to the context.&lt;/p&gt;

&lt;p&gt;In the above example, we can see that the &lt;code&gt;user&lt;/code&gt; data is passed to the context.&lt;/p&gt;

&lt;h3&gt;
  
  
  API-wide authorization
&lt;/h3&gt;

&lt;p&gt;There are a few rules that might need to be enforced at the request level, even before passing it on to the GraphQL resolvers.&lt;/p&gt;

&lt;p&gt;For example, you could block a user from performing any queries and return a 401, unauthorized error. Again, this involves code logic and the logic could become a lot of boilerplate if there are many rules.&lt;/p&gt;
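&lt;p&gt;As a rough sketch of that request-level guard (the header parsing is simplified, and &lt;code&gt;verifyToken&lt;/code&gt; is a stand-in for real JWT verification), the check short-circuits before any resolver runs:&lt;/p&gt;

```javascript
// Sketch of an API-wide guard that runs before GraphQL execution.
// `verifyToken` is a stand-in for real JWT verification.
function verifyToken(token) {
  return token === 'valid-token' ? { id: 1, role: 'user' } : null;
}

function apiWideAuth(req) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  const user = verifyToken(token);
  if (!user) {
    // Reject the whole request before any resolver executes.
    return { status: 401, body: { error: 'unauthorized' } };
  }
  return null; // null means: continue to the GraphQL layer
}

const rejected = apiWideAuth({ headers: {} });
console.log(rejected.status); // 401
```

&lt;p&gt;Every additional API-wide rule becomes another branch in this guard, which is where the boilerplate starts to accumulate.&lt;/p&gt;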

&lt;h3&gt;
  
  
  Resolver-level authorization
&lt;/h3&gt;

&lt;p&gt;As the context gets attached to each resolver, it is now possible to authorize the request inside the resolver. This method is only suitable for basic authorization logic, when there are only a few resolvers and a few rules to check when authorizing users.&lt;/p&gt;

&lt;p&gt;For example: You have a users field that returns a list of user names. You will end up writing code, which looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users: (parent, args, contextValue) =&amp;gt; {
    // In this case, we'll pretend there is no data when
    // we're not logged in. Another option would be to
    // throw an error.
    if (!contextValue.user) return null;
    return ['bob', 'jake'];
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;contextValue&lt;/code&gt; object is now available for parsing and for authorizing the user.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: This is resolver logic for one field. Imagine repeating logic code in every single resolver and field. It is very challenging to scale this. It is also very challenging to update any logic quickly.&lt;/p&gt;
&lt;/blockquote&gt;
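&lt;p&gt;One common way to tame that repetition, shown here only as a rough sketch, is a higher-order wrapper that performs the check once and is reused across resolvers (a real application would throw a GraphQL error rather than return &lt;code&gt;null&lt;/code&gt;):&lt;/p&gt;

```javascript
// Higher-order resolver wrapper: the auth check lives in one place.
// (Sketch only; a real app would throw a GraphQL error on failure.)
function requireUser(resolver) {
  return (parent, args, contextValue) => {
    if (!contextValue.user) return null;
    return resolver(parent, args, contextValue);
  };
}

const resolvers = {
  users: requireUser(() => ['bob', 'jake']),
  orders: requireUser((_p, _a, ctx) => [{ id: 1, user_id: ctx.user.id }]),
};

console.log(resolvers.users({}, {}, { user: { id: 1 } })); // [ 'bob', 'jake' ]
console.log(resolvers.users({}, {}, {})); // null
```

&lt;p&gt;This helps with the simple "is there a user?" check, but per-field, data-dependent rules still end up as bespoke code inside each wrapped resolver.&lt;/p&gt;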

&lt;h3&gt;
  
  
  GraphQL schema-based authorization
&lt;/h3&gt;

&lt;p&gt;In a large schema with plenty of data-based authorization, there are patterns of rules that apply to multiple queries. For example: allow users to fetch their own data and no one else's.&lt;/p&gt;

&lt;p&gt;Here’s an example from the GraphQL AuthZ library, using the schema as a data source.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// using schema as a data source inside pre-execution rule
const CanPublishPost = preExecRule()(async (context, fieldArgs) =&amp;gt; {
    const graphQLResult = await graphql({
        schema: context.schema,
        source: `query post($postId: ID!) { post(id: $postId) { author { id } } }`,
        variableValues: {
            postId: fieldArgs.postId
        }
    })
    const post = graphQLResult.data?.post
    return post &amp;amp;&amp;amp; post.author.id === context.user?.id
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The return statement at the end of the snippet above is where the user-ID check is written.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Hasura’s authorization layer work?
&lt;/h2&gt;

&lt;p&gt;Hasura has a powerful authorization engine that allows developers to declaratively define fine-grained permissions and policies to restrict access to only particular elements of the data based on the session information in an API call.&lt;/p&gt;

&lt;p&gt;Implementing proper data access control rules into the handwritten APIs is painstaking work. By some estimates, access control and authorization code can make up to 80% of the business logic in an API layer. Securing GraphQL is even harder because of the flexible nature of the query language. Hasura radically simplifies the effort needed to build authorization logic into APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Declarative
&lt;/h3&gt;

&lt;p&gt;With Hasura, you can transparently and declaratively define roles, and what each role is allowed to access in the metadata configuration. This can be done either through the Hasura Console or programmatically through the Hasura CLI. This declarative approach to authorization is simpler to create, maintain, evolve, and audit for developers and security teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fine-grained access control
&lt;/h3&gt;

&lt;p&gt;Hasura supports a role-based access control system. Access control rules can be applied to all the CRUD operations. You define permissions granularly on the schema, sessions, and data (table, row, and column).&lt;/p&gt;

&lt;p&gt;For every role you create, Hasura automatically publishes a different GraphQL schema that represents the right queries, fields, and mutations that are available to that role. Every operation will use the request context to further apply permissions rules on the data.&lt;/p&gt;

&lt;p&gt;Authorization rules are conditions that can span any property of the JSON data graph and its methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any property of the data, spanning relationships. E.g.: allow access if &lt;code&gt;document.collaborators.editors&lt;/code&gt; contains &lt;code&gt;current_user.id&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Any property of the user accessing the data. E.g.: allow access if &lt;code&gt;accounts.organization.id&lt;/code&gt; is equal to &lt;code&gt;current_user.organization_id&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Rules can mask, tokenize, or encrypt portions of the data model or the data returned by a method. Sets of rules are labeled, and these labels are called "roles."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V48jwCq6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/MyG4i7A6dPyA3AmuPXaJCEaNfAmz0wz-BhnyTKbGJk8_4LDijDp_WvUlNQU4RgmggFHFPiAYulaQUOcQZ1qViKYgvjzn2Q-X7PBGwcgC5YRKXpwRVo-sFEcLryRWpxRyuPr4rzsM1xNfR8IXP4WtAaw" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V48jwCq6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/MyG4i7A6dPyA3AmuPXaJCEaNfAmz0wz-BhnyTKbGJk8_4LDijDp_WvUlNQU4RgmggFHFPiAYulaQUOcQZ1qViKYgvjzn2Q-X7PBGwcgC5YRKXpwRVo-sFEcLryRWpxRyuPr4rzsM1xNfR8IXP4WtAaw" alt="The complexity of building a GraphQL API permissions layer and how Hasura solves this" width="800" height="946"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Easily integrate with your authentication system
&lt;/h3&gt;

&lt;p&gt;Authentication is handled outside of Hasura, and you can bring in your own authentication server or integrate any authentication provider that supports JSON Web Token (JWT). If your authentication provider does not support JWT, or you want to handle authentication manually, you can use webhooks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"By using Hasura, we cut the development time in half and built our product in three months. The built-in role-based authorization system made it easy to secure our data."&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Mark Erdmann, Software Engineer, Pulley&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Predicate pushdown
&lt;/h3&gt;

&lt;p&gt;Hasura automatically pushes down the authorization check to the data query itself, which provides a significant performance boost and cost savings by avoiding additional lookups and unnecessary data egress, especially at larger scale.&lt;/p&gt;

&lt;p&gt;Hasura automates predicate pushdown: it is essentially a JIT compiler that dynamically applies the filter in the &lt;code&gt;WHERE&lt;/code&gt; clause of the SQL query based on the user running the query. Most GraphQL server frameworks require you to write plenty of boilerplate code to achieve the same predicate pushdown authorization check.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CqE0IGOJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/kxwq2X1k_6dX0CHJ9JwnpjRImSgV73iRtaGEONBBfywlLXnXdleu-FuCFTittWkwqL6oJ5-D6tr-7e6oFJx0GTY9UDyF1bUHTiqNZZ9-QnFwrMJGvdbBNkhtIDSGXvE9o-3F75SsQIFeBt-N2F4Wri4" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CqE0IGOJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/kxwq2X1k_6dX0CHJ9JwnpjRImSgV73iRtaGEONBBfywlLXnXdleu-FuCFTittWkwqL6oJ5-D6tr-7e6oFJx0GTY9UDyF1bUHTiqNZZ9-QnFwrMJGvdbBNkhtIDSGXvE9o-3F75SsQIFeBt-N2F4Wri4" alt="The complexity of building a GraphQL API permissions layer and how Hasura solves this" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-source authorization
&lt;/h3&gt;

&lt;p&gt;Hasura integrates authorization rules based on data and entitlements in different sources. Hasura forwards the resolved values as headers to your external services, which makes it easy to apply authorization rules in your external service. Again, this is made possible by Hasura’s declarative authorization system.&lt;/p&gt;

&lt;h2&gt;
  
  
  OWASP Top 10
&lt;/h2&gt;

&lt;p&gt;OWASP is most famous for the “Top Ten” framework for structuring secure applications. As the industry expands into a microservice-driven approach, it’s important for organizations to validate all of their dependencies according to the OWASP framework.&lt;/p&gt;

&lt;p&gt;Hasura’s security-first approach ensures that the Top 10 security features and criteria are fulfilled. Read more: &lt;a href="https://hasura.io/blog/owasp-samm-and-hasura/"&gt;How Hasura addresses the OWASP Top 10 concerns&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When you are building your own GraphQL server and writing authorization logic, you will need to ensure that the Top 10 concerns are handled to be secure and compliant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try out Hasura
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cloud.hasura.io/signup"&gt;Sign up&lt;/a&gt; to start your journey with Hasura Cloud today. If you are an enterprise looking to learn more about how Hasura fits in your app modernization strategy, reach out to us through the &lt;a href="https://hasura.io/contact-us/"&gt;Contact Us form&lt;/a&gt; and our team will get back to you.&lt;/p&gt;

</description>
      <category>authorization</category>
    </item>
    <item>
      <title>Boosting database interactivity and developer productivity with Hasura Native Queries and Logical Models</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Thu, 29 Jun 2023 16:28:28 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/boosting-database-interactivity-and-developer-productivity-with-hasura-native-queries-and-logical-models-1ng</link>
      <guid>https://forem.com/hasurahq_staff/boosting-database-interactivity-and-developer-productivity-with-hasura-native-queries-and-logical-models-1ng</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0UiR-DPv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/5nxZiaErAoGG5vCCjlJLd9ca7uISN2xmKgQNm8-CPZT4U3UDxUbJvWIMGiaSVBPGoWcgEciRWsvQhpAnhMtLHEquxk_zFKKB2VmUITlYUHYmDteoaKcbLR9GLiLCrcDV5fV9Co5Pefq10xQRTaSGGnw" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0UiR-DPv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh4.googleusercontent.com/5nxZiaErAoGG5vCCjlJLd9ca7uISN2xmKgQNm8-CPZT4U3UDxUbJvWIMGiaSVBPGoWcgEciRWsvQhpAnhMtLHEquxk_zFKKB2VmUITlYUHYmDteoaKcbLR9GLiLCrcDV5fV9Co5Pefq10xQRTaSGGnw" alt="Boosting database interactivity and developer productivity with Hasura Native Queries and Logical Models" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API development has seen a significant surge in demand&lt;/strong&gt; as organizations across the globe strive to harness the power of data. The Hasura GraphQL engine automates API creation from existing database objects like tables, views, functions, and stored procedures.&lt;/p&gt;

&lt;p&gt;For developers, this automation leads to significant time savings, freeing them from the hassles of manually developing APIs from scratch. It also brings in the benefits of GraphQL, such as type safety, real-time subscriptions, and performance enhancements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use Native Queries and Logical Models
&lt;/h2&gt;

&lt;p&gt;The Hasura GraphQL Engine has gained popularity for its ability to automatically generate a GraphQL API around database objects, providing seamless querying, mutating, and subscribing to data changes. However, there are instances when you require more custom or advanced functionality that goes beyond the out-of-the-box capabilities. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the full power of SQL that Hasura might not provide access to through the typical table API, such as GROUP BY, window functions, or scalar functions.&lt;/li&gt;
&lt;li&gt;Provide custom arguments to the users of your API to greatly expand its flexibility.&lt;/li&gt;
&lt;li&gt;Encapsulate sophisticated filtering with a query, allowing your users to provide a single argument rather than having to understand how to manipulate the data.&lt;/li&gt;
&lt;li&gt;Work with the advanced features of your database to improve performance.&lt;/li&gt;
&lt;li&gt;Write a compatibility layer around tables, making it easier to change your API without breaking existing clients.&lt;/li&gt;
&lt;li&gt;Reduce duplication by moving common data manipulation into one place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hasura's Native Queries feature provides a powerful tool for enhancing your GraphQL API with the flexibility and control of raw SQL queries. By leveraging Native Queries, you can create custom and advanced behavior in your Hasura-generated GraphQL schema without the need for additional database objects or DDL privileges. This enables you to unlock the full potential of SQL while building robust and efficient applications with Hasura.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Native Queries work
&lt;/h2&gt;

&lt;p&gt;The Hasura GraphQL Engine integrates seamlessly with Native Queries. Here's an overview of how it works:&lt;/p&gt;

&lt;h3&gt;
  
  
  Defining Native Queries
&lt;/h3&gt;

&lt;p&gt;To create a Native Query, you write raw SQL statements directly within Hasura. A query like the one shown below can take arguments using the syntax &lt;code&gt;{{argument_name}}&lt;/code&gt;. These queries can be as simple or complex as needed, incorporating SQL features such as joins, aggregations, and custom business logic.&lt;/p&gt;
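&lt;p&gt;For illustration only, the &lt;code&gt;{{argument_name}}&lt;/code&gt; placeholders map to values supplied when the query is called. Hasura binds arguments as proper query parameters; this naive string substitution merely demonstrates the templating syntax (the table and argument names are made up):&lt;/p&gt;

```javascript
// Sketch of the {{argument_name}} templating idea behind Native Queries.
// Hasura binds arguments as real query parameters; this naive string
// substitution only illustrates the syntax.
function renderNativeQuery(sql, args) {
  return sql.replace(/\{\{(\w+)\}\}/g, (_, name) => String(args[name]));
}

const sql = 'SELECT name, total FROM orders WHERE total > {{min_total}}';
console.log(renderNativeQuery(sql, { min_total: 100 }));
// SELECT name, total FROM orders WHERE total > 100
```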

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5d3aPlHW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/8YgjtNM4O-qENWw-2_FwI-3v39LtrRn3-TeneOJpKSvtkLoaAdivVKAvjjgKuqZ3yLjOVkNk6RWBNBzm3v2HrEmtIjEdscNAsWqe61UrIAJdEriXIvKDuGCK9UmCSlQaLBzyCd2edEH9XUStJcHWUrg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5d3aPlHW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh6.googleusercontent.com/8YgjtNM4O-qENWw-2_FwI-3v39LtrRn3-TeneOJpKSvtkLoaAdivVKAvjjgKuqZ3yLjOVkNk6RWBNBzm3v2HrEmtIjEdscNAsWqe61UrIAJdEriXIvKDuGCK9UmCSlQaLBzyCd2edEH9XUStJcHWUrg" alt="Boosting database interactivity and developer productivity with Hasura Native Queries and Logical Models" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Binding to GraphQL Schema using Logical Models
&lt;/h3&gt;

&lt;p&gt;After defining a Native Query, you can bind it to your Hasura-generated GraphQL schema. This automatically exposes the Native Query as a GraphQL API endpoint, making it accessible to your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--axkYHyUM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/diLCe3rD6ViMvkp4TMGROuPl1BtnmlhEoWQa8VMc73xf3ZH-GX-QgbnqZbs2p6D2wY3tlPMneytiyphDc4QMETH3h2CBev5cZxClVWOo_WpwXptnUIwDRcvMAOl1WVOddJnbFbeGiocxBjiiUUxqorA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--axkYHyUM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/diLCe3rD6ViMvkp4TMGROuPl1BtnmlhEoWQa8VMc73xf3ZH-GX-QgbnqZbs2p6D2wY3tlPMneytiyphDc4QMETH3h2CBev5cZxClVWOo_WpwXptnUIwDRcvMAOl1WVOddJnbFbeGiocxBjiiUUxqorA" alt="Boosting database interactivity and developer productivity with Hasura Native Queries and Logical Models" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Executing Native Queries:&lt;/strong&gt; When a request is made to the Native Query endpoint via GraphQL, the Hasura GraphQL Engine translates the GraphQL query into an SQL query and executes it against the database.&lt;/p&gt;

&lt;p&gt;The results are then transformed into GraphQL response format and returned to the client.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;root field name&amp;gt;(
[args: {"&amp;lt;argument name&amp;gt;": &amp;lt;argument value&amp;gt;, ...},]
[where: ...,]
[order_by: ..., distinct_on: ...,]
[limit: ..., offset: ...]
) {
&amp;lt;field 1&amp;gt;
&amp;lt;field 2&amp;gt;
...
}
}```




&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Developers can leverage Native Queries using the Hasura GraphQL API and all the features that come with it – this includes pagination, filtering, sorting, aggregations, caching, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;This is where Native Queries come into play, empowering you to leverage the full power of SQL within Hasura while maintaining flexibility and control over your GraphQL schema. Native Queries enable you to automatically generate a GraphQL API around raw SQL queries, offering a range of benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility:&lt;/strong&gt; By utilizing Native Queries, you can incorporate custom SQL logic into your GraphQL schema, allowing for complex database operations and tailored responses. This flexibility enables you to meet specific requirements that may not be achievable through standard GraphQL queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control:&lt;/strong&gt; Native Queries give you greater control over your Hasura-generated GraphQL schema. Instead of relying solely on the automatic generation of GraphQL API from the database schema, you can shape the GraphQL schema around your SQL queries, providing a more fine-tuned interface for interacting with your data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoiding DDL Privileges:&lt;/strong&gt; With Native Queries, you no longer need to create additional database objects that require Data Definition Language (DDL) privileges. This simplifies the development process by reducing the dependencies and permissions required for deploying and maintaining your application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Abstraction:&lt;/strong&gt; Defining the data model in Hasura lets you shape a schema that best fits the needs of your GraphQL API, decoupling it from the underlying database schema. This enables the API and the database structure to evolve independently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Database Compatibility:&lt;/strong&gt; Modeling data in the middleware layer can facilitate cross-database compatibility. The middleware can act as an abstraction layer, translating GraphQL queries into the appropriate database-specific queries for different target databases. This allows for greater flexibility in choosing and changing the underlying database technology.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the realm of GraphQL development, choosing the right approach for modeling data is crucial for building efficient, scalable, and flexible applications. Traditionally, developers have relied on creating database artifacts like views, user-defined functions (UDFs), and stored procedures.&lt;/p&gt;

&lt;p&gt;However, solutions like Hasura represent a paradigm shift in the way data is leveraged by app developers and other data consumers across organizations. Native Queries and Logical Models are a game-changer: they empower developers to unlock unprecedented agility and flexibility in their GraphQL solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;Native Queries currently support read-only query capabilities for Postgres, SQL Server, and BigQuery. Support for more databases and for mutations is coming soon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Modeling data in Hasura represents a paradigm shift in GraphQL development, enabling developers to embrace agility, flexibility, and efficiency. By leveraging Hasura's automatic API generation, flexibility in schema definition, rapid development capabilities, access control mechanisms, performance optimizations, and integration capabilities, developers can build scalable and resilient applications without being bound by the limitations of traditional database-centric approaches.&lt;/p&gt;

&lt;p&gt;As the GraphQL ecosystem continues to evolve, embracing tools like Hasura will undoubtedly become the preferred choice for developers seeking to unlock the full potential of GraphQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 Documentation and Resources
&lt;/h2&gt;

&lt;p&gt;To help you get started, we've prepared detailed documentation, guides, and examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://hasura.io/docs/latest/schema/postgres/logical-models/native-queries/"&gt;Postgres Native Queries Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hasura.io/docs/latest/schema/ms-sql-server/logical-models/native-queries/"&gt;SQL Server Native Queries Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hasura.io/docs/latest/schema/bigquery/logical-models/native-queries/"&gt;BigQuery Native Queries&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🚀 Get Started Today!
&lt;/h2&gt;

&lt;p&gt;We can't wait to see the amazing applications you'll build using Hasura and Native Queries. Get started today by signing up for &lt;a href="https://cloud.hasura.io/signup"&gt;Hasura Cloud&lt;/a&gt; and connecting to one of the supported databases.&lt;/p&gt;

&lt;p&gt;If you have any questions or need assistance, feel free to reach out to our team on &lt;a href="https://discord.com/invite/hasura"&gt;Discord&lt;/a&gt; or &lt;a href="https://github.com/hasura/graphql-engine/issues"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy building! 🎉&lt;/p&gt;

</description>
      <category>nativequeries</category>
      <category>logicalmodels</category>
    </item>
    <item>
      <title>“We got documentation feedback on…transferring wealth?”</title>
      <dc:creator>Hasura</dc:creator>
      <pubDate>Thu, 29 Jun 2023 14:28:20 +0000</pubDate>
      <link>https://forem.com/hasurahq_staff/we-got-documentation-feedback-ontransferring-wealth-21cc</link>
      <guid>https://forem.com/hasurahq_staff/we-got-documentation-feedback-ontransferring-wealth-21cc</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aWbVB_ar--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/06/docs-template-feature.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aWbVB_ar--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://hasura.io/blog/content/images/2023/06/docs-template-feature.png" alt="“We got documentation feedback on…transferring wealth?”" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A few weeks ago, I was plugging away at something significant&lt;/strong&gt; (read: procrastinating by looking for ways of automating some repetitive task in an inane way) when I heard the familiar "pop-pop-pop" notification from Slack. Like many of you, Slack sits permanently open on my machine, every ping like a mosquito needing to be swatted away. However, this one got my attention.&lt;/p&gt;

&lt;p&gt;In our #docs-feedback channel, which we connected to the feedback component at the bottom of every &lt;a href="https://hasura.io/docs"&gt;docs page&lt;/a&gt;, a new message came through:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;New feedback on this page:  Transfers | Wealth Docs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'm relatively neurotic, so the spike in cortisol from a near panic attack was significant, as my first thought was that we'd been hacked. How? Rationality is absent in fight-or-flight mode. 🤷‍♂️&lt;/p&gt;

&lt;p&gt;It quickly became clear this wasn't a malicious attempt at hijacking our component or site – rather, this was someone so happy with our docs that they decided to copy them verbatim, leading to their docs feedback arriving on our Slack channel. We were flattered, and it quickly became an internal joke.&lt;/p&gt;

&lt;p&gt;Then, it happened again. With another company.&lt;/p&gt;

&lt;p&gt;Flattered as we were, we couldn't have other sites' feedback clogging our channel or &lt;del&gt;dragging our average down&lt;/del&gt; inflating our KPIs. To mitigate this, we introduced &lt;a href="https://github.com/hasura/graphql-engine/blob/master/docs/src/components/Feedback/Feedback.tsx#L56-L64"&gt;a tiny bit of logic&lt;/a&gt; that prevents the input from going through if it isn't from our domain. Could you bypass this and still hit our API? Of course, but please don't. You surely have better things to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Our philosophy on docs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Hasura’s core is open source, and so are our docs. We regularly receive contributions from you all (thanks for catching our spelling mistakes 😘). We are constantly improving our documentation to better serve you in building your projects and products.&lt;/p&gt;

&lt;p&gt;Since you no doubt visit the docs wiki regularly as part of your morning coffee ritual, you're already familiar with the core of the docs team's philosophy around documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We hold a strict standard because we want to ensure our users can quickly find what they need, understand it because it's well-written, and get back to building with Hasura. 🚀&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We see people using our docs site as a template as an extension of that philosophy! And we want you to keep using the docs site as the starting point for your next project's documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Get your own Hasura docs!&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We have an easy one-liner below if you want to get started with our docs site. This command will clone the hasura/graphql-engine repo and then remove every folder except the most important. 😉&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone --depth 1 --filter=blob:none --sparse https://github.com/hasura/graphql-engine.git &amp;amp;&amp;amp; cd graphql-engine &amp;amp;&amp;amp; git sparse-checkout init &amp;amp;&amp;amp; git sparse-checkout set docs &amp;amp;&amp;amp; find . -type f ! -path "./docs/*" -delete&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, &lt;code&gt;git init&lt;/code&gt; and have some fun building on top of our docs!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BONUS&lt;/strong&gt;: Be on the lookout for our &lt;strong&gt;new&lt;/strong&gt; v3-docs site…coming soon! And, yes: It will still be open source for your nitpicking pleasure. 🥳&lt;/p&gt;

&lt;p&gt;❤️ Hasura Docs Team&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
