<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gerard Klijs</title>
    <description>The latest articles on Forem by Gerard Klijs (@gklijs).</description>
    <link>https://forem.com/gklijs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F93972%2Faec9682c-8df5-4b33-b354-19047dbfa123.jpg</url>
      <title>Forem: Gerard Klijs</title>
      <link>https://forem.com/gklijs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gklijs"/>
    <language>en</language>
    <item>
      <title>Why you might eventually wish you started out using event sourcing</title>
      <dc:creator>Gerard Klijs</dc:creator>
      <pubDate>Sun, 02 Jan 2022 16:21:30 +0000</pubDate>
      <link>https://forem.com/gklijs/why-you-might-eventually-wish-you-started-out-using-event-sourcing-2jbl</link>
      <guid>https://forem.com/gklijs/why-you-might-eventually-wish-you-started-out-using-event-sourcing-2jbl</guid>
      <description>&lt;p&gt;Some thoughts on what event sourcing might be, and what it brings to the table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Philosophical trickery
&lt;/h2&gt;

&lt;p&gt;In order to write this blog without getting stranded too long on what event sourcing 'really' is, I will rely on a trick with words. Instead of talking about 'real' event sourcing, I'll quickly go through what event sourcing in the 'weak' sense could mean, and what event sourcing in the 'strong' sense could be. As often with this trick, most systems in the wild are probably somewhere in between.&lt;/p&gt;

&lt;p&gt;It's unlikely event sourcing in the 'weak' sense will be labeled as such by the people using it. Likewise, event sourcing in the 'strong' sense is so strict that it's unlikely a real system hasn't taken a shortcut somewhere. The distinction still helps to set up the story, and to highlight the differences once we get to the advantages of event sourcing in the 'strong' sense.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event sourcing in the weak sense
&lt;/h3&gt;

&lt;p&gt;Event sourcing in the weak sense takes a very literal approach: basically, all we need is some events that are stored somewhere for other systems to use. The system producing this stream might not even consume it itself, meaning inconsistencies may occur. The stream might not contain all the events since the service was started. The events also don't need to be business events; they might be something technical, like the update of a row in a SQL database. The stream might also be incomplete, as there might be related events that are not available in the same way.&lt;/p&gt;

&lt;p&gt;One easy way to implement such a thing is to use &lt;a href="https://debezium.io/"&gt;Debezium&lt;/a&gt;. With it we can watch one or multiple tables or collections, depending on the database used, and have all the changes to those available as messages in &lt;a href="https://kafka.apache.org/"&gt;Kafka&lt;/a&gt;.&lt;/p&gt;
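&lt;p&gt;As a sketch, registering a Debezium connector for a Postgres customer table could look roughly like the config below. The connector class and property names are real Debezium configuration, but the connection details and table names are made up for illustration:&lt;/p&gt;

```json
{
  "name": "customer-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "customer-db",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "secret",
    "database.dbname": "customers",
    "database.server.name": "customer",
    "table.include.list": "public.customer"
  }
}
```

&lt;p&gt;Posting this to the Kafka Connect Rest API would make every insert, update, and delete on that table available as a message on a Kafka topic.&lt;/p&gt;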

&lt;h3&gt;
  
  
  Event sourcing in the strong sense
&lt;/h3&gt;

&lt;p&gt;The most important difference between event sourcing in the strong sense and event sourcing in the weak sense is that the events are the &lt;strong&gt;single&lt;/strong&gt; source of truth. To generate a new event, a 'command' is issued to the system, and depending solely on the past events, this command may generate new events.&lt;/p&gt;

&lt;p&gt;One of the ways to work in this manner in practice is by using the &lt;a href="https://axoniq.io/product-overview/axon-framework"&gt;Axon Framework&lt;/a&gt;. For this to work the events need to be immutable, and we need some way to quickly retrieve all the events related to a certain entity. Based on the past events we can then determine whether the command is valid or not. There are some complexities with such a system, for example when we need to coordinate between different entities, but I won't go into them in this blog.&lt;/p&gt;
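&lt;p&gt;The command flow can be sketched in a few lines. This is a minimal illustration with made-up names, not the Axon Framework API: state is rebuilt by replaying past events, and a command only results in new events when that replayed state allows it.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class BlockCustomer:
    """A command: a request to change state, which may be rejected."""
    customer_id: str
    reason: str

class CustomerAggregate:
    def __init__(self, events):
        self.blocked = False
        for event in events:      # rebuild state purely from past events
            self.apply(event)

    def apply(self, event):
        if event["type"] == "CustomerBlocked":
            self.blocked = True

    def handle(self, command):
        # Decide solely on replayed state whether new events are produced.
        if self.blocked:
            raise ValueError("customer already blocked")
        return [{"type": "CustomerBlocked",
                 "customer_id": command.customer_id,
                 "reason": command.reason}]
```

&lt;p&gt;The events returned by &lt;code&gt;handle&lt;/code&gt; would be appended to the event store and become part of the single source of truth.&lt;/p&gt;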

&lt;h2&gt;
  
  
  Use case: PEF, a payment provider
&lt;/h2&gt;

&lt;p&gt;Let's say we have a payment provider. They wanted to be quick to market, so they did not spend a lot of thought on designing the architecture. Let's call it 'PEF', for pay easy and fast. In PEF most services are simple REST based services, secured, and allowing all the known CRUD operations when the caller has sufficient rights. As always with these things, any resemblance to any real company is coincidental.&lt;/p&gt;

&lt;h3&gt;
  
  
  PEF starts using event sourcing in the weak sense
&lt;/h3&gt;

&lt;p&gt;In PEF they use a microservice setup. A lot of these services need some access to the customer information of PEF, for example to know the bank account number of a customer. Because the customer service was swamped with REST calls as PEF was getting more customers, they decided to start using event sourcing in the weak sense for the customer information. By using Debezium, some customer information thus became available in an asynchronous manner. This way another service that needs the bank account number for a customer id can store and update this information in its own database, preventing a lot of REST calls to the customer service.&lt;/p&gt;

&lt;p&gt;Note that what is available is just customer updates, as the changes are made to the database. For example, there is no such thing as a 'CustomerBlocked' event with some context. If a customer is blocked, it might just be an update where the property 'blocked' is set to true.&lt;/p&gt;

&lt;h3&gt;
  
  
  Some challenges ahead
&lt;/h3&gt;

&lt;p&gt;I will now get to a few cases where the differences between event sourcing in the weak sense and event sourcing in the strong sense become more evident. As PEF continues to grow, data is needed in ways that weren't previously accounted for. Each of these cases presents a challenge to PEF, and depending on how things are set up, these challenges might be easy or impossible to solve.&lt;/p&gt;

&lt;h4&gt;
  
  
  Customer wants to know why he's blocked
&lt;/h4&gt;

&lt;p&gt;After some time one of the customers reaches out to ask why his account is blocked. The one researching this issue doesn't have direct access to the customer service database. From the messages available via Kafka he does eventually find out the customer was blocked three weeks ago. Unfortunately, the updated record just says he was blocked from that time on, but doesn't include any information about the reason. After reaching out to the customer service team, it turns out the reason is not stored in the database either, but it is logged at info level. Logging is searchable, but only for the past two weeks. To get at older logs, a request needs to be made to the SRE team for the archived logs. After some back and forth they manage to get the relevant logging, and it turns out the customer was blocked for being inactive. Happy to have finally found the answer, they could feed this back to the customer, with some instructions on how to reactivate his account and how to prevent the same thing from happening again.&lt;/p&gt;

&lt;p&gt;If we had used event sourcing in the strong sense, it would probably have been much easier to find the cause. By the nature of event sourcing it would be easy to retrieve all the events concerning this one customer. Instead of having to search for an update where a property was flipped from 'false' to 'true', we would likely see something like a 'CustomerBlockedEvent'. This event would carry the relevant additional data, like the reason and when it happened.&lt;/p&gt;

&lt;h4&gt;
  
  
  Data analysts want to see correlations
&lt;/h4&gt;

&lt;p&gt;Via the marketing department, PEF wants to build some models of typical customer behavior. They are aware that for the period they want to analyse, all the customer updates are available via Kafka. They want to correlate when customers were onboarded with which actions were executed in the web UI. Unfortunately, it turns out that to know what a customer did in the web UI they can only use access logging. This is quite a struggle, as the customer ids are not readable with each request made, because JWT is used. So in order to correlate a certain call with a certain customer, they have to decode the JWT and extract the customer id. Because this is a different stream of information, a lot of the work done for the customer information itself can't be reused.&lt;/p&gt;

&lt;p&gt;With event sourcing we would have more uniform information to work with, and it would likely be a lot easier for the data scientists to build the model. Also, since more information would be available as proper events, it might be a lot easier to research similar things in the future.&lt;/p&gt;

&lt;h4&gt;
  
  
  Governance needs some improvements
&lt;/h4&gt;

&lt;p&gt;Since PEF is a financial institution, some strict laws apply. One of these is that whenever customer data is changed, it should be clear who made the change and why. For example, an address might be changed because the customer has moved. This change might be made from the web UI by the customer directly, or indirectly by calling customer support. As it turned out, the current customer service wasn't compliant with these rules and had to be updated soon, at the risk of losing the license.&lt;/p&gt;

&lt;p&gt;This meant the customer service team had to go through the code base and add logging with the relevant details to become compliant. The team also added an item to the '&lt;a href="https://www.scrum.org/resources/blog/done-understanding-definition-done"&gt;Definition of Done&lt;/a&gt;', so that when in the future anything is added that updates customer information, the relevant logging will also be there.&lt;/p&gt;

&lt;p&gt;With event sourcing such nonfunctional requirements are easier to handle in a generic way. For example, we could have something like a 'CustomerContext' which is part of all commands related to the customer. In the resulting events we need to include this context, to make sure it is stored with the event. It should not be that hard to add this additional information later on.&lt;/p&gt;
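&lt;p&gt;A minimal sketch of that idea, with made-up names: the context travels with every command and is copied into each resulting event, so the audit trail comes for free instead of being re-added per code path.&lt;/p&gt;

```python
from dataclasses import dataclass, asdict

@dataclass
class CustomerContext:
    """Audit info attached to every customer-related command."""
    initiated_by: str   # e.g. "customer" or "support-agent:42"
    reason: str

def to_event(event_type, payload, context):
    # Copy the command's context into the event so it is stored with it.
    event = dict(payload)
    event["type"] = event_type
    event["context"] = asdict(context)
    return event
```

&lt;p&gt;Because every event is built through the same function, the compliance rule holds for future commands as well, without a 'Definition of Done' checklist item.&lt;/p&gt;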

&lt;h2&gt;
  
  
  What can we learn from PEF
&lt;/h2&gt;

&lt;p&gt;While event sourcing in the strong sense might seem too complex at first glance, it might actually make things easier later on. By using certain libraries or frameworks it might not actually be much more complex than event sourcing in the weak sense. As such, when designing a new architecture or service, I think it's something that should at least be considered.&lt;/p&gt;

&lt;p&gt;I know some of these things might be controversial, so please feel free to discuss.&lt;/p&gt;

</description>
      <category>eventsourcing</category>
      <category>architecture</category>
      <category>kafka</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Confluent Schema Registry and Rust</title>
      <dc:creator>Gerard Klijs</dc:creator>
      <pubDate>Mon, 26 Jul 2021 17:40:27 +0000</pubDate>
      <link>https://forem.com/gklijs/confluent-schema-registry-and-rust-4lhj</link>
      <guid>https://forem.com/gklijs/confluent-schema-registry-and-rust-4lhj</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;This blog will be about the Rust library I wrote and maintain, &lt;a href="https://crates.io/crates/schema_registry_converter"&gt;schema_registry_converter&lt;/a&gt;. Since the library has little use on its own, and is about integrating data with Apache Kafka, we first need to take a few steps back.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka
&lt;/h2&gt;

&lt;p&gt;For me, having used Apache Kafka at several clients over the years, it's sometimes hard to imagine that other developers don't know anything about Kafka. There are a lot of great articles and introductions to Kafka, like &lt;a href="https://www.youtube.com/watch?v=qu96DFXtbG4"&gt;Apache Kafka 101: Introduction&lt;/a&gt; by Tim Berglund. Kafka is different from more traditional message queues mainly because how long messages stay available to consumers is independent of how they are consumed. Multiple apps can thus read the same messages.&lt;/p&gt;

&lt;p&gt;For the purposes of this blog it's important to know that messages are stored in Kafka as records. Records have a value and an optional key, which are both in binary format. Another important fact is that Kafka uses topics to separate messages. One topic might consist of multiple topic-partitions, which are used to make it scalable. Part of the configuration is topic specific. For example, data on certain topics can be retained longer than on other ones, or a topic can be configured as &lt;code&gt;compacted&lt;/code&gt; such that the last message with a given key will never be deleted.&lt;/p&gt;

&lt;p&gt;Because data is stored in a binary format, it's important for apps producing data to do so in a way that allows consuming apps to make sense of those bytes. An easy way to do this is to serialise the data as JSON, especially since it's human-readable and easy to work with in most programming languages. However, JSON gives the consumer no explicit contract for the data, and it has some other downsides, like the messages being relatively big.&lt;/p&gt;

&lt;p&gt;To have more control over the data, and store it in a binary format, a schema registry can be used. One of the most used registries with Kafka is the &lt;a href="https://docs.confluent.io/platform/current/schema-registry/index.html"&gt;Confluent Schema Registry&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Confluent Schema Registry
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.confluent.io/"&gt;Confluent&lt;/a&gt; is a company founded by the creators of Apache Kafka. They are providing the &lt;a href="https://docs.confluent.io/platform/current/platform.html"&gt;Confluent Platform&lt;/a&gt; which consists of several components,&lt;br&gt;
all based on Kafka. The &lt;a href="https://docs.confluent.io/platform/current/installation/license.html"&gt;license&lt;/a&gt; for these components vary. The Schema Registry has the &lt;a href="https://www.confluent.io/confluent-community-license-faq/"&gt;community-license&lt;/a&gt;, which basically means it's free to&lt;br&gt;
use as long as you don't offer the Schema Registry itself as a SaaS solution. The source code can be found on &lt;a href="https://github.com/confluentinc/schema-registry"&gt;Github&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So what is actually the Schema Registry? And how does it help to make sense of binary data? In essence the Schema Registry is an application with some Rest endpoints, through which schemas can be registered and retrieved. It used to only support &lt;a href="https://avro.apache.org/docs/current/"&gt;Apache Avro&lt;/a&gt;; later, support for &lt;a href="https://developers.google.com/protocol-buffers"&gt;Protobuf&lt;/a&gt; and &lt;a href="https://json-schema.org/"&gt;JSON Schema&lt;/a&gt; was added. Part of the same Github project, and what makes Schema Registry easy to use, is a collection of Java classes that are used to go from bytes to typed objects and vice versa. There are several classes that support &lt;a href="https://kafka.apache.org/documentation/streams/"&gt;Kafka Streams&lt;/a&gt; and &lt;a href="https://ksqldb.io/"&gt;ksqlDB&lt;/a&gt; next to the more low-level Kafka &lt;a href="https://kafka.apache.org/documentation/#producerapi"&gt;Producer&lt;/a&gt; and &lt;a href="https://kafka.apache.org/documentation/#consumerapi"&gt;Consumer&lt;/a&gt; clients. There are more advanced use cases, but basically you supply the url for the Schema Registry, and the library will handle the rest. For producing data this means optionally registering a new schema, and getting the correct id. The consumer will use the encoded id to fetch the schema used to produce the data. It can also be used with other frameworks like &lt;a href="https://spring.io/projects/spring-cloud-stream"&gt;Spring Cloud Stream&lt;/a&gt;, for example in the &lt;a href="https://github.com/gklijs/obm_confluent_blog/tree/kotlin/command-handler"&gt;Kotlin Command Handler&lt;/a&gt; by using the &lt;a href="https://github.com/confluentinc/schema-registry/blob/master/avro-serde/src/main/java/io/confluent/kafka/streams/serdes/avro/SpecificAvroSerde.java"&gt;SpecificAvroSerde&lt;/a&gt; class. You might need to set additional properties to get this working.&lt;/p&gt;

&lt;p&gt;All this is great when using a JVM language for your app, but it might be a challenge when using another programming language. Part of the reason is that the bytes produced by the Schema Registry serializers are specific to Schema Registry. There is always a 'magic' first byte, which allows for breaking changes at some point and lets clients quickly check whether the data is encoded properly. The reference to the schema that was used to serialize the data is also part of the payload. This makes it impossible to use a 'standard' library directly, since those bytes need to be removed first. This might be a valid reason to use something like plain Protobuf, combined with some documentation on which Protobuf schema is used for which topic. You also don't have to run a schema registry in that case, but for clients it's a bit more work to get the correct schema.&lt;/p&gt;
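&lt;p&gt;As a sketch, splitting off that header could look like the function below. The framing itself (one magic byte with value 0, followed by a 4-byte big-endian schema id) is the documented Confluent wire format; the function name is made up:&lt;/p&gt;

```python
import struct

def split_confluent_header(data):
    # Confluent wire format: one 'magic' byte (0x00), then a
    # 4-byte big-endian schema id, then the serialized payload.
    if len(data) >= 5 and data[0] == 0:
        schema_id = struct.unpack(">I", data[1:5])[0]
        return schema_id, data[5:]
    raise ValueError("not Confluent-framed data")
```

&lt;p&gt;Only after stripping these five bytes can a 'standard' Avro, Protobuf, or JSON library make sense of the rest.&lt;/p&gt;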

&lt;p&gt;On the other hand, Schema Registry does offer a complete solution, where &lt;a href="https://docs.confluent.io/platform/current/schema-registry/avro.html"&gt;Schema Compatibility&lt;/a&gt; can be configured. Because updates to schemas can be verified for backwards compatibility this way, consumers can keep using the old schema for the data. Storing the schema in a central location decouples the producers from the consumers, making it much easier to add additional information later on without the need to immediately update the consumers once the producer starts using the new schema. Another major advantage is the integration of Schema Registry with the Confluent Platform, making it much easier to use ksqlDB.&lt;/p&gt;
&lt;h2&gt;
  
  
  Rust
&lt;/h2&gt;

&lt;p&gt;Rust is a pretty young language; the first stable version was released in May 2015. I've been using Rust for a couple of years, but in the early days it could be a lot of work just to get your code to compile again. Since the stable release, there have been no backwards incompatible changes. This has also paved the way for a lot of libraries, or crates as they are called in Rust. One of the main resources to start learning Rust is &lt;a href="https://doc.rust-lang.org/book/"&gt;"the book"&lt;/a&gt;; there are also books on specific subjects like &lt;a href="https://rustwasm.github.io/docs/book/"&gt;WASM&lt;/a&gt; and &lt;a href="https://rust-lang.github.io/async-book/"&gt;async&lt;/a&gt;. There are also a lot of videos available on &lt;a href="https://www.youtube.com/results?search_query=rustlang"&gt;Youtube&lt;/a&gt;. One of those is &lt;a href="https://www.youtube.com/watch?v=L6DvTCr6TF0"&gt;this one&lt;/a&gt;, which I made specifically for Java developers.&lt;/p&gt;

&lt;p&gt;Crates can be found on &lt;a href="https://crates.io/"&gt;crates.io&lt;/a&gt;, where you can easily search for specific libraries, and all relevant information about them is available. Rust is a C/C++ alternative, but in some cases it can be an alternative to Java as well. This largely depends on what the app does, and on whether Rust alternatives are available for the libraries used.&lt;/p&gt;

&lt;p&gt;Rust itself is open source. With the creation of the &lt;a href="https://foundation.rust-lang.org/"&gt;Rust foundation&lt;/a&gt; its future is secure. Personally I really like the tooling, like &lt;a href="https://github.com/rust-lang/rustfmt"&gt;rustfmt&lt;/a&gt; and &lt;a href="https://github.com/rust-lang/rust-clippy"&gt;clippy&lt;/a&gt;, which serve as the default, easy-to-install formatter and linter respectively. Another nice thing is being able to write tests as documentation, with the documentation available online, like the &lt;a href="https://docs.rs/schema_registry_converter/2.0.2/schema_registry_converter/blocking/avro/struct.AvroDecoder.html"&gt;AvroDecoder&lt;/a&gt; struct from the &lt;code&gt;schema_registry_converter&lt;/code&gt; library.&lt;/p&gt;
&lt;h2&gt;
  
  
  Bank demo project
&lt;/h2&gt;

&lt;p&gt;What originally was created as a project for a workshop with my &lt;a href="https://www.openweb.nl/"&gt;Open Web&lt;/a&gt; colleagues has turned out to be my go-to project for experimentation. The full story can be found on &lt;a href="https://dev.to/gklijs/the-human-side-of-open-bank-mark-3o4b"&gt;Dev.to&lt;/a&gt;. Basically it's a couple of small services that together form a virtual bank where users can log in, get an account, and transfer money. &lt;a href="https://github.com/gklijs/obm_confluent_blog"&gt;One&lt;/a&gt; of the iterations was used for a &lt;a href="https://www.confluent.io/blog/getting-started-with-rust-and-kafka/"&gt;blog with Confluent&lt;/a&gt;. It's relevant here because the core of the &lt;code&gt;schema_registry_converter&lt;/code&gt; came into existence while creating a Rust variant of the &lt;code&gt;Command Handler&lt;/code&gt; part of the demo project. For that project I was using Schema Registry, and since I wanted to keep the rest of the system the same, I didn't want to change the binary format used with Kafka.&lt;/p&gt;

&lt;p&gt;Like I mentioned, using Schema Registry with a non-JVM language can be challenging. Luckily I had some prior knowledge of the internals of Schema Registry from my days at Axual. When I started using Rust with the Schema Registry, Avro was the only supported format. I quickly found out there was already a &lt;a href="https://crates.io/crates/avro-rs"&gt;Rust library supporting Avro&lt;/a&gt;. So it seemed that with just a couple of Rest calls to the Schema Registry server, and using that library, I should be able to get it to work, which I did. The result, with an early version of the library, can be found in &lt;a href="https://github.com/gklijs/obm_confluent_blog/blob/rust-rdkafka-async/command-handler/src/kafka_producer.rs"&gt;kafka_producer.rs&lt;/a&gt; and &lt;a href="https://github.com/gklijs/obm_confluent_blog/blob/rust-rdkafka-async/command-handler/src/kafka_consumer.rs"&gt;kafka_consumer.rs&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Schema Registry Converter
&lt;/h2&gt;

&lt;p&gt;The source for the current version of the library can be found on &lt;a href="https://github.com/gklijs/schema_registry_converter"&gt;Github&lt;/a&gt;. I had to increase the major version because I needed to break the API in order to support all the formats the current Schema Registry version supports. I also added the possibility to set an API key, so the library can be used with &lt;a href="https://www.confluent.io/confluent-cloud/"&gt;Confluent Cloud&lt;/a&gt;, the cloud offering from Confluent. Since the latest major refactoring it also supports &lt;code&gt;async&lt;/code&gt;. This might improve the performance of your app, and async is also the default for the major &lt;a href="https://crates.io/crates/rdkafka"&gt;Kafka client&lt;/a&gt;; more information about why you would want to use async can be found in the &lt;a href="https://rust-lang.github.io/async-book/01_getting_started/02_why_async.html"&gt;async book&lt;/a&gt;. The schemas retrieved from the Schema Registry are cached, so a schema is only retrieved once for each id and reused for other messages with the same id.&lt;/p&gt;

&lt;p&gt;Next to the additional formats, there was one other major change to incorporate from Schema Registry. In order to reuse registered schemas in new schemas, they made it possible to have references. So when retrieving a schema, one or more pointers to other schemas might be part of the returned JSON. To make sure I got this part right in the Rust library, I created a &lt;a href="https://github.com/gklijs/schema_registry_test_app"&gt;Java project&lt;/a&gt; which can be used from &lt;a href="https://hub.docker.com/repository/docker/gklijs/schema-registry-test-app"&gt;docker&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another interesting challenge was getting the Protobuf implementation correct. Contrary to Avro or JSON Schema, one proto file can describe multiple messages. In order to properly serialise the data, the kind of message used also needs to be encoded. For Java this part was pretty trivial, because the Protobuf library used had an easy way to map a number to a message. I could not find something similar in Rust, so in &lt;a href="https://github.com/gklijs/schema_registry_converter/blob/master/src/proto_resolver.rs"&gt;proto_resolver.rs&lt;/a&gt; I used a &lt;a href="https://crates.io/crates/logos"&gt;lexer&lt;/a&gt; to provide the needed functionality.&lt;/p&gt;

&lt;p&gt;What the library does is different for a producer and a consumer. For both there are action diagrams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Producer action diagram&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kIV-X4eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vu53aaopoud4ijselam9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kIV-X4eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vu53aaopoud4ijselam9.png" alt="Producer activity flow"&gt;&lt;/a&gt;&lt;br&gt;
For the producer, the library needs to encode the bytes in the proper way. This starts by enabling the Cargo feature for the correct encoder, depending on the format and on whether blocking or async is required. Then the data is encoded using the encoder and one of the &lt;code&gt;SubjectNameStrategies&lt;/code&gt;, which might contain a schema. With the option of using the cache, a byte value is produced that can be used as either the &lt;code&gt;key&lt;/code&gt; or &lt;code&gt;value&lt;/code&gt; part of a Kafka record.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consumer action diagram&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0W9uogeH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmhm3ow8jebwvlwgz5vu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0W9uogeH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmhm3ow8jebwvlwgz5vu.png" alt="Consumer activity flow"&gt;&lt;/a&gt;&lt;br&gt;
For the consumer it's also necessary to use the correct decoder, based on the expected format of the message. From the Kafka record, either the &lt;code&gt;key&lt;/code&gt; or the &lt;code&gt;value&lt;/code&gt; bytes are used. With the encoded id the matching schema is retrieved from the cache, or fetched from the registry first. Depending on the decoder used, a certain typed value is returned. Depending on the app, this value can be used for several things, for example to write something to a database.&lt;/p&gt;
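&lt;p&gt;The caching part of this flow can be sketched as follows. These are hypothetical names, not the actual library API; &lt;code&gt;fetch_schema&lt;/code&gt; stands in for the HTTP call to the Schema Registry:&lt;/p&gt;

```python
class CachingDecoder:
    def __init__(self, fetch_schema):
        self.fetch_schema = fetch_schema
        self.cache = {}

    def schema_for(self, schema_id):
        # Only the first message with a given id triggers a registry call;
        # every later message with the same id hits the in-memory cache.
        if schema_id not in self.cache:
            self.cache[schema_id] = self.fetch_schema(schema_id)
        return self.cache[schema_id]
```

&lt;p&gt;Because schemas registered under an id are immutable, the cache never needs invalidation.&lt;/p&gt;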

&lt;p&gt;Because of the three formats and the two ways of using the library, async and blocking, it would be tedious to have examples for all of them. To make things worse, each of these six combinations has its own encoder and decoder, where the encoder is used for a producer and the decoder for a consumer. Both also have their own separate options. For the producer it's possible to register a new schema if the latest is not the same one, or to use a schema that was already registered. For the consumer it can be a challenge to use the resulting typed struct; for Avro it might be necessary to have an enum with all the expected possibilities, similar to the &lt;a href="https://github.com/gklijs/obm_confluent_blog/blob/5397a7ead6eb7ade4ad36935be18c010cde4132f/command-handler/src/avro_data.rs#L228"&gt;&lt;code&gt;AvroData&lt;/code&gt; in the demo project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some simple examples are available in the library itself like the async Avro Decoder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;avro_rs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;types&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;schema_registry_converter&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;async_impl&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;schema_registry&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;SrSettings&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;schema_registry_converter&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;async_impl&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;avro&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AvroDecoder&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;sr_settings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;SrSettings&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"http://{}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;server_address&lt;/span&gt;&lt;span class="p"&gt;()));&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;decoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;AvroDecoder&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sr_settings&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;heartbeat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;decoder&lt;/span&gt;&lt;span class="nf"&gt;.decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="py"&gt;.value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nd"&gt;assert_eq!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;heartbeat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nn"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Record&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="s"&gt;"beat"&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nn"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Long&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;))]));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
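&lt;p&gt;The byte slice in that test is worth unpacking: it follows the Confluent wire format, which is a magic byte 0, a 4-byte big-endian schema id (here 1, assuming that id is registered for the heartbeat schema), and then the Avro binary payload, where the long 3 is zigzag-encoded as 6. A minimal sketch of that framing, using only the standard library:&lt;/p&gt;

```rust
// Sketch of the Confluent wire format behind the bytes decoded above.
// Assumption: the heartbeat schema was registered with id 1.

fn zigzag(n: i64) -> u64 {
    // Avro stores longs zigzag-encoded so small negative numbers stay small.
    ((n << 1) ^ (n >> 63)) as u64
}

fn encode_heartbeat(schema_id: u32, beat: i64) -> Vec<u8> {
    let mut buf = vec![0u8]; // magic byte marking schema-registry framing
    buf.extend_from_slice(&schema_id.to_be_bytes()); // 4-byte big-endian schema id
    // Avro varint: 7 bits per byte, high bit set on all but the last byte.
    let mut v = zigzag(beat);
    while v >= 0x80 {
        buf.push((v & 0x7f) as u8 | 0x80);
        v >>= 7;
    }
    buf.push(v as u8);
    buf
}

fn main() {
    // The exact bytes the decoder receives in the test above.
    assert_eq!(encode_heartbeat(1, 3), vec![0, 0, 0, 0, 1, 6]);
}
```

&lt;p&gt;Running it reproduces exactly the bytes passed to the decoder in the test.&lt;/p&gt;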



&lt;p&gt;In order to prepare for a future blog with Confluent I wanted to play around with ksqlDB, which was the perfect opportunity to use the Rust library in a less trivial way. As it turned out, there is even a &lt;a href="https://crates.io/crates/ksqldb"&gt;library&lt;/a&gt; for communicating with ksqlDB from Rust, using the &lt;a href="https://docs.ksqldb.io/en/latest/developer-guide/api/"&gt;REST API&lt;/a&gt;. The PoC project for this contains some &lt;a href="https://github.com/gklijs/ksqlDB-GraphQL-poc/blob/main/rust-data-creator/src/data_producer.rs"&gt;code&lt;/a&gt; to put protobuf data on a topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintaining the library
&lt;/h2&gt;

&lt;p&gt;When I took the steps to turn the code I had into a library, I wanted to make sure I had decent code coverage. By leveraging &lt;a href="https://about.codecov.io/"&gt;codecov&lt;/a&gt; I now get updates on the code covered in pull requests. Not that there are many, since the library does what it does and is nicely scoped. The latest update just bumped dependencies, which can sometimes cause problems, especially for libraries like Avro, when the bytes produced are not the same. A small update I'm thinking about is making it slightly easier to use protobuf when you know there is only one message in the proto schema.&lt;/p&gt;

&lt;p&gt;Aside from the big rewrite, maintaining the library has taken very little time. From time to time there is a question about the library. It is nice to see people actively using it and to hear how it is used. &lt;a href="https://crates.io/crates/schema_registry_converter"&gt;Crates.io&lt;/a&gt; shows the number of downloads over the last 90 days. What's interesting is that, since the start of this year, instead of a flat line there are clear peaks during working days. This is just one of the signs that Rust is getting more mature and is used in production. I still haven't used it in a 'real' project yet, but that's just a matter of time. Recently the library was used in a hackathon, resulting in a contribution that is part of the 2.1.0 release. The pull request made it possible to supply configuration for custom security of the schema registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final words
&lt;/h2&gt;

&lt;p&gt;Generally I enjoyed the time working on the library. Compared to Java, which has a much more mature ecosystem, it's much easier to create a library that really adds value. Things like good error messages and the linter make it easier to create code I'm confident enough about to share with the community. For any questions regarding the library please use &lt;a href="https://github.com/gklijs/schema_registry_converter/discussions"&gt;GitHub Discussions&lt;/a&gt; so that others might benefit from the answer as well.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>kafka</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Which talk would *you* want to see at Kafka Summit</title>
      <dc:creator>Gerard Klijs</dc:creator>
      <pubDate>Wed, 11 Dec 2019 12:46:40 +0000</pubDate>
      <link>https://forem.com/gklijs/which-talk-would-you-want-to-see-at-kafka-summit-4lda</link>
      <guid>https://forem.com/gklijs/which-talk-would-you-want-to-see-at-kafka-summit-4lda</guid>
      <description>&lt;p&gt;I did talk about using Kafka with GraphQL at GraphQL summit. Now I would like to go speak at Kafka Summit as well. You can help by telling which of the three proposals I had in mind, you would like to see most. Also &lt;a href="https://dev.to/rmoff/why-you-should-submit-a-talk-to-kafka-summit-2jf7"&gt;you should submit as well&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here they are:&lt;/p&gt;

&lt;h2&gt;
  
  
  Expose your Kafka using GraphQL
&lt;/h2&gt;

&lt;p&gt;GraphQL is a relatively new way to describe APIs and offers subscriptions. Subscriptions are a way to send a stream of updates to the client. They are often implemented using WebSockets, which makes them hard to scale.&lt;/p&gt;

&lt;p&gt;I will explain a solution using a demo application, which is a bank simulation. I will also share some benchmarks to prove the scalability. This will be done mainly using Clojure, but I also implemented the GraphQL endpoint in Java, Kotlin, and some other languages.&lt;/p&gt;

&lt;p&gt;The demo project also contains a load test, which can be used on all the implementations in order to compare them.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Avro to GraphQL, blazingly fast
&lt;/h2&gt;

&lt;p&gt;Rust is called a systems programming language, but it's getting more and more useful as the ecosystem grows. As it's strongly and strictly typed, it should be possible to derive both the internal data structures needed to handle binary data based on Avro schemas and a GraphQL schema that can be used by frontend applications.&lt;/p&gt;
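&lt;p&gt;To give an idea of what such a derivation involves (this is my own illustration, not the talk's actual code): the primitive Avro types map fairly directly onto GraphQL scalars, while records, enums and unions each need a generated type of their own.&lt;/p&gt;

```rust
// Illustrative mapping from Avro primitive type names to GraphQL scalars;
// a hypothetical sketch, not code from the talk or a library.
fn graphql_scalar(avro_type: &str) -> Option<&'static str> {
    match avro_type {
        "string" => Some("String"),
        "boolean" => Some("Boolean"),
        "int" => Some("Int"),
        "long" => Some("Int"),     // beware: GraphQL's Int is only 32-bit
        "float" | "double" => Some("Float"),
        "bytes" => Some("String"), // e.g. base64-encoded
        _ => None,                 // records, enums, unions need generated types
    }
}

fn main() {
    assert_eq!(graphql_scalar("string"), Some("String"));
    assert_eq!(graphql_scalar("double"), Some("Float"));
    assert_eq!(graphql_scalar("record"), None);
}
```

&lt;p&gt;The interesting work is in the &lt;code&gt;None&lt;/code&gt; branch, where nested Avro records have to become named GraphQL object types.&lt;/p&gt;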

&lt;p&gt;I will share the results of this journey, with at least some demo, and if all goes well a Docker image that can be used to easily interact with Kafka using GraphQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the Confluent Schema Registry with Rust
&lt;/h2&gt;

&lt;p&gt;In this talk I will go into some of the challenges I faced making a Rust library that is as similar as possible to the Java serializers. There are a couple of challenges here.&lt;br&gt;
There is no default way of adding serializers to consumers and producers. Another thing is that there is no classpath in Rust to dynamically create specific class instances. It also turned out the Avro library for Rust didn't support all complex types well.&lt;br&gt;
That being said, there are some advantages to using Rust, like memory safety, fast startup times, and being able to build very tiny Docker images.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>writing</category>
      <category>techtalks</category>
    </item>
    <item>
      <title>Experience speaking at GraphQL Summit.</title>
      <dc:creator>Gerard Klijs</dc:creator>
      <pubDate>Sun, 10 Nov 2019 16:46:30 +0000</pubDate>
      <link>https://forem.com/gklijs/experience-speaking-at-graphql-summit-1mjo</link>
      <guid>https://forem.com/gklijs/experience-speaking-at-graphql-summit-1mjo</guid>
      <description>&lt;p&gt;See the series for the previous part of the story, or visit the &lt;a href="https://graphql.gklijs.tech/"&gt;demo&lt;/a&gt;, &lt;a href="https://github.com/openweb-nl/kafka-graphql-examples"&gt;github&lt;/a&gt; or &lt;a href="https://www.youtube.com/watch?v=EN73NiR8xZI"&gt;video&lt;/a&gt; directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing for GraphQL Summit
&lt;/h2&gt;

&lt;p&gt;In order to prepare to speak at GraphQL Summit I practiced the talk 5 times with other people present. The first time I wasn't ready and had only put some slides together that morning. It was worth the trouble though, as I got a lot of feedback. One of the most important points was that it wasn't really clear what the story was.&lt;/p&gt;

&lt;p&gt;I gradually improved the slides, making the story more concise, also because I only had 25 minutes. And while I could make some assumptions about people already knowing GraphQL, I could not do the same for Kafka. So I added a very short introduction to Kafka.&lt;/p&gt;

&lt;p&gt;The fourth time I practiced my presentation I got a compliment from someone who is not a developer herself that she understood the story. I didn't change much after that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Just before the talk
&lt;/h2&gt;

&lt;p&gt;Flying on my own to the USA was quite an experience in itself. I had never flown alone before, but it's not really that different from going with someone. It was just me finding out where and when to go, and it turned out fine.&lt;/p&gt;

&lt;p&gt;After landing I wanted to take the BART to get to the hotel. The ticket machines are not user friendly; luckily there was a guard explaining how they worked. When I arrived at my hotel room, I wanted to charge my laptop, which I had also used on the plane. It turned out I didn't bring the correct converter with me. So after visiting some places, I finally succeeded at the Apple Store.&lt;/p&gt;

&lt;p&gt;Going back I had only a little time before getting ready to meet the other speakers over dinner. It was really nice to meet so many interesting people.&lt;/p&gt;

&lt;p&gt;When I got back to the hotel I wanted to finish something I had started just before leaving: trying to run the demo backend on a remote machine using Docker. The big problem was that because the frontend was running on https using Netlify, the backend websocket had to be wss instead of ws as well. Eventually I worked it out. I also tried the clicker I brought with me, to avoid any surprises the next day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day of the presentation, first day of GraphQL Summit
&lt;/h2&gt;

&lt;p&gt;Conveniently the summit was held in the hotel where I was staying, so I went downstairs for some breakfast. Usually at large conferences you have to eat standing, and wait in line for some time to grab something. I was pleasantly surprised that it was both a decent breakfast and that there were enough tables to sit down at.&lt;/p&gt;

&lt;p&gt;After keynotes from Matt DeBergalis, explaining mainly the current and future state of GraphQL, and the talk from Brie Bunge about how you can start using GraphQL in a big company, there was a break before splitting up into three different tracks.&lt;/p&gt;

&lt;p&gt;Because my talk was the second one in the Bayview room I went there on time. There were several people from the organization present who put me at ease, and also had me wired up before the first talk started.&lt;/p&gt;

&lt;p&gt;I was quite nervous, but also happy once I could start and get my presentation over with. During the start of my presentation people kept walking into the room. Because I took quite some time to introduce myself and what I was going to tell, they didn't miss much and I pretty much just continued the presentation. Aside from the URL for the demo not being on the slides themselves, which caused it to take some time before people joined the demo, it went fine.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/EN73NiR8xZI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  After the talk
&lt;/h2&gt;

&lt;p&gt;It was great seeing all the other presentations while no longer having to worry about my own. Some people also had additional questions or tried to read the source code, which was great. During lunch they also had the great idea of having tables where certain topics could be discussed. I would really like to see that at other conferences as well.&lt;/p&gt;

&lt;p&gt;Just today I also finished a Java Spring Boot implementation of the GraphQL endpoint. The Kotlin GraphQL server library also seems great, so I really want to give that a try. Maybe I'll also add both Rust and Node.js so I can compare the whole bunch.&lt;/p&gt;

&lt;p&gt;For the query part I want to experiment with Hasura and Postgraphile using the current PostgreSQL Database. But I'm also interested in changing the database to either Neo4j, Dgraph or FaunaDB. It should make certain parts easier, and also give access to some advanced features like mutations and pagination.&lt;/p&gt;

&lt;p&gt;After the summit I'm less sure subscriptions are a good fit when using Kafka and GraphQL, especially because some tools don't work with subscriptions and they are hard to scale. To put something on Kafka it could also be enough to just know it was successfully sent. Using queries for the derived view, the clients could make sure the command was properly processed.&lt;/p&gt;

&lt;p&gt;Just before leaving for the summit I got notice of &lt;a href="https://gitlab.com/arboric/arboric"&gt;arboric&lt;/a&gt;, which is also something I want to get some experience with, as it seems to handle things like authorization and authentication in a nice way. It might even be used to make subscriptions scalable, by directing users to the 'correct' instance based on their token.&lt;/p&gt;

</description>
      <category>techtalks</category>
      <category>graphql</category>
      <category>clojure</category>
      <category>conferences</category>
    </item>
    <item>
      <title>Open bank mark goes USA</title>
      <dc:creator>Gerard Klijs</dc:creator>
      <pubDate>Thu, 24 Oct 2019 20:52:56 +0000</pubDate>
      <link>https://forem.com/gklijs/open-bank-mark-goes-usa-22po</link>
      <guid>https://forem.com/gklijs/open-bank-mark-goes-usa-22po</guid>
      <description>&lt;p&gt;Last few months have been really busy. I was finalising the now finally published &lt;a href="https://www.confluent.io/blog/getting-started-with-rust-and-kafka"&gt;blog post&lt;/a&gt;about using Rust with Kafka. And also preparing to talk at &lt;a href="https://summit.graphql.com/"&gt;GraphQL Summit&lt;/a&gt; which will be the first conference I'll be speaking.&lt;/p&gt;

&lt;p&gt;It's been a really hectic time, especially since there was also some trouble at home, leaving less time than usual to do stuff. But I'm really looking forward to visiting San Francisco and learning more about GraphQL.&lt;/p&gt;

&lt;p&gt;I'm still busy with the finishing touches on the repo that is centred more on the GraphQL part. One of the most important changes is using the GraphQL endpoint to increase the load on the system. If all goes well the presentation will be recorded.&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>techtalks</category>
      <category>rust</category>
    </item>
    <item>
      <title>Taking open-bank-mark to The Dutch Clojure Meetup</title>
      <dc:creator>Gerard Klijs</dc:creator>
      <pubDate>Wed, 12 Jun 2019 21:35:01 +0000</pubDate>
      <link>https://forem.com/gklijs/taking-open-bank-mark-to-the-dutch-clojure-meetup-1o0c</link>
      <guid>https://forem.com/gklijs/taking-open-bank-mark-to-the-dutch-clojure-meetup-1o0c</guid>
      <description>&lt;p&gt;I just presented &lt;a href="https://github.com/openweb-nl/open-bank-mark"&gt;open-bank-mark&lt;/a&gt; at the &lt;a href="https://www.meetup.com/The-Dutch-Clojure-Meetup/"&gt;The Dutch Clojure Meetup&lt;/a&gt;. Focussing more on the Clojure libraries used than I did at the Kafka meetup.&lt;/p&gt;

&lt;p&gt;In preparation I tested with setting &lt;code&gt;linger.ms&lt;/code&gt; to 0 and 100 ms, which is one of the variables in Kafka that really affects your latency and throughput.&lt;/p&gt;

&lt;p&gt;It was nice of &lt;a href="https://www.openweb.nl/"&gt;Open Web&lt;/a&gt; to sponsor. Because it's a small meetup, with only about 10 people today, there was lots of room for questions, and sometimes for other people to answer when I didn't know. The raw data can be found &lt;a href="https://github.com/gklijs/open-bank-mark/tree/100-vs-0-linger-ms"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="//www.slideshare.net/slideshow/embed_code/key/xhP6AChuc0RGTu" alt="xhP6AChuc0RGTu on slideshare.net" width="100%" height="450"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>clojure</category>
      <category>microservices</category>
      <category>testing</category>
      <category>docker</category>
    </item>
    <item>
      <title>The human side of open-bank-mark</title>
      <dc:creator>Gerard Klijs</dc:creator>
      <pubDate>Wed, 05 Jun 2019 18:28:31 +0000</pubDate>
      <link>https://forem.com/gklijs/the-human-side-of-open-bank-mark-3o4b</link>
      <guid>https://forem.com/gklijs/the-human-side-of-open-bank-mark-3o4b</guid>
      <description>&lt;p&gt;&lt;em&gt;TL;DR: I had a great time creating &lt;a href="https://github.com/openweb-nl/open-bank-mark" rel="noopener noreferrer"&gt;open-bank-mark&lt;/a&gt; even if your scared of parentheses you might want to check it out.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Recently I open sourced &lt;a href="https://github.com/openweb-nl/open-bank-mark" rel="noopener noreferrer"&gt;open-bank-mark&lt;/a&gt;, and a week ago the &lt;a href="https://www.meetup.com/Kafka-Meetup-Utrecht/events/260303497/" rel="noopener noreferrer"&gt;Kafka Meetup Utrecht&lt;/a&gt; was the fifth meetup where I talked about it. During the drinks after the talk I often get questions about how it came into existence. It seemed like a good idea to write it down, which is what I'm doing right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is open-bank-mark?
&lt;/h2&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1130516421677658114-533" src="https://platform.twitter.com/embed/Tweet.html?id=1130516421677658114"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;Although it's nowhere explicitly mentioned in the project itself, the name of the project is a combination of 'open-bank' and benchmark. 'open-bank' was the name for the frontend part before I combined everything into one project. It's a reference to my employer &lt;a href="https://www.openweb.nl/" rel="noopener noreferrer"&gt;Open Web&lt;/a&gt;, but as a banking application. It's also open in the sense that the code is publicly available and there is little to no security. The 'mark' part I added makes it possible to run benchmarks. For example, you could switch out components for equivalent ones in another language. After running both in some environment, the latency and the CPU and memory use of the different parts can be compared.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/openweb-nl" rel="noopener noreferrer"&gt;
        openweb-nl
      &lt;/a&gt; / &lt;a href="https://github.com/openweb-nl/open-bank-mark" rel="noopener noreferrer"&gt;
        open-bank-mark
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A bank simulation application using mainly Clojure, which can be used to end-to-end test and show some graphs.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;Active development has moved to &lt;a href="https://github.com/openweb-nl/kafka-graphql-examples" rel="noopener noreferrer"&gt;kafka-graphql-examples&lt;/a&gt; which is more focused about graphql, and in ways is simpler than this project.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Open Bank Mark&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://travis-ci.com/openweb-nl/open-bank-mark" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/ae0a393b7edba7121591aa147b39e2d8e83de6cd36736946aca2d1740237de4e/68747470733a2f2f7472617669732d63692e636f6d2f6f70656e7765622d6e6c2f6f70656e2d62616e6b2d6d61726b2e7376673f6272616e63683d6d6173746572" alt="Build Status"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Contents&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#intro" rel="noopener noreferrer"&gt;Intro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openweb-nl/open-bank-mark#development" rel="noopener noreferrer"&gt;Development&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#building-locally" rel="noopener noreferrer"&gt;Building locally&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#building-remote" rel="noopener noreferrer"&gt;Building remote&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#other-backend" rel="noopener noreferrer"&gt;Building other backend&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openweb-nl/open-bank-mark#modules" rel="noopener noreferrer"&gt;Modules&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#topology" rel="noopener noreferrer"&gt;Topology&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#synchronizer" rel="noopener noreferrer"&gt;Synchronizer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#heartbeat" rel="noopener noreferrer"&gt;Heartbeat&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#command-generator" rel="noopener noreferrer"&gt;Command generator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#command-handler" rel="noopener noreferrer"&gt;Command handler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#graphql-endpoint" rel="noopener noreferrer"&gt;Graphql endpoint&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#frontend" rel="noopener noreferrer"&gt;Frontend&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#test" rel="noopener noreferrer"&gt;Test&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#scripts" rel="noopener noreferrer"&gt;Scripts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openweb-nl/open-bank-mark#variants" rel="noopener noreferrer"&gt;Variants&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#three-brokers" rel="noopener noreferrer"&gt;Three brokers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#one-broker" rel="noopener noreferrer"&gt;One broker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openweb-nl/open-bank-mark#results" rel="noopener noreferrer"&gt;Results&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;&lt;a id="user-content-intro" href="https://github.com/openweb-nl/open-bank-mark" rel="noopener noreferrer"&gt;Intro&lt;/a&gt;&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;This project is an example of an event sourcing application using Kafka
The front-end can be viewed at &lt;a href="https://open-bank.gklijs.tech/" rel="nofollow noopener noreferrer"&gt;open-bank&lt;/a&gt; which is for now configured to have the endpoint running on localhost
In the background tab are the results of comparing 4 languages, which all ran 10 times on TravisCi, with one broker.
It also contains an end-to-end test making it possible to compare different implementations or configurations. For example one could set the &lt;code&gt;linger.ms&lt;/code&gt; setting at different values in the topology module, so everything build on Clojure will use that setting.
Another option would be to drag …&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/openweb-nl/open-bank-mark" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;This passion project has grown organically over the course of about one and a half years, with sometimes a month of not touching it, and sometimes a month of working on it almost every evening in order to get sufficient progress for the next meetup. I used to call projects like these pet projects, but never really liked how that sounded. Thanks to the article &lt;a href="https://clubhouse.io/blog/why-every-developer-should-have-a-passion-project" rel="noopener noreferrer"&gt;Why every developer should have a passion project&lt;/a&gt;, I now have a better name for such projects.&lt;/p&gt;

&lt;p&gt;What's next will be a roughly chronological story about the different stages of development of open-bank-mark.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka platform at Rabobank
&lt;/h2&gt;

&lt;p&gt;A big part of the project revolves around Kafka. It's used to decouple different components and also to make it potentially scalable. Kafka was the first of some of the tech that plays an important role in open-bank-mark I got acquainted with.&lt;/p&gt;

&lt;p&gt;I started working with Kafka in November 2015 when I was working at &lt;a href="https://www.rabobank.nl/particulieren/" rel="noopener noreferrer"&gt;Rabobank&lt;/a&gt; on what would later become the &lt;a href="https://axual.com/platform/" rel="noopener noreferrer"&gt;Axual platform&lt;/a&gt;. Before I started, a demo had been created to make people enthusiastic about Kafka. The demo showed the balances of fictional accounts, with notifications when a certain condition was met. It was kind of similar to what open-bank-mark would become, but the transactions were not executed by commands and there was a fixed number of accounts.&lt;/p&gt;

&lt;p&gt;I learned a great deal creating the client library part. One thing in particular I learned is that it is hard to write concurrent code in Java. The Kafka Consumer not being safe for multi-threaded access made it even harder, which was one of the reasons I would get interested in Clojure later on.&lt;/p&gt;
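&lt;p&gt;Rust later appealed for a similar reason: where the Java KafkaConsumer merely documents that it must not be shared between threads, Rust's ownership rules can enforce that at compile time. A toy sketch (not real Kafka code) of the idea:&lt;/p&gt;

```rust
use std::thread;

// A toy consumer; moving it into a thread transfers ownership, so no other
// thread can touch it afterwards - the compiler enforces what the Java
// KafkaConsumer only documents.
struct ToyConsumer {
    polled: u32,
}

impl ToyConsumer {
    // Each poll just counts; a real consumer would fetch records here.
    fn poll(&mut self) -> u32 {
        self.polled += 1;
        self.polled
    }
}

fn main() {
    let mut consumer = ToyConsumer { polled: 0 };
    let handle = thread::spawn(move || {
        for _ in 0..3 {
            consumer.poll();
        }
        consumer.polled
    });
    // consumer.poll(); // would not compile: `consumer` was moved into the thread
    assert_eq!(handle.join().unwrap(), 3);
}
```

&lt;p&gt;Uncommenting the second &lt;code&gt;poll&lt;/code&gt; call turns the concurrency bug into a compile error instead of a runtime surprise.&lt;/p&gt;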

&lt;h2&gt;
  
  
  Clojure for the Brave and True
&lt;/h2&gt;

&lt;p&gt;During the summer of 2016 I read &lt;a href="https://www.braveclojure.com/clojure-for-the-brave-and-true/" rel="noopener noreferrer"&gt;Clojure for the Brave and True&lt;/a&gt;. It was a fun and clear book to read. But when it came to actual programming I had a hard time using &lt;a href="https://www.gnu.org/software/emacs/" rel="noopener noreferrer"&gt;Emacs&lt;/a&gt; like the book prescribed.&lt;/p&gt;

&lt;p&gt;Only some months later I learned about &lt;a href="https://cursive-ide.com/" rel="noopener noreferrer"&gt;Cursive&lt;/a&gt;, a plugin for the IDE I was already familiar with, &lt;a href="https://www.jetbrains.com/idea/" rel="noopener noreferrer"&gt;IntelliJ&lt;/a&gt;. Clojure was a nice, refreshing language, with one of the first passion projects being a &lt;a href="https://github.com/gklijs/snake" rel="noopener noreferrer"&gt;snake game&lt;/a&gt; with the frontend in ClojureScript. It's nice to be able to use the same code for the backend and frontend. The game itself can even be viewed in the browser or with Java using the same library.&lt;/p&gt;

&lt;p&gt;Later on I added some rule-based artificial intelligence to the snake game. I presented the snake game, together with a client for it, at one of our 'Open Pizza' sessions on the fourth of May. An 'Open Pizza' session is like a regular meetup, but only for employees of Open Web.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka workshop
&lt;/h2&gt;

&lt;p&gt;About a year after the snake game it was time for another presentation at 'Open Pizza'. I wanted to do a Kafka workshop, to share some of my experience with it. Both event sourcing and GraphQL were the subject of 'Open Pizza' sessions before I started preparing my talk. Both of them were demonstrated using Java, so I wanted to do something using both, with Kafka and Clojure.&lt;/p&gt;

&lt;p&gt;And so I started with the first parts of open-bank-mark. Since I built most of it in the evenings, I quickly became annoyed with having to set the correct topics and schemas each time. So I added some tooling to set those based on some &lt;a href="https://clojure.org/reference/reader#_extensible_data_notation_edn" rel="noopener noreferrer"&gt;edn files&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Initially I only wanted to build the backend part. But after I had that ready I couldn't resist trying to create a frontend using ClojureScript. I quickly got something working, using &lt;a href="https://github.com/oliyh/re-graph" rel="noopener noreferrer"&gt;re-graph&lt;/a&gt; for the GraphQL logic, and &lt;a href="https://github.com/jgthms/bulma" rel="noopener noreferrer"&gt;Bulma&lt;/a&gt; for the CSS. I was pretty happy with it, and since the Clojure meetup on 14 February had an open slot I did a talk showing what I had so far. It was nice to share the project and I got some valuable feedback.&lt;/p&gt;

&lt;p&gt;Almost two months later, on 5 April 2018, it was finally time to do an Open Pizza talk again. It turned out I focused a bit too much on it being a workshop. I had thought of things that could be improved for both the frontend and the backend, but I didn't have a proper introduction to the whole project to make clear what its purpose was.&lt;/p&gt;

&lt;p&gt;I also added the Kotlin variant of the &lt;a href="https://github.com/openweb-nl/open-bank-mark#command-handler" rel="noopener noreferrer"&gt;Command Handler&lt;/a&gt; just before the talk. I created it mostly as a non-Clojure JVM example. But besides making the whole more complex, it also caused my live demo to fail. This was because, while running Clojure, some values in the database were null, and Kotlin expected them to always have values based on the data types. I did test both variants separately, but I didn't test the switching beforehand. That's an important lesson: when doing live demos, make sure you only do things you have done before, and not something different.&lt;/p&gt;

&lt;p&gt;Because of the complexity and the lack of a proper introduction, only a few people actually started coding. Creating a different frontend was also not without problems, since there were some CORS issues when using Angular that I could not solve quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rust 2018 edition release party
&lt;/h2&gt;

&lt;p&gt;Before I heard there would be a &lt;a href="https://www.meetup.com/Rust-Gouda/events/254877742/" rel="noopener noreferrer"&gt;2018 edition release party&lt;/a&gt; on 7 February this year, I had already done a few things with &lt;a href="https://www.rust-lang.org/" rel="noopener noreferrer"&gt;Rust&lt;/a&gt;. You can read more about that &lt;a href="https://www.openweb.nl/nieuws/2019/02/gerard-zijn-ervaring-met-rust.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;. At the release party there would be a total of 6 talks and also some demos, so speaking time was limited.&lt;/p&gt;

&lt;p&gt;I had already written the Rust variant of the Command Handler, and in order to do that I needed something to turn the bytes I received from Kafka, with the schema from the schema registry, into typed data I could use in Rust. Trying out several combinations of existing libraries I got it working. My experience at Rabobank also helped a lot, since I already had a good grasp of how this works in Java, especially since we wrote our own serialisers wrapping the Confluent Avro serialiser.&lt;/p&gt;

&lt;p&gt;As part of learning Rust I wanted to turn the serialiser into a proper library, or a crate, as they are called in Rust. It's currently at 1.0.0 and can be found on &lt;a href="https://crates.io/crates/schema_registry_converter" rel="noopener noreferrer"&gt;crates.io&lt;/a&gt;. In order to make it a crate I not only put the code together, but also made it more generic. I also added some features to make it equivalent to the Java library that does the same. Another thing I did was add unit and integration tests, and start using &lt;a href="https://codecov.io/gh/gklijs/schema_registry_converter" rel="noopener noreferrer"&gt;codecov&lt;/a&gt; to track code coverage.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://www.slideshare.net/slideshow/embed_code/key/yO9H5sb5u9gXa6" alt="yO9H5sb5u9gXa6 on slideshare.net" width="100%" height="487"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The release party was my first Rust meetup and the first time I needed to wear a microphone, so that got me a bit nervous. I think because of the nerves I went a bit too fast through my slides; from some of the questions it wasn't all that clear what I did exactly. I also showed my bank demo, but most people were looking at the WebAssembly demo that happened at the same time, and some people were leaving, since the demos were the last part of the program. It was a nice setup, with some people showing projects that were running in production, or almost there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka Meetup Utrecht
&lt;/h2&gt;

&lt;p&gt;I also sometimes tweeted about open-bank-mark, and was asked by a former colleague to present at the &lt;a href="https://www.meetup.com/Kafka-Meetup-Utrecht/" rel="noopener noreferrer"&gt;Kafka Meetup Utrecht&lt;/a&gt;. The talk was planned for 28 May this year. Since it was a Kafka meetup, and one of the reasons Kafka is often used is performance, it seemed like a good idea to try to compare the different implementations of the Command Handler.&lt;/p&gt;

&lt;p&gt;In preparation for the Open Pizza the year before, I had already started building a test of the frontend, using &lt;a href="https://github.com/igrishaev/etaoin" rel="noopener noreferrer"&gt;etaoin&lt;/a&gt; with the Chrome webdriver. So to do end-to-end performance testing I needed to complete that, and I eventually had a way of making transactions and validating them while measuring the time it took.&lt;/p&gt;

&lt;p&gt;Then I created some code around it to be able to do actual testing. I first tried a library that uses the pid of a process to measure CPU and memory usage, but it was tedious to get the correct pid each time, and I also doubted the measurements. The next improvement was to use Docker to set the project up. This way I could name the containers and didn't need the pids anymore. I used &lt;a href="https://github.com/lispyclouds/clj-docker-client" rel="noopener noreferrer"&gt;lispyclouds/clj-docker-client&lt;/a&gt; to measure CPU and memory of the Docker containers, after my pull request to add that feature was merged. Using Docker has some other benefits, like making it easier to switch JDKs. I ran it for some nights on my laptop, getting rid of some bugs that would sometimes break the testing program.&lt;/p&gt;
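&lt;p&gt;Measuring container CPU from the Docker stats API boils down to comparing two successive samples: the container's CPU-time delta relative to the host's, scaled by the number of CPUs. A minimal Rust sketch of that calculation (the function name and sample values are illustrative, not from clj-docker-client):&lt;/p&gt;

```rust
// CPU usage percent from two successive Docker stats samples, the same
// calculation `docker stats` performs: container CPU-time delta divided
// by the host's system CPU-time delta, scaled by the online CPU count.
fn cpu_percent(cpu_total: u64, precpu_total: u64,
               system_total: u64, presystem_total: u64,
               online_cpus: u64) -> f64 {
    let cpu_delta = cpu_total.saturating_sub(precpu_total) as f64;
    let system_delta = system_total.saturating_sub(presystem_total) as f64;
    if system_delta > 0.0 && cpu_delta > 0.0 {
        (cpu_delta / system_delta) * online_cpus as f64 * 100.0
    } else {
        0.0
    }
}

fn main() {
    // The container used 100 units of CPU time while the whole host
    // used 1000, on a machine with 4 CPUs: 40% of one core's worth.
    println!("cpu: {:.1}%", cpu_percent(200, 100, 1000, 0, 4));
}
```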

&lt;p&gt;Now I was able to generate a lot of data, but didn't have a way to visualize it yet. I ended up using a combination of &lt;a href="https://github.com/MastodonC/kixi.stats" rel="noopener noreferrer"&gt;kixi.stats&lt;/a&gt; to preprocess the data and &lt;a href="https://github.com/metasoarous/oz" rel="noopener noreferrer"&gt;oz&lt;/a&gt; to generate HTML with some &lt;a href="https://vega.github.io/" rel="noopener noreferrer"&gt;vega&lt;/a&gt; imports to show the data. The results can be viewed from the background tab on &lt;a href="https://open-bank.gklijs.tech/" rel="noopener noreferrer"&gt;open bank&lt;/a&gt;. Now I really want to add routing to the frontend, but I'd better not, or the project will keep going forever.&lt;/p&gt;
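&lt;p&gt;The kind of preprocessing involved here is summarizing raw latency measurements into a few statistics before charting them. A small Rust sketch of a nearest-rank percentile over measured transaction times (the names and numbers are illustrative, not taken from the project):&lt;/p&gt;

```rust
// Nearest-rank percentile over an already-sorted slice of latencies,
// the kind of summary a preprocessing step would feed into the charts.
fn percentile(sorted: &[f64], p: f64) -> f64 {
    let idx = ((p / 100.0) * (sorted.len() as f64 - 1.0)).round() as usize;
    sorted[idx]
}

fn main() {
    // Hypothetical end-to-end transaction latencies in milliseconds.
    let mut latencies = vec![12.0, 7.0, 9.0, 31.0, 8.0, 10.0, 11.0, 9.5];
    latencies.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mean: f64 = latencies.iter().sum::<f64>() / latencies.len() as f64;
    println!("mean: {:.2} ms", mean);
    println!("p99: {} ms", percentile(&latencies, 99.0));
}
```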

&lt;p&gt;Just before the presentation I figured I could also use the other Rust Kafka library. I had ignored it the first time because it didn't seem well maintained. It still isn't, but people are using it and creating issues on GitHub. Also, being a native Rust solution, it made it easy to get a tiny Docker image.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1132220825912979457-818" src="https://platform.twitter.com/embed/Tweet.html?id=1132220825912979457"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Despite a public transport strike that day, the meetup went ahead and about thirty people showed up. Since I had a lot of time to prepare for this talk, and around fifty people were expected, I really wanted to do a proper job preparing. But I also kept finding small errors, either in the code or in the test setup, so I only had my slides ready the night before the presentation, leaving me without time to rehearse out loud. That caused me to sometimes search for words a little, but overall I was quite pleased with how it went. The slides from the presentation are available.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://www.slideshare.net/slideshow/embed_code/key/bIEQ0qXtIlunJx" alt="bIEQ0qXtIlunJx on slideshare.net" width="100%" height="487"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do next?
&lt;/h2&gt;

&lt;p&gt;I've had several positive reactions from open sourcing and sharing the project. It was also added to the &lt;a href="https://lacinia.readthedocs.io/en/latest/samples.html" rel="noopener noreferrer"&gt;lacinia docs&lt;/a&gt; as an example project. I don't use Twitter very actively, so it was quite nice to have &lt;a href="https://twitter.com/scicloj" rel="noopener noreferrer"&gt;scicloj&lt;/a&gt; mention me.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1133853325164711936-703" src="https://platform.twitter.com/embed/Tweet.html?id=1133853325164711936"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The last additions to open-bank-mark were some small improvements to the documentation. I also thought it would be cleaner to leave only master and the 'one-broker' branch in the main project, and move the variants to my own fork. At the same time I added a variants section to the readme, which could host links to other variants. I don't know if I'll spend much more time on the project, though I would like to give Crux a try.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/xtdb" rel="noopener noreferrer"&gt;
        xtdb
      &lt;/a&gt; / &lt;a href="https://github.com/xtdb/xtdb" rel="noopener noreferrer"&gt;
        xtdb
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      An immutable SQL database for application development, time-travel reporting and data compliance. Developed by @juxt
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="adoc"&gt;
&lt;div&gt;
&lt;div&gt;
&lt;a rel="noopener noreferrer" href="https://github.com/xtdb/xtdbimg/xtdb-logo-banner.svg"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fxtdb%2Fxtdbimg%2Fxtdb-logo-banner.svg" alt="XTDB Logo"&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;p&gt;&lt;a href="https://xtdb.com" rel="nofollow noopener noreferrer"&gt;XTDB&lt;/a&gt; is an open-source immutable database with comprehensive time-travel. XTDB has been built to simplify application development and address complex data compliance requirements. XTDB can be used via SQL and &lt;a href="https://docs.xtdb.com/tutorials/introducing-xtql.html" rel="nofollow noopener noreferrer"&gt;XTQL&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;p&gt;XTDB 2.x is in 'beta' whilst we collaborate closely with our &lt;a href="https://forms.gle/K2bMsPxkbreKSKqs9" rel="nofollow noopener noreferrer"&gt;Design Partners&lt;/a&gt; ahead of General Availability; if you are looking for a stable release of an immutable document database with bitemporal query capabilities, we are continuing to develop and support XTDB 1.x at &lt;a href="https://github.com/xtdb/xtdb/tree/1.x" rel="noopener noreferrer"&gt;https://github.com/xtdb/xtdb/tree/1.x&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;p&gt;Major features:&lt;/p&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Immutable - XTDB is optimised for current-time queries, but you can audit the full history of your database at any point, without needing snapshots or accessing backups.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;'Bitemporal' - all data is accurately versioned as updates are made ('system' time), but it also allows you to separately record and query when that data is, was, or will become valid in your business domain ('valid' time).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Dynamic - you don’t need…&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/xtdb/xtdb" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Please let me know what you think of the story and/or of the project.&lt;/p&gt;

</description>
      <category>clojure</category>
      <category>graphql</category>
      <category>career</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
