<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amaboh</title>
    <description>The latest articles on Forem by Amaboh (@amaboh).</description>
    <link>https://forem.com/amaboh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F748163%2F8dfb6176-31e7-4a8a-999f-723df3119718.jpg</url>
      <title>Forem: Amaboh</title>
      <link>https://forem.com/amaboh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/amaboh"/>
    <language>en</language>
    <item>
      <title>Simplify Redis for noobs like myself</title>
      <dc:creator>Amaboh</dc:creator>
      <pubDate>Wed, 03 Aug 2022 12:38:47 +0000</pubDate>
      <link>https://forem.com/amaboh/simplify-redis-for-noobs-like-myself-3amc</link>
      <guid>https://forem.com/amaboh/simplify-redis-for-noobs-like-myself-3amc</guid>
      <description>&lt;p&gt;Hello, my friend and welcome to this short tutorial on using Redis as a cache system in your next project. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is Redis?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So what is Redis, and why go through the hassle of learning it? That's probably the question you've been asking yourself lately: you see it everywhere and feel like you're missing out. And yes, you are missing out, at least in terms of performance and speed. That is only the tip of the iceberg of what Redis can provide, but it's a good starting point to get your feet wet, and remember, Rome was not built in a day. That said, buckle up and let's explore this together.&lt;/p&gt;

&lt;p&gt;Redis is an in-memory data structure store, used as a distributed, in-memory key-value database, cache, and message broker, with optional durability. I know that is a lot to take in, so let me help you digest it slowly. At its core, Redis acts like a database that stores values against keys, much like the properties of an object, and it provides caching as well as message-broker capabilities similar to Kafka or RabbitMQ in a microservice architecture. Our focus, for now, is caching. &lt;/p&gt;

&lt;p&gt;Merely describing the caching capabilities would do them little justice, so let me show you with a vivid analogy: the water piping system of a house. &lt;/p&gt;

&lt;p&gt;Imagine a plumber designing a water system for a house who wants water from the utility company to reach the house in the shortest possible time. How would he design this, given that the water utility company is 1000 meters away from the house? I know you are no plumber, but this is something we see every day. Well, he has two options!&lt;/p&gt;

&lt;p&gt;The first is to run a pipeline straight from the water utility company to the house. &lt;/p&gt;

&lt;p&gt;The second is to install a water tank at the house, where water from the utility company is stored before being served to the taps. &lt;/p&gt;

&lt;p&gt;Hmmmm, so which do you think is more efficient? The second option. Each time a tap is opened in the house, the tank responds with water before the utility company does, so whenever water is available in the tank it reaches the tap in a shorter time. With the first option, by contrast, every open tap means the utility company must first supply water before the house gets any, which takes longer. This is an oversimplified picture, because obviously a real water supply does not work like this, but it drives the point home: the water tank is the cache system, and in our case that is Redis. &lt;/p&gt;

&lt;p&gt;This is how Redis cache functions in your application, thereby enabling fewer requests to your Database and delivering faster response time to any query. The diagram below illustrates the analogy of the Water tank and utility company explained in the previous paragraph.&lt;/p&gt;

&lt;p&gt;First Case without Redis&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eegf0a0dtkin95t5dxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eegf0a0dtkin95t5dxj.png" alt="Image description" width="744" height="574"&gt;&lt;/a&gt;&lt;br&gt;
In this case, all requests go directly to the server without any caching mechanism. This takes more time, and the responses are significantly slower. &lt;/p&gt;

&lt;p&gt;Second Case with Redis&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76atsn4dpjxdgqaclgtd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76atsn4dpjxdgqaclgtd.png" alt="Image description" width="779" height="603"&gt;&lt;/a&gt;&lt;br&gt;
In this case, Redis is implemented and serves the purpose of the water tank in the piping analogy. We observe a faster response time and spend fewer computational resources querying the database, because all queries go first to the Redis cache, which responds quickly. When the data is not yet in the cache (the first query), it is fetched from the database directly and then stored in the Redis cache, so subsequent requests get a lower response time. &lt;/p&gt;

&lt;p&gt;Alright my friends, it's time to leave the world of theory and storytelling and get our hands dirty. Let's code this into existence. I'll leave a link to the repo below so you can clone it and experiment with it. &lt;/p&gt;

&lt;p&gt;First, we need to install a stable version of Redis for your operating system. Check the link below and select a stable release for your OS: &lt;a href="https://redis.io/download/"&gt;https://redis.io/download/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For Mac users like myself, if you have Homebrew installed, just run &lt;code&gt;brew install redis&lt;/code&gt;, and check out this link for reference: &lt;a href="https://redis.io/docs/getting-started/installation/install-redis-on-mac-os/"&gt;https://redis.io/docs/getting-started/installation/install-redis-on-mac-os/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's open our code editor and go to the terminal. &lt;br&gt;
Change into the desired directory, for example &lt;code&gt;cd Desktop/desired_folder&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Type the following into the terminal to initialize our Node.js app and install dependencies. We'll be using Express to spin up our Node server, nodemon to watch for changes in our code, redis for our cache, dotenv to store environment variables such as our PORT number, and Axios to make API requests.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm init -y&lt;br&gt;
npm i express nodemon redis dotenv axios&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We need to make some adjustments to the package.json file in the root of the directory in order to ease our development process. Add &lt;code&gt;"type": "module"&lt;/code&gt; among the top-level key-value pairs of package.json; this lets us use named ES-module imports rather than Node's &lt;code&gt;require()&lt;/code&gt; syntax. In the scripts object of package.json, add the line &lt;code&gt;"start": "nodemon index.js"&lt;/code&gt;, which saves us from restarting Node manually after every change. &lt;/p&gt;
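&lt;p&gt;For reference, the relevant parts of package.json might look like the sketch below. The project name and version ranges here are placeholders of my own; yours will differ depending on when you install.&lt;/p&gt;

```json
{
  "name": "redis-cache-demo",
  "type": "module",
  "scripts": {
    "start": "nodemon index.js"
  },
  "dependencies": {
    "axios": "^1.0.0",
    "dotenv": "^16.0.0",
    "express": "^4.18.0",
    "redis": "^4.0.0"
  }
}
```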

&lt;p&gt;For simplicity, we shall not use a real database like MongoDB, but rather an API endpoint that serves JSON data: the JSONPlaceholder API. &lt;/p&gt;

&lt;p&gt;Let's instantiate our server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express from "express";
import dotenv from "dotenv";

dotenv.config(); // load environment variables

const app = express();
app.use(express.json()); // Express middleware to parse JSON bodies

const PORT = process.env.PORT || 5008;

app.listen(PORT, () =&amp;gt; {
  console.log(`Listening to ${PORT}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;npm start&lt;/code&gt; in the terminal and you'll get output like the following:&lt;br&gt;
&lt;code&gt;[nodemon] starting `node index.js`&lt;br&gt;
Listening to 5008&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let's start our Redis client and add a POST route&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { createClient } from "redis";

const client = createClient();

client.on("error", (err) =&amp;gt; console.log("Redis Client Error", err));

await client.connect();

app.post("/", async (req, res) =&amp;gt; {
  const { key, value } = req.body;
  const response = await client.set(key, value);
  const output = await client.get(key);
  res.json(output);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please check out the Redis client documentation in order to set up Redis properly: &lt;a href="https://www.npmjs.com/package/redis"&gt;https://www.npmjs.com/package/redis&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;To make a request to this route we'll use Postman.&lt;/p&gt;

&lt;p&gt;I assume you know how to use Postman; if not, please check this video from freeCodeCamp on making requests with Postman: &lt;a href="https://www.youtube.com/watch?v=VywxIQ2ZXw4"&gt;https://www.youtube.com/watch?v=VywxIQ2ZXw4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the response we get from a request using Postman. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmv8ohzf23y9bc8ehpf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmv8ohzf23y9bc8ehpf8.png" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's simulate what it would be like using a database by calling the JSONPlaceholder API endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import axios from "axios";

app.get("/posts/:id", async (req, res) =&amp;gt; {
  const { id } = req.params;

  const cachedPost = await client.get(`post-${id}`);

  if (cachedPost){return res.json(JSON.parse(cachedPost));}

  const response = await axios.get(
    `https://jsonplaceholder.typicode.com/posts/${id}`
  );

  client.set(`post-${id}`, JSON.stringify(response.data))
  res.json(response.data);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
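&lt;p&gt;One refinement worth knowing about: in a real cache you usually set an expiry so stale data does not live forever. Below is a minimal sketch using the &lt;code&gt;EX&lt;/code&gt; option of node-redis v4's &lt;code&gt;set&lt;/code&gt;. The helper names and the 60-second TTL are my own choices for illustration, not part of this tutorial's repo.&lt;/p&gt;

```javascript
// Store a post in the cache with a time-to-live, then read it back.
// `client` is any object with node-redis v4 style set/get methods.
async function cachePost(client, id, data, ttlSeconds = 60) {
  // EX tells Redis to delete the key after ttlSeconds
  await client.set(`post-${id}`, JSON.stringify(data), { EX: ttlSeconds });
}

async function readPost(client, id) {
  const raw = await client.get(`post-${id}`);
  return raw ? JSON.parse(raw) : null;
}
```

&lt;p&gt;With the real client returned by &lt;code&gt;createClient()&lt;/code&gt;, the call shape is exactly the same.&lt;/p&gt;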



&lt;p&gt;Let's make a GET request to the JSONPlaceholder API endpoint (&lt;a href="https://jsonplaceholder.typicode.com/posts"&gt;https://jsonplaceholder.typicode.com/posts&lt;/a&gt;) for post 24. We'll compare the response time of the first request, when the response is not cached, with the 2nd, 3rd, and 4th requests, when it is. &lt;/p&gt;

&lt;p&gt;The first request was without any cached data in Redis. We observe a response time of 1259 milliseconds. &lt;/p&gt;

&lt;p&gt;The second request has a far faster response time of 19 milliseconds, a significant change. It decreases further for the 3rd and 4th requests, with an average response time of 12 milliseconds.&lt;/p&gt;

&lt;p&gt;Below is the full code base.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express from "express";
import dotenv from "dotenv";
import { createClient } from "redis";
import axios from "axios";

dotenv.config();

const app = express();
app.use(express.json());

const client = createClient();

client.on("error", (err) =&amp;gt; console.log("Redis Client Error", err));

await client.connect();

const PORT = process.env.PORT || 5008;

app.post("/", async (req, res) =&amp;gt; {
  const { key, value } = req.body;
  await client.set(key, value);
  const output = await client.get(key);
  res.json(output);
});

app.get("/posts/:id", async (req, res) =&amp;gt; {
  const { id } = req.params;

  const cachedPost = await client.get(`post-${id}`);

  if (cachedPost){return res.json(JSON.parse(cachedPost));}

  const response = await axios.get(
    `https://jsonplaceholder.typicode.com/posts/${id}`
  );

  await client.set(`post-${id}`, JSON.stringify(response.data));
  res.json(response.data);
});

app.listen(PORT, () =&amp;gt; {
  console.log(`Listening to ${PORT}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: stringify the data with &lt;code&gt;JSON.stringify&lt;/code&gt; when setting it in Redis, and parse it with &lt;code&gt;JSON.parse&lt;/code&gt; when getting it back, since Redis stores plain strings. &lt;/p&gt;
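&lt;p&gt;A tiny standalone illustration of why this matters (no Redis required):&lt;/p&gt;

```javascript
// Redis string values are plain text, so objects must be serialized.
const post = { id: 24, title: "caching" };

const stored = JSON.stringify(post);  // what actually goes into Redis
const restored = JSON.parse(stored);  // what we get back out

console.log(typeof stored);  // "string"
console.log(restored.title); // "caching"
```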

&lt;p&gt;Github repo: &lt;a href="https://github.com/amaboh/Redis_hat"&gt;https://github.com/amaboh/Redis_hat&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you found this tutorial and explanation helpful. Happy keystroking!&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>tutorial</category>
      <category>codenewbie</category>
      <category>node</category>
    </item>
    <item>
      <title>Simplifying Microservices for noobs like myself</title>
      <dc:creator>Amaboh</dc:creator>
      <pubDate>Mon, 27 Jun 2022 09:08:50 +0000</pubDate>
      <link>https://forem.com/amaboh/simplifying-microservices-for-noobs-like-myself-2c1h</link>
      <guid>https://forem.com/amaboh/simplifying-microservices-for-noobs-like-myself-2c1h</guid>
      <description>&lt;p&gt;Well, you may be wondering what is an event-driven architecture if this is your first time coming across this terminology, if you've been struggling to comprehend this design pattern, well search no more my friend because I about to chop this into pieces so you lovely brain can apprehend all of this jazz and be helpful in your next project.&lt;/p&gt;

&lt;p&gt;Before we go on this odyssey, it's essential to understand why this design pattern was introduced, because every technology, my friend, exists to solve a problem. I like to think of microservices, and the event-driven architecture that often accompanies them, like the iPhone in the pre-smartphone era of the Nokia and the Blackberry. Just as we had analog phones like the Nokia before the smart era, we had monolithic applications before event-driven architecture, and just as the iPhone forever changed how phones were built, this pattern changed how applications are built. &lt;/p&gt;

&lt;p&gt;I know what you may be thinking now, my friend: I came here to read about EDA, but now this article is about monolithic applications. Well, it's important to know where you are coming from so you know where you are heading. A monolithic application is simply one big program consisting of several functionalities chained together in a single codebase. All of us started out with a monolithic application when we first tried to design a system with various functionalities. So if we're designing an e-commerce system with various layers or services, like authentication, products, and orders, the system we design and deploy would consist of a single codebase in a jar-like file such as the one below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm2vmdkhok8hemtblssy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm2vmdkhok8hemtblssy.png" alt="Image description" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It may seem fine to use this design, where all the different services are wrapped in a single jar-like file with one database, because it's easy to build, uses one tech stack, and has fewer problems related to network latency and security. But as much as there are advantages, there are also disadvantages to this architectural style, so let's look at some of the drawbacks of building applications as monoliths.&lt;/p&gt;

&lt;p&gt;Coming in at number one on our list is the fact that a monolith is difficult to maintain and manage, as the codebase becomes ever larger with each new requirement. Moreover, because of the jar-like structure, deployment becomes cumbersome: any small change to the application means redeploying the whole application. Adopting new technologies looks like a nightmare, because in this singleton structure every service must share the same tech stack. And the interdependent nature of monolithic applications means a breakdown in a single service is a breakdown of the whole system. &lt;/p&gt;

&lt;p&gt;Well, my friends, this doesn't mean monolithic applications have gone extinct like the dinosaurs. There are still many great companies out there using them, and a monolith is often the best design choice for personal projects. However, if you are building the next Facebook, Twitter, or Uber, something that must scale very fast, then my friend, we have to shop somewhere else. This is where microservice architecture comes in, frequently implemented as an event-driven application. This design pattern was derived from service-oriented architecture (SOA). We're not going to dive into SOA, though, because that would shift us from our focus, which is EDA. So, given the problems of monolithic applications, how do microservices come in as the hero to save us poor developers? &lt;/p&gt;

&lt;p&gt;Just as the name implies, these are a group of small services, each handling a small portion of the functionality and data, communicating with one another through an intermediary commonly known as a message broker. According to Sam Newman, microservices are "small services that work together."&lt;/p&gt;

&lt;p&gt;Thus we can see that there's a separation of concern in this design pattern and each application maintains its own database. The diagram below vividly illustrates this design architectural style. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs44q68ts8pdiuwpyrn7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs44q68ts8pdiuwpyrn7q.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;So my friend, you may be asking, "what's the big deal with this separation?" Well, this fragmentation means each part of your application is much freer to choose its own tech stack, is independently deployable, and so much more. Let's look at some of the principles of microservices. &lt;/p&gt;

&lt;p&gt;Single responsibility: It is one of the principles defined as a part of the SOLID design pattern. It states that a single unit, either a class, a method, or a microservice should have one and only one responsibility. Each microservice must have a single responsibility and provide a single functionality. You can also say that: the number of microservices you should develop is equal to the number of functionalities you require. The database is also decentralized and, generally, each microservice has its own database.&lt;br&gt;
Built around business capabilities: In today’s world, where so many technologies exist, there is always a technology that is best suited for implementing a particular functionality. But in monolithic applications, it was a major drawback, as we can’t use different technology for each functionality and hence, need to compromise in particular areas. A microservice shall never restrict itself from adopting an appropriate technology stack or backend database storage that is most suitable for solving the business purpose, i.e., each microservice can use different technology based on business requirements.&lt;br&gt;
Design for failure: Microservices must be designed with failure cases in mind. Microservices must exploit the advantage of this architecture and going down one microservice should not affect the whole system, other functionalities must remain accessible to the user. But this was not the case in the Monolithic applications, where the failure of one module leads to the downfall of the whole application.&lt;/p&gt;

&lt;p&gt;Let's take a look at some of the pros and cons of microservices, because like everything under the sun, they come with disadvantages too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of Microservices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is relatively easy to manage due to the separation of concerns and the small size of each service and functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redeploying is made easy, as a change in any single service doesn't entail redeploying the entire application. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is fairly easy to onboard new developers, since each one needs to understand only a particular microservice and avoids the stress of untangling the spaghetti code of a monolithic application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microservices support horizontal scaling: when a particular service demands too many resources, only that service needs to be scaled out, optimizing resource allocation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microservices are fault-tolerant and resilient to bugs, because if one service goes down it doesn't take the others with it; the rest of the system remains intact and continues to provide its functionality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages of Microservices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Being a distributed system, it is much more complex than a monolithic application, and the complexity grows with the number of microservices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Skilled developers are required to work with a microservices architecture, to identify the service boundaries and manage the inter-service communication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Independent deployment of microservices is complicated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microservices are costly in terms of network usage, as they need to interact with each other and all these remote calls result in network latency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microservices are less secure relative to monolithic applications due to the inter-services communication over the network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Debugging is difficult as the control flows over many microservices and to point out why and where exactly the error occurred is a difficult task.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Wow, this may seem like a lot, and the disadvantages can even be confusing to digest, which brings us to the hard question. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When should you use this architecture?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, it is important to note that microservices are ideal for improving agility and moving quickly. The following four scenarios are worth weighing when considering adopting microservices in your application. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;1. Cross-account, cross-region data replication&lt;/em&gt;&lt;br&gt;
When you have teams operating and deploying across different regions, accounts and operations, you can consider adopting this architecture. When using event routers to transfer data between systems, you can develop, scale and deploy services independently from other teams. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;2. Fan-out and parallel processing&lt;/em&gt;&lt;br&gt;
When operating numerous systems which need responses from an event in order to operate, you can implement Microservices architecture to fan out the event without having to write custom code to push the event to the systems, each of which can process the event in parallel with a driven purpose. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;3. Resource state monitoring and alerting&lt;/em&gt;&lt;br&gt;
This design can be implemented to continuously check on your resources by monitoring and receiving alerts on any changes, events, and updates. These resources can include storage buckets, database tables, serverless functions, compute nodes, and more.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;4. Integration of heterogeneous systems&lt;/em&gt;&lt;br&gt;
If you have teams working with different tech stacks, you can use this architecture to share information between them without coupling. The event router establishes indirection and interoperability among the systems, so they can exchange messages and data while remaining agnostic of one another. &lt;/p&gt;

&lt;p&gt;This was a high-level overview; we'll go low-level in a subsequent write-up, where we'll use a Node.js app and RabbitMQ to implement a microservice architecture in a small e-commerce application. Until then, happy keystroking!!!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Getting initiated into algorithmic thinking. How to think like a programmer for newbies like myself</title>
      <dc:creator>Amaboh</dc:creator>
      <pubDate>Sun, 15 May 2022 17:13:48 +0000</pubDate>
      <link>https://forem.com/amaboh/getting-initiated-into-algorithmic-thinking-how-to-think-like-a-programmer-for-newbies-like-myself-4dai</link>
      <guid>https://forem.com/amaboh/getting-initiated-into-algorithmic-thinking-how-to-think-like-a-programmer-for-newbies-like-myself-4dai</guid>
      <description>&lt;p&gt;One of the most sort after skills as a developer is the ability to solve problems and most often in the world of computers this is synonymous to algorithmic thinking. In simple terms, this is the ability to break a task into smaller pieces and piece them together to get a task done. &lt;br&gt;
SO why is this skill important as a young developer starting his or her journey you may be asking? Well as a software engineer you would always be task with writing softwares to solve human problems, and thus you need to know how to tell the computer how to perform a certain tasks. Since computers are design to perform as programmed, thus your job as a programmer is to tell the computer how to do perform a certain task and this involves a series of steps, which requires that you understand how to piece a program together to solve a problem. &lt;br&gt;
SO explore this with this little initiation algorithm  into Algo Thinking. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FizzBuzz&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Write a program that uses console.log to print all the numbers from 1 to 100, with two exceptions. For numbers divisible by 3, print "Fizz" instead of the number, and for numbers divisible by 5 (and not 3), print "Buzz" instead.&lt;br&gt;
When you have that working, modify your program to print "FizzBuzz" for numbers that are divisible by both 3 and 5 (and still print "Fizz" or "Buzz" for numbers divisible by only one of those).&lt;br&gt;
(This is actually an interview question that has been claimed to weed out a significant percentage of programmer candidates. So if you solved it, your labor market value just went up.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So how can we solve this challenge? &lt;br&gt;
Well, as you think about it, take a moment to understand what the outcome of this program should be. If you've figured that out already, then congrats: you just completed the first important piece of the puzzle. Understanding the outcome helps us piece together the process required to arrive at that imagined result. &lt;/p&gt;

&lt;p&gt;Ok, let's put your assumption to the test and see if your imagined result is right. I suppose you have something like this in mind.&lt;br&gt;
&lt;code&gt;&lt;br&gt;
1&lt;br&gt;
2&lt;br&gt;
Fizz&lt;br&gt;
4&lt;br&gt;
Buzz&lt;br&gt;
Fizz&lt;br&gt;
7&lt;br&gt;
8&lt;br&gt;
Fizz&lt;br&gt;
Buzz&lt;br&gt;
11&lt;br&gt;
Fizz&lt;br&gt;
13&lt;br&gt;
14&lt;br&gt;
FizzBuzz&lt;br&gt;
&lt;/code&gt; &lt;br&gt;
If you had this in mind, then congrats on solving the first piece of the puzzle. Next, we can observe a pattern here, and with our young dev intuition we can spot a loop. Is that right? Yes, I presume you just said that. If not, let's go back and think about it: if we're to make a program that goes through the numbers 1 to 100, then obviously we need to loop through them and set conditions, so that whenever a condition is met the program takes the corresponding action to produce our desired outcome. This brings us to the third part of our puzzle, which is using if statements. &lt;/p&gt;

&lt;p&gt;Thus we can summarize our algorithm in the following lines before writing our code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* loop through{
if (x == "value1") action1();
else if (x == "value2") action2(); else if (x == "value3") action3(); else defaultAction();
} */
;

function fizzBuzz(){
  for (let i = 1; i &amp;lt; 100 ; ++i){
    if( (i% 3 == 0) &amp;amp;&amp;amp; (i % 5 ==0)){
      console.log("FizzBuzz")
    }else if(i % 3 == 0){
      console.log("Fizz")
    }else if( i % 5 == 0){
      console.log(" Buzz")
    }else{
      console.log(i)
    }
  }
}

fizzBuzz()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alright then, copy this, open Google Chrome on your laptop, and press Cmd + Option + I. This opens the developer tools; go to the Console tab and paste in the code. Congrats on getting initiated, but you've only just started, so delete it and think about what happens each time the loop runs, until it arrives at the 100th digit. This is just to help you develop the thinking mindset of a programmer. I also recommend Codewars if you haven't yet registered an account on the platform. Happy hacking, dev. &lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>algorithms</category>
    </item>
    <item>
      <title>Protected routes for user login</title>
      <dc:creator>Amaboh</dc:creator>
      <pubDate>Tue, 10 May 2022 09:52:39 +0000</pubDate>
      <link>https://forem.com/amaboh/protected-routes-for-user-login-1bf0</link>
      <guid>https://forem.com/amaboh/protected-routes-for-user-login-1bf0</guid>
      <description>&lt;p&gt;When developing an application one of the most important features is a user signup and sign in page. For some of us NOdeJs die heart developers, one major issue is making a user sign in and not being able to go back to the home page and ensuring that only sign in users can have access to some routes such as the home page or dash board. &lt;br&gt;
I'll illustrate below how to achieve this functionality using a middleware in NodeJs. &lt;/p&gt;

&lt;p&gt;First, before we dive into code snippets, let's set out the case in which we would use this piece of code. &lt;br&gt;
The project we're using to illustrate this is a personal blog which allows anyone to sign up and write a personal or public journal. The main technologies used in this project are Node.js, MongoDB, and Express, and the dependencies include dotenv, express-handlebars, passport, passport-google-oauth20, morgan, moment, method-override, and express-session.&lt;/p&gt;

&lt;p&gt;This is a beginner-friendly project, but it's expected that you already have a good understanding of JavaScript and Node.js, because we'll dive straight into the part that involves protected routes. &lt;/p&gt;

&lt;p&gt;App.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.use('/auth', require('./routes/auth'))  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;routes/auth.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// @desc Auth with Google
// @route GET /auth/google

router.get('/google',    passport.authenticate('google', {scope: ['profile']}))


// @desc Google auth callback
// @route GET /auth/google/callback

router.get(
    '/google/callback', 
    passport.authenticate('google', {failureRedirect: '/'}),
    (req, res) =&amp;gt; {
    res.redirect('/dashboard')
    }
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the lines above we're using passport-google-oauth20 to authenticate users based on the Google strategy defined in the config/passport.js file (I'll leave a link to the code repository for reference). &lt;/p&gt;

&lt;p&gt;The first route in routes/auth.js kicks off the Google sign-in flow and returns the user's profile, which is used in different parts of the application with the User model to extract the user's information and store it in the database (refer to the GitHub repo).&lt;br&gt;
The second route is the callback: if authentication fails, the user is redirected to the home page; if it succeeds, the user is redirected to the dashboard. &lt;/p&gt;

&lt;p&gt;This is where our problem arises: how do we ensure that a signed-in user can't go back to the home page, which is the login page? And how do we ensure that an unregistered user cannot access the dashboard? This is where middleware comes in for protected routes. The code below shows the routes before any protection is added, which allows unauthenticated users to access any route.&lt;/p&gt;

&lt;p&gt;routes/index.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// @desc Login/Landing page
// @route GET /

router.get('/', (req, res) =&amp;gt; {
    res.render('login', {
        layout: 'login',
    })
})

// @desc Dashboard
// @route GET /dashboard

router.get('/dashboard', async (req, res) =&amp;gt; {
    try {
        const stories = await Story.find({user: req.user.id}).lean()
        res.render('dashboard', {
            name: req.user.firstName,
            stories
        })
    } catch (error) {
        console.error(error)
        res.render('error/500')
    }
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In order to fix this problem, we have to create middleware that ensures only a signed-in user has access to the dashboard, and that a signed-in user cannot return to the login screen on the home page. &lt;/p&gt;

&lt;p&gt;Thus we create a new folder named middleware and a file named auth.js, with the following lines of code. &lt;/p&gt;

&lt;p&gt;middleware/auth.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
    ensureAuth: function (req, res, next) {
        if (req.isAuthenticated()) {
            return next()
        } else {
            res.redirect('/')
        }
    },
    ensureGuest: function (req, res, next) {
        if (req.isAuthenticated()) {
            res.redirect('/dashboard')
        } else {
            return next()
        }
    },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first function, ensureAuth, checks whether the request is authenticated via req.isAuthenticated(). If the user is signed in, it calls next() and the request proceeds; otherwise it redirects the user to the home page to sign in. It is therefore the middleware to apply to the '/dashboard' route, so that only authenticated users can reach the dashboard. &lt;/p&gt;

&lt;p&gt;The second function, ensureGuest, does the opposite: it prevents authenticated users from going back to the sign-up page. If a signed-in user attempts to access the index '/' route, they are redirected to the dashboard; guests are allowed through to sign in. &lt;/p&gt;

&lt;p&gt;So, let's see how this middleware is imported and implemented in the index route. &lt;/p&gt;

&lt;p&gt;routes/index.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express')
const router = express.Router()
const Story = require('../models/Story')

const {ensureAuth, ensureGuest} = require('../middleware/auth')

// @desc Login/Landing page
// @route GET /

router.get('/', ensureGuest, (req, res) =&amp;gt; {
    res.render('login', {
        layout: 'login',
    })
})

// @desc Dashboard
// @route GET /dashboard

router.get('/dashboard', ensureAuth, async (req, res) =&amp;gt; {
    try {
        const stories = await Story.find({user: req.user.id}).lean()
        res.render('dashboard', {
            name: req.user.firstName,
            stories
        })
    } catch (error) {
        console.error(error)
        res.render('error/500')
    }
})

module.exports = router
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From the code snippet above, we can see how we import the middleware by destructuring, and how we pass the middleware function as the second argument to router.get. This is how we implement middleware, my friend, ensuring that only certain users have access to certain routes. Happy hacking, my friends, and check out the repo below for reference: &lt;a href="https://github.com/amaboh/whisperApp"&gt;https://github.com/amaboh/whisperApp&lt;/a&gt;&lt;/p&gt;
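&lt;p&gt;To see the two guards in isolation, here is a minimal sketch of my own (an illustration, not code from the repo) that exercises ensureAuth and ensureGuest with stubbed req and res objects, so you can watch each branch fire without starting a server:&lt;/p&gt;

```javascript
// Minimal, self-contained sketch (illustration only): the same two guards
// as middleware/auth.js, driven by stubbed req/res objects.
const auth = {
    ensureAuth: function (req, res, next) {
        if (req.isAuthenticated()) {
            return next()
        } else {
            res.redirect('/')
        }
    },
    ensureGuest: function (req, res, next) {
        if (req.isAuthenticated()) {
            res.redirect('/dashboard')
        } else {
            return next()
        }
    },
}

// run() fakes just enough of Express: it records whether the middleware
// called next() or redirected, and to where.
function run(mw, authenticated) {
    let outcome = null
    const req = { isAuthenticated: () => authenticated }
    const res = { redirect: (url) => { outcome = 'redirect:' + url } }
    mw(req, res, () => { outcome = 'next' })
    return outcome
}

console.log(run(auth.ensureAuth, true))   // next (dashboard renders)
console.log(run(auth.ensureAuth, false))  // redirect:/ (back to login)
console.log(run(auth.ensureGuest, true))  // redirect:/dashboard
console.log(run(auth.ensureGuest, false)) // next (login renders)
```

&lt;p&gt;Running it prints the decision each guard makes for an authenticated versus a guest request, which is exactly the behavior the two protected routes rely on.&lt;/p&gt;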

</description>
    </item>
    <item>
      <title>Building a Data Analytics Platform for a Fintech: My Journey into Google Cloud</title>
      <dc:creator>Amaboh</dc:creator>
      <pubDate>Sun, 10 Apr 2022 14:24:17 +0000</pubDate>
      <link>https://forem.com/amaboh/react-context-simplified-8dn</link>
      <guid>https://forem.com/amaboh/react-context-simplified-8dn</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkdhdiplziida9mcau5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkdhdiplziida9mcau5i.png" alt="Nkwa Cloud `infrastructure" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;About four months after joining Nkwa, a fintech startup focused on improving financial inclusion in Cameroon, I faced a critical challenge: developing a data platform that could handle everything from raw ingestion to advanced analytics and regulatory reporting. Before joining Nkwa, my professional background was primarily in treasury analysis, working with financial instruments, managing liquidity, and conducting risk assessments. I had spent a good part of my early career buried in spreadsheets, bank statements, and treasury management systems, far from the world of modern data engineering. So, being handed the responsibility of building an analytics reporting platform on the cloud felt both exciting and intimidating.&lt;/p&gt;

&lt;p&gt;To add to the pressure, our small but ambitious company needed to rapidly scale its services. We were working to provide financial products to the unbanked in Cameroon—people who had never previously had access to conventional banking services. Our mission was to bring financial empowerment to these communities. As more users signed up and our operations expanded, it became clear that we needed a robust data platform. Not just one that stored data, but one that would allow us to run analytical queries, generate product and financial reports, and comply with regulatory requirements. It also needed to be cost-effective, secure, and designed for rapid iteration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Starting Point: Understanding the Data Sources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the first steps was to understand the variety and complexity of our data sources. Nkwa’s mobile app collected a wealth of user data. This data lived in Firebase Cloud Firestore, a NoSQL database known for its ease of integration with mobile apps, real-time capabilities, and horizontal scaling. But that was only one piece of the puzzle.&lt;/p&gt;

&lt;p&gt;We also relied heavily on APIs from our mobile payment partners: MTN Mobile Money (MoMo) and Orange Money. These services handled user transactions—deposits, withdrawals, peer-to-peer transfers—and provided data through their proprietary APIs. Initially, just reading through their documentation felt like learning a new language. Each partner had its own authentication methods, pagination rules, rate limits, and data formats. I remember many late nights combing through PDF documents and developer portals, trying to understand how to pull the right transaction data without breaking their usage limits or missing important attributes.&lt;/p&gt;

&lt;p&gt;In addition, we integrated data from Beac (beac.int), the central bank serving Cameroon and the other Central African (CEMAC) states. This was critical for financial benchmarking. Beac provided daily exchange rates, information on savings interest rates, and other macroeconomic indicators. Such data helped us benchmark our financial products and ensure we were offering competitive and fair services, while also staying compliant with regulations. Understanding Beac’s datasets was like dealing with official government reports and spreadsheets, less tech-friendly but no less important.&lt;/p&gt;

&lt;p&gt;By the end of this discovery phase, I realized that our data environment was not a single homogenous source. We had streaming data from the Nkwa app, batch data from MTN and Orange APIs (which could be polled periodically), and more static yet critical data from Beac. Each source had its unique format and refresh cycle. To support analytics, we needed a platform that could handle these disparate data flows gracefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choosing the Technology Stack: Why Google Cloud?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I joined Nkwa, I had a background in Python and SQL, mostly from my treasury analyst days where I ran SQL queries on internal financial databases and wrote Python scripts for some automation tasks. However, building a scalable data platform in the cloud was new territory. I had roughly six months to get comfortable with Google Cloud. Luckily, cloud providers today offer extensive documentation, tutorials, and community support, which helped speed up my learning.&lt;/p&gt;

&lt;p&gt;We chose Google Cloud for a few reasons. First, the company had a strategic preference for GCP due to existing infrastructure and the ecosystem’s simplicity. Also, Google Cloud services like BigQuery, Cloud Storage, and Dataflow fit very well into the modern data analytics landscape. They are fully managed, meaning we wouldn’t have to spend hours provisioning servers or managing patches. This freed us to focus on solving data problems rather than infrastructure ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Laying Out the Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After some brainstorming and research, we established a blueprint for our data platform. We wanted a clear flow: ingestion, transformation, storage, and then analytics. The final architecture looked something like this:&lt;br&gt;
    &lt;strong&gt;1.    Data Sources Layer:&lt;br&gt;
    • Nkwa App (Firebase Cloud Firestore):&lt;/strong&gt; This provided user information, transactional metadata, and behavioral data straight from our mobile application.&lt;br&gt;
    &lt;strong&gt;•   External Partners (MTN Mobile Money, Orange Money):&lt;/strong&gt; We accessed their APIs to retrieve transactional data. At first, this required manual scheduling and polling, but we planned to automate it.&lt;br&gt;
    &lt;strong&gt;•   Beac (beac.int):&lt;/strong&gt; We pulled in exchange rates, savings interest rates, and regulatory benchmarks.&lt;br&gt;
    &lt;strong&gt;2.    Ingestion Layer:&lt;/strong&gt;&lt;br&gt;
We decided to use Apache Kafka as our data streaming service. Kafka provided a scalable, fault-tolerant way to bring all this data into one place. For data that was event-driven (like user transactions in real-time), pushing them into Kafka felt natural. For batch data from Beac or partner APIs, we adapted and wrote Python scripts that would fetch data and push it into Kafka at predefined intervals.&lt;br&gt;
Integrating Kafka with GCP might seem non-traditional since we have Pub/Sub as a native solution, but we already had some Kafka expertise and found it easier at the time. However, I must admit, if we had started fresh or wanted pure GCP-managed services, Pub/Sub might have been the better option.&lt;br&gt;
    &lt;strong&gt;3.    Processing Layer (Google Dataflow):&lt;/strong&gt;&lt;br&gt;
Once the data landed in Kafka, it was time to process and transform it. We used Google Dataflow for this. Dataflow is a fully managed service for handling both streaming and batch processing jobs, built on Apache Beam. I found Dataflow approachable, especially since I was familiar with Python and SQL. It allowed us to write pipelines in Python and let Dataflow handle the scaling.&lt;br&gt;
For example, a Dataflow job could take raw transaction logs from the app, join them with user details from Firestore exports, and enrich them with the latest exchange rates from Beac. The output would be a clean, well-structured dataset ready for analysis.&lt;br&gt;
    &lt;strong&gt;4.    Storage and Analytics Layer:&lt;/strong&gt;&lt;br&gt;
Cloud Storage acted as a landing area for any raw files we extracted. Some APIs provided JSON or CSV dumps; we stored them in Cloud Storage before processing. It served as a cost-effective data lake, giving us a place to keep data indefinitely while controlling costs.&lt;br&gt;
BigQuery was the heart of our analytics stack. It’s a serverless data warehouse that allows running SQL queries at scale. This was perfect for me since I was used to SQL from my treasury days. In BigQuery, we could store curated datasets and run complex queries without worrying about capacity planning or indexing. It also integrated seamlessly with Cloud Storage and Dataflow.&lt;br&gt;
With BigQuery, we performed aggregations to produce key metrics: user growth rates, transaction volumes, average transaction sizes, cost of service delivery, and many other KPIs that the business and regulatory bodies wanted to see.&lt;br&gt;
    &lt;strong&gt;5.    Orchestration (Cloud Composer):&lt;/strong&gt;&lt;br&gt;
Building the pipelines was one thing; orchestrating them was another. We introduced Cloud Composer, a fully managed Apache Airflow service, to schedule and monitor our data workflows. Composer allowed us to run daily tasks—for instance, at 2 AM every day, fetch the Beac exchange rates and interest rates, store them in Cloud Storage, process them with Dataflow, and load the results into BigQuery. At 3 AM, run aggregation queries in BigQuery to update the reporting tables. By 8 AM, the BI dashboards would have fresh data for the management team. It sounds simple now, but it took time and testing to get these dependencies right.&lt;br&gt;
    &lt;strong&gt;6.    BI and Reporting Tools:&lt;/strong&gt;&lt;br&gt;
Once our analytics-friendly datasets were in BigQuery, they could be accessed by various tools. Whether it was a simple SQL client, a business intelligence tool, or even a data science notebook, BigQuery served as a single source of truth. The management team could view dashboards to track user growth, the finance team could pull reports to comply with regulatory bodies, and data scientists could run more complex models to predict user churn or transaction fraud.&lt;/p&gt;
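&lt;p&gt;As an illustration of the batch side of the ingestion layer, here is a small sketch of my own (the field names, topic name, and sample values are hypothetical, not Beac's real schema): it normalizes one raw exchange-rate record into the JSON bytes we would hand to a Kafka producer. The producer call itself is left as a comment so the shaping logic stands alone:&lt;/p&gt;

```python
import json
from datetime import date

def to_kafka_message(raw, as_of=None):
    """Normalize a raw exchange-rate record (as parsed from a Beac data
    feed download) into the JSON payload published to a Kafka topic.
    Field names here are illustrative, not Beac's actual schema."""
    as_of = as_of or date.today().isoformat()
    payload = {
        "source": "beac",
        "as_of": as_of,
        "currency_pair": raw["pair"].upper(),
        "rate": float(raw["rate"]),
    }
    # Kafka message values are bytes; JSON keeps the schema human-readable.
    return json.dumps(payload, sort_keys=True).encode("utf-8")

if __name__ == "__main__":
    msg = to_kafka_message({"pair": "eur/xaf", "rate": "655.957"},
                           as_of="2022-04-10")
    print(msg.decode("utf-8"))
    # In production this value would go to a producer, e.g. with kafka-python:
    # producer.send("beac.rates", value=msg)   # topic name illustrative
```

&lt;p&gt;A scheduled script like this, run at a predefined interval, is all the "batch to Kafka" bridge really was; the same shape works for the partner-API pollers.&lt;/p&gt;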

&lt;p&gt;&lt;strong&gt;Challenges and How I Overcame Them&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adapting from treasury analysis to building a data platform in four months was not without hurdles. My previous experience gave me a good grasp of data modeling and some programming basics, but understanding distributed systems, real-time data processing, and cloud-native architectures was new.&lt;br&gt;
    &lt;strong&gt;•   Learning Curve on GCP:&lt;/strong&gt;&lt;br&gt;
The documentation and tutorials helped, but it was still overwhelming at times. I often started with small experiments on my personal GCP sandbox. I would spin up a Dataflow job with a small sample of data and test how transformations worked. I learned how BigQuery’s SQL syntax differed slightly from traditional SQL systems I knew. Over time, I got more confident and started to see patterns and best practices.&lt;br&gt;
    &lt;strong&gt;•   Integrating Partner APIs:&lt;/strong&gt;&lt;br&gt;
Reading documentation from MTN and Orange took patience. Sometimes the docs were incomplete, and I had to open support tickets or check community forums for answers. Dealing with authentication tokens, handling rate limits, and ensuring we had the right API keys in place were all chores that required care and attention to detail. I learned to log every step of the ingestion process and keep careful track of errors. Good logging saved me countless hours of guesswork.&lt;br&gt;
    &lt;strong&gt;•   Working with Beac Data:&lt;/strong&gt;&lt;br&gt;
Beac data was not presented as a modern API but more of a data feed or manual download. We had to write scripts to fetch this data, parse the sometimes messy formats, and convert them into a structured schema. It felt like going through official financial bulletins line by line, but the payoff was huge. Once this data was cleanly integrated, we could benchmark our product rates effectively.&lt;br&gt;
    &lt;strong&gt;•   Scaling and Cost Management:&lt;/strong&gt;&lt;br&gt;
One of the trickier aspects was ensuring that as we brought more data into the system, the costs didn’t skyrocket. BigQuery charges based on the amount of data processed, and Dataflow costs can add up if pipelines run continuously. We learned to optimize queries, partition BigQuery tables by date, and compress data in Cloud Storage. Over time, we established governance practices—reviewing queries regularly, archiving older data that wasn’t frequently accessed, and consolidating transformations.&lt;br&gt;
    &lt;strong&gt;•   Time Pressure and the Need for Incremental Wins:&lt;/strong&gt;&lt;br&gt;
With only four months into the job, I couldn’t afford to build everything perfectly from day one. Instead, I focused on delivering incremental wins. First, I set up a basic pipeline for one data source. Then I added another source, then another transformation step, and so on. Each small success boosted my confidence and helped the team trust the new platform. By the end of the four months, we had a decent system in place, and I had learned a tremendous amount along the way.&lt;/p&gt;
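&lt;p&gt;To make the cost point concrete, here is a back-of-the-envelope helper of my own (the on-demand price per TiB below is an assumption; check current GCP pricing): BigQuery's on-demand model bills by bytes scanned, which is exactly why date-partitioning pays off, since a query that prunes to one day's partition scans only a sliver of the table.&lt;/p&gt;

```python
TIB = 1024 ** 4  # bytes in one tebibyte

def estimate_scan_cost(bytes_scanned, usd_per_tib=5.0):
    """Rough on-demand BigQuery cost model: bytes scanned times price per TiB.
    usd_per_tib is an assumed list price, not a quoted one."""
    return bytes_scanned / TIB * usd_per_tib

# A query over a full 2 TiB table vs. one pruned to a single date partition
# holding roughly 1/365 of the data:
full_scan = estimate_scan_cost(2 * TIB)        # 10.0 (USD)
pruned = estimate_scan_cost(2 * TIB / 365)
print(full_scan, round(pruned, 4))
```

&lt;p&gt;In practice we applied this by partitioning BigQuery tables on the transaction date and making sure reporting queries always filtered on that column.&lt;/p&gt;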

&lt;p&gt;&lt;strong&gt;Leveraging Existing Skills and Learning New Ones&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My background in Python and SQL was a real lifesaver. Python helped in writing custom Dataflow pipelines and orchestrating tasks. SQL was crucial for querying BigQuery and understanding how to structure our tables for efficient querying. Even though I was new to cloud data engineering, my previous skill set bridged the gap. I could incrementally grow my knowledge of GCP services without feeling completely lost.&lt;/p&gt;

&lt;p&gt;As I got more comfortable, I realized that having some frontend development skills would be beneficial, too. At first, it seemed unrelated, but understanding frontend principles allowed me to appreciate the importance of well-organized data structures for reporting and visualization tools. I took a few courses on Scrimba, an online platform that offers interactive coding tutorials. Learning frontend development basics made it easier for me to understand how the BI team would use our data. If they needed a certain data point to display on the dashboard, I knew exactly how to structure it in BigQuery to make their lives easier.&lt;/p&gt;

&lt;p&gt;This cross-training had another benefit: it improved communication with other team members. When I talked to the frontend developers, I could speak their language a bit. When I discussed transformations with data scientists, I knew how to provide the cleanest inputs for their models. This holistic view of the data’s journey—from ingestion all the way to user-facing dashboards—made me more effective in my role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As I reflect on those first four months at Nkwa, I realize how pivotal they were in my career. Shifting from a treasury analyst to a data platform developer in a fintech startup taught me lessons that I would carry forward:&lt;br&gt;
    &lt;strong&gt;1.    Embrace Change and Uncertainty:&lt;/strong&gt;&lt;br&gt;
I stepped out of my comfort zone. The tools, the ecosystem, even the type of data I was dealing with—it all changed. But by embracing that change, I discovered a world of technology and processes that make finance more accessible.&lt;br&gt;
    &lt;strong&gt;2.    Start Small and Iterate:&lt;/strong&gt;&lt;br&gt;
Instead of trying to build the perfect platform from day one, I focused on small wins. One pipeline at a time, one data source at a time. This incremental approach helped me learn faster and show progress to stakeholders.&lt;br&gt;
    &lt;strong&gt;3.    Leverage Existing Skills and Build New Ones:&lt;/strong&gt;&lt;br&gt;
My Python and SQL background provided a strong foundation. On top of that, I learned GCP’s services and picked up some frontend development concepts from Scrimba. This combination made me a more versatile contributor.&lt;br&gt;
    &lt;strong&gt;4.    Documentation and Communication are Key:&lt;/strong&gt;&lt;br&gt;
Reading partner API documentation carefully, asking questions on forums, and writing detailed internal documentation for my pipelines saved time and prevented mistakes.&lt;br&gt;
    &lt;strong&gt;5.    Cost and Performance Considerations Matter:&lt;/strong&gt;&lt;br&gt;
At scale, even small inefficiencies become expensive. I learned to think about query optimization, data partitioning, and workflow scheduling from day one.&lt;br&gt;
    &lt;strong&gt;6.    Understand the Business Context:&lt;/strong&gt;&lt;br&gt;
The data platform wasn’t just a technical feat; it served the company’s mission of financial inclusion. By understanding the business goals—such as compliance reporting to regulators and improving product offerings for the unbanked—I was able to design data models and pipelines that aligned with what our stakeholders truly needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In just a few months, I went from examining treasury reports to orchestrating a complex data analytics platform on Google Cloud. I learned how to integrate disparate data sources—Firebase Firestore for our app data, MTN and Orange Money APIs for transaction records, and Beac for macroeconomic indicators—into a cohesive system. I leveraged Apache Kafka for ingestion, Google Dataflow for transformations, and BigQuery for analytics, all orchestrated by Cloud Composer. Our BI and data science teams could then produce actionable insights, empowering the business and satisfying regulatory requirements.&lt;/p&gt;

&lt;p&gt;It wasn’t easy, but that’s what made the journey worthwhile. The lessons I took away—both technical and personal—shaped my approach to problem-solving and teamwork. Today, that data platform continues to evolve, supporting Nkwa’s vision of bringing financial services to those who need them most. And I’m proud to have played a part in building it.&lt;/p&gt;

</description>
      <category>python</category>
      <category>database</category>
      <category>api</category>
    </item>
  </channel>
</rss>
