<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: FINN</title>
    <description>The latest articles on Forem by FINN (@finnauto).</description>
    <link>https://forem.com/finnauto</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5536%2F446ed79d-e1c8-400a-8ee5-11cf4d30f908.png</url>
      <title>Forem: FINN</title>
      <link>https://forem.com/finnauto</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/finnauto"/>
    <language>en</language>
    <item>
      <title>Managing SaaS for Growth: Insights into the FINN Way</title>
      <dc:creator>Verena Ermes</dc:creator>
      <pubDate>Thu, 14 Dec 2023 16:08:00 +0000</pubDate>
      <link>https://forem.com/finnauto/managing-saas-for-growth-insights-into-the-finn-way-4ibf</link>
      <guid>https://forem.com/finnauto/managing-saas-for-growth-insights-into-the-finn-way-4ibf</guid>
      <description>&lt;p&gt;In today's economic environment, effective SaaS (Software as a Service) management can be a game-changer. In a series of LinkedIn posts, &lt;a href="https://www.linkedin.com/in/andreasstryz/" rel="noopener noreferrer"&gt;Andi&lt;/a&gt;, our CTO, dove into our SaaS management strategies. Let's take a closer look at these strategies, from welcoming new tools with open arms to mastering the art of negotiation.&lt;/p&gt;

&lt;h2&gt;Roll Out a Welcome Mat&lt;/h2&gt;

&lt;p&gt;At FINN, innovation and autonomy go hand in hand. We believe in giving teams the freedom to choose the tools that best suit their needs. This not only boosts productivity but also fosters a sense of ownership and accountability. When individuals work with tools they love, they tend to perform at their best. As shown by the &lt;a href="https://dora.dev/devops-capabilities/technical/teams-empowered-to-choose-tools/" rel="noopener noreferrer"&gt;DevOps Research and Assessment (DORA)&lt;/a&gt; team, giving teams the autonomy to make tool choices increases the teams’ performance and job satisfaction.&lt;/p&gt;

&lt;h2&gt;Central Overview Monitored by the CTO Office&lt;/h2&gt;

&lt;p&gt;As a result of providing autonomy to choose and change tools, we saw the number of SaaS tools grow from seven at the beginning of 2020 to over 80 in 2023. It’s therefore essential to maintain control over expenses. The CTO office keeps a close watch on the SaaS expenditures. Any tool crossing the €50/month threshold automatically becomes part of a central overview monitored by the CTO office. We have developed an in-house vendor management solution that provides the central overview and helps us streamline spending and optimize SaaS investments.&lt;/p&gt;

&lt;h2&gt;Meet HUBERT: The Vendor Management Solution&lt;/h2&gt;

&lt;p&gt;Managing contracts for numerous SaaS tools can quickly become chaotic. That is why we created HUBERT, our internal vendor management tool. HUBERT centralizes contract information, ensures data integrity, and notifies billing DRIs (the people responsible for uploading invoices to our expense management tool) about upcoming renewal dates through automated Slack reminders. It enhances transparency and accountability, and it automatically spots emerging SaaS tools to identify shadow IT. With HUBERT in the picture, we keep the full overview in one place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F010byao37jkum3zcptmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F010byao37jkum3zcptmj.png" alt="Figure 1. HUBERT, our vendor management solution."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 1.&lt;/strong&gt; HUBERT, our vendor management solution.&lt;/em&gt;&lt;/p&gt;
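
&lt;p&gt;To make the reminder flow concrete, here is a minimal sketch of how such a renewal notification could be sent to Slack. It is not HUBERT's actual implementation: the webhook URL, the contract fields, and the 30-day reminder window are illustrative assumptions.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from datetime import date

import requests

# Hypothetical incoming-webhook URL; HUBERT's real integration may differ.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"


def notify_upcoming_renewals(contracts):
    """Ping the billing DRI of every contract renewing within 30 days."""
    today = date.today()
    for contract in contracts:
        days_left = (contract["renewal_date"] - today).days
        if 0 &lt;= days_left &lt;= 30:  # assumed reminder window
            requests.post(
                SLACK_WEBHOOK_URL,
                json={
                    "text": (
                        f"&lt;@{contract['billing_dri']}&gt; The contract for "
                        f"{contract['tool_name']} renews in {days_left} days. "
                        "Please upload the latest invoice."
                    )
                },
                timeout=10,
            )
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;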

&lt;h2&gt;Create a SaaS Repository&lt;/h2&gt;

&lt;p&gt;From day one, we recognized the value of maintaining a complete SaaS repository. This repository, hosted on Confluence, has proven invaluable for various purposes. Whether evaluating potential partners, conducting audits, preparing due diligence inquiries, or analyzing the tech stack, this repository provides critical insights and can now also be derived directly from HUBERT: we developed an export functionality to download a shareable view of our SaaS repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk0yf09dgks97eyvzxqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk0yf09dgks97eyvzxqz.png" alt="Figure 2. Our Third Party Services repository."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 2.&lt;/strong&gt; Our Third Party Services repository.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;SaaS Spend and User Inactivity Management&lt;/h2&gt;

&lt;p&gt;Closely related to the challenge of keeping a central overview of SaaS contracts is the challenge of optimizing SaaS investments. To optimize SaaS spending, we follow a dual approach: &lt;strong&gt;Provision ALL, Deprovision FAST&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision ALL: We provide new starters with access to the ecosystem of widely adopted tools, reducing onboarding time and fostering autonomy.&lt;/li&gt;
&lt;li&gt;Deprovision FAST: By automatically identifying and deprovisioning inactive accounts, we have achieved a 7% reduction in costs since we started tracking in July 2023. The automated processes sync with Google Workspace and are triggered either by user inactivity of more than three months or by employment contract end dates, as sketched below.&lt;/li&gt;
&lt;/ul&gt;
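
&lt;p&gt;As a rough illustration of the inactivity side of Deprovision FAST, the following sketch finds Google Workspace users whose last login is more than three months old, using the Admin SDK Directory API. Credential setup and the deprovisioning step itself are omitted; treat this as an illustration rather than our production code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from datetime import datetime, timedelta, timezone

from googleapiclient.discovery import build  # google-api-python-client

INACTIVITY_CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)


def find_inactive_users(credentials):
    """Return primary emails of users inactive for more than ~3 months."""
    directory = build("admin", "directory_v1", credentials=credentials)
    inactive = []
    request = directory.users().list(customer="my_customer", maxResults=500)
    while request is not None:
        response = request.execute()
        for user in response.get("users", []):
            # lastLoginTime is an RFC 3339 timestamp, e.g. "2023-07-01T12:00:00.000Z"
            last_login = datetime.fromisoformat(
                user["lastLoginTime"].replace("Z", "+00:00")
            )
            if last_login &lt; INACTIVITY_CUTOFF:
                inactive.append(user["primaryEmail"])
        request = directory.users().list_next(request, response)
    return inactive
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;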

&lt;p&gt;A dashboard visualizes the ratio of active to inactive users per tool for better visibility, and the automations help us reduce spending and minimize manual auditing efforts.&lt;/p&gt;

&lt;h2&gt;The Art of Negotiation&lt;/h2&gt;

&lt;p&gt;We view negotiation as the big sister of inactive user management when it comes to optimizing SaaS investments. At FINN, we like to approach negotiations playfully with a spirit of sportsmanship. Everyone involved in SaaS negotiations learns these four guiding principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Turn Negotiations into a Game: We treat negotiations as a friendly competition, encouraging the Tech Leadership to secure the highest discounts.&lt;/li&gt;
&lt;li&gt;Preparation is Key: In-depth market analyses, needs assessments, Requests for Proposal, and decision matrices ensure we enter negotiations well prepared.&lt;/li&gt;
&lt;li&gt;Ask for More: The golden rule is to ask for a discount that makes us slightly uncomfortable, ensuring we get the best deal.&lt;/li&gt;
&lt;li&gt;Anchor the Deal: We aim to improve the price or get more value for the same cost, resulting in a win-win for the company.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In conclusion, our approach to SaaS management revolves around empowering our teams, centralizing information, optimizing spending, and negotiating effectively. By implementing these strategies, we are following one central Engineering paradigm at FINN: &lt;strong&gt;we buy commodities, we build assets&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are your strategies to manage software as a service?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>saas</category>
      <category>management</category>
    </item>
    <item>
      <title>User-Centric Technical Writing: A conversation with Kosara Golemshinska on elevating documentation</title>
      <dc:creator>Verena Ermes</dc:creator>
      <pubDate>Thu, 07 Dec 2023 16:10:00 +0000</pubDate>
      <link>https://forem.com/finnauto/user-centric-technical-writing-a-conversation-with-kosara-golemshinska-on-elevating-documentation-4f40</link>
      <guid>https://forem.com/finnauto/user-centric-technical-writing-a-conversation-with-kosara-golemshinska-on-elevating-documentation-4f40</guid>
      <description>&lt;p&gt;Documentation can oftentimes come as an afterthought. At FINN, technical writing is set up as docs-as-code approach to be closely aligned with new developments in the Tech teams. But how can you elevate documentation further and ensure that the audiences' needs are served? How do you align priorities and document the most impactful things? We are trying to get answers to these questions and more by sitting down with &lt;a href="https://www.linkedin.com/in/kosara-g/"&gt;Kosara Golemshinska&lt;/a&gt;, Technical Writer at &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;, who started her tech writing career at FINN and is continuously shaping user-centric documentation.&lt;/p&gt;

&lt;h2&gt;Hi, Kosara 👋 Thank you for offering to be interviewed. To dive right in, could you explain what your journey into technical writing looked like?&lt;/h2&gt;

&lt;p&gt;By education, I am a software engineer. As such, I did hear a lot about the importance of documentation, the different types of documentation, and what role it plays in software development. During each semester in my program at university, we worked on a lot of practical projects in small agile teams. I always ended up advocating for documentation, making sure it gets done, and trying to ensure that it was in a proper state. That was sort of the first indication I had that I am really passionate about documentation, managing knowledge, and managing information.&lt;/p&gt;

&lt;p&gt;After university, I went into pure engineering roles, and there too, I still ended up advocating for documentation. I saw the impacts that insufficient documentation had on actual software teams firsthand. Especially in my previous role, I looked a lot at test documents. I noticed that the documentation there was in such a format and in such a state that it did more harm than good. That was when I felt ready to make the transition into being a documentarian.&lt;/p&gt;

&lt;p&gt;As someone who's always been very interested in the humanities, it really made sense to combine both my technical skills and my language and communication skills into one. Luckily, I found the Technical Writer role at FINN. It was a really good place for me to make the transition because of the role’s focus on software documentation. Right up my alley. And yeah, this is how I'm here today.&lt;/p&gt;

&lt;h2&gt;What is your current view on documentation at FINN?&lt;/h2&gt;

&lt;p&gt;It is appreciated and needed. I can see that documentation is part of everyone's job. Everyone takes the time to document their knowledge, whether it is on Confluence, Slack, or in a document that they share with others. The idea that documentation should not only record knowledge but also help generate it is widespread. Documentation is used to brainstorm and to align on the next steps before diving into the details. I think this wide recognition is why we have a very diverse ecosystem of documentation.&lt;/p&gt;

&lt;p&gt;At the same time, I also see the effort that people put in to maintain their own documentation. They assign a DRI when they create something and this is the person who's going to follow up. They also try to maintain some sort of standardization in their teams, which is really cool to see.&lt;/p&gt;

&lt;p&gt;The documentation is as diverse as the teams themselves. There are so many different types of information being created. And it really depends on the roles and the functions that people have. Documentation at FINN is a living, breathing creature.&lt;/p&gt;

&lt;h2&gt;How do you value the impact of technical writing on our products?&lt;/h2&gt;

&lt;p&gt;At FINN, we view it as an enabling function since the documentation that technical writing is involved in is entirely internal. If we take our API developer portal as an example, it's an excellent resource that is used not just by the engineering teams but also by the product managers and some business roles to understand how to use our APIs. They hold each other accountable whenever they see issues or need clarification. The responsible team that owns that API goes in and updates the documentation. They see the value in it.&lt;/p&gt;

&lt;p&gt;That makes documentation a part of the API landscape. The APIs are essential for how we do business at FINN. But that doesn’t apply to just API documentation. We also have pretty extensive business-related documentation that is often referenced to align on how things should be done. And it's especially critical in our daily operations. For instance, in Finance, where the stakes are always high, documentation is an integral part of how we operate and what our source of truth is. Because if we can't agree on the truth, we can’t do anything. Documentation serves precisely as this source of truth, for all teams.&lt;/p&gt;

&lt;h2&gt;You have been studying the book &lt;a href="https://www.splunk.com/en_us/blog/splunklife/the-product-is-docs.html"&gt;The Product is Docs&lt;/a&gt;. What are your key takeaways?&lt;/h2&gt;

&lt;p&gt;I think it's a very good resource. It's not just an introduction but it also has a lot of advanced topics for all sorts of roles. Any documentarian is going to tell you that we do some writing but that's not the majority of our tasks. We do a lot of research, communication, editing, reviewing, and verifying information. That book, The Product is Docs, focuses precisely on all the processes surrounding documentation. It provides practical advice, for example on the process for defining a learning objective or how to communicate with product managers. It focuses heavily on the processes of technical writing and information development. This makes it really valuable to me and I often use it as a reference. I always find something new in it because there's so much practical advice given, which makes it a mainstay at my desk.&lt;/p&gt;

&lt;h2&gt;Looking at the research aspects of technical writing, how do you perceive the value of leaning into the profile of a User Experience (UX) researcher?&lt;/h2&gt;

&lt;p&gt;I think there's a lot of overlap. The fields of UX writing, content design, and technical writing can also be called information development and knowledge management. All of these topics are closely connected, as their main focus is managing knowledge. They also rely heavily on empathizing with users, doing user research, communicating with lots of different functions, and using common patterns to manage our content and to make things accessible for users. So, I think you can't really do technical writing at an advanced level if you aren't familiar with UX research practices. The main idea is to understand the users, to understand their needs, and to understand how to provide the most value to them. That's what technical writing is really all about.&lt;/p&gt;

&lt;h2&gt;How do you imagine an ideal rhythm of technical documentation throughout FINN?&lt;/h2&gt;

&lt;p&gt;It has to be a very scalable way of doing documentation, because FINN is a very fast-growing company. We defined our own documentation strategy to best match the mission of FINN and to best support our stakeholders. We looked at who our main stakeholders are, what their needs are, and how we can support them. Based on that, we defined our mission: to make knowledge a top-tier resource. This is the overarching idea of what technical writing at FINN is all about.&lt;/p&gt;

&lt;p&gt;The mission needs to rest on a set of fundamental principles. We defined these as our pillars: documentation should be user-centric, searchable, and accurate. These pillars are the building blocks of our strategy. For each pillar, we defined activities and metrics to track our progress against the strategy. This is how we approached defining a strategy, which was similar to how it was actually done in our data organization at FINN as well. We're trying to be aligned in how we approach these high-level topics.&lt;/p&gt;

&lt;h2&gt;What is most challenging when putting such a strategy into action?&lt;/h2&gt;

&lt;p&gt;It’s definitely managing to keep track of and monitor the metrics that we have defined. Documentation is something that is notoriously tricky to measure. It's not impossible, it's just not straightforward. You need to combine a lot of different things. You need to look at different sources and always view the metrics in context which makes documentation measurement kind of special.&lt;/p&gt;

&lt;p&gt;It's definitely a learning experience and I'm curious to see what we will learn from the impact of our metrics. Maybe the impact is going to show us that some of the metrics are not the best choice. Maybe we will see that we need a different approach or we’ll see something new about our stakeholders and their use cases.&lt;/p&gt;

&lt;h2&gt;Speaking of stakeholders, how do you currently ensure that you are aligned with your stakeholders?&lt;/h2&gt;

&lt;p&gt;The main thing is communication. The wider field of documentation is often described as technical communication and I think that's a really good way of putting it. So, technical writing is all about maintaining open communication channels and being in a constant feedback loop with pretty much everyone.&lt;/p&gt;

&lt;p&gt;We have not just regular meetings that we attend as technical writers but we also have one-on-ones where we try to dive deeper into current challenges and goals of individual stakeholders. We apply UX research and look at specific pieces of documentation and assess how usable they are. We also apply principles from information architecture to make the documentation better organized and, overall, more accessible. Essentially, we apply a lot of different techniques to make sure that we understand what we're supposed to be documenting. We always aim to put our efforts where they create the most impact. In short, the answer is communication, communication, communication.&lt;/p&gt;

&lt;h2&gt;Thank you very much for sharing your insights into technical writing at FINN. What advice would you give someone who wants to start a career in technical writing?&lt;/h2&gt;

&lt;p&gt;My first piece of advice is to connect with the very strong technical writing community. There are lots of groups, for instance on LinkedIn or in the &lt;a href="https://www.writethedocs.org/"&gt;Write the Docs&lt;/a&gt; community. You can find many conferences, too. Look up some meetups and enjoy them. People are really open to newcomers and always happy to share their learnings.&lt;/p&gt;

&lt;p&gt;My second piece of advice is to decide on a specific technical writing profile that you want to pursue. There are many avenues for technical writers. Decide on the type of tech stack you want to learn and the type of industry that you want to go into, then focus on that. That's really going to have the most impact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do you approach technical writing in your organization? Share your insights in the comments below!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>writing</category>
      <category>documentation</category>
      <category>ux</category>
    </item>
    <item>
      <title>How we monitor our Make instance at FINN</title>
      <dc:creator>Delena Malan</dc:creator>
      <pubDate>Thu, 07 Sep 2023 14:21:00 +0000</pubDate>
      <link>https://forem.com/finnauto/how-we-monitor-our-make-instance-at-finn-1f9a</link>
      <guid>https://forem.com/finnauto/how-we-monitor-our-make-instance-at-finn-1f9a</guid>
      <description>&lt;p&gt;&lt;em&gt;How we monitor our Make Enterprise instance at FINN using the Make API, AWS Lambda functions, and Datadog.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In this post, I'm going to show you how we monitor our &lt;a href="https://www.make.com/" rel="noopener noreferrer"&gt;Make&lt;/a&gt; instance at &lt;a href="https://www.finn.com/" rel="noopener noreferrer"&gt;FINN&lt;/a&gt; as a Make Enterprise customer. I'll run you through why we monitor our instance, how the technical setup works, and how &lt;a href="https://www.datadoghq.com/" rel="noopener noreferrer"&gt;Datadog&lt;/a&gt; helps us to stay on top of problematic Make scenarios.&lt;/p&gt;

&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;At FINN, we use Make extensively to help us automate business processes. &lt;br&gt;
We are one of Make's biggest customers, with about 3,500 active scenarios, running for a total of about 20,000 minutes (or 13.8 days) per day.&lt;/p&gt;

&lt;p&gt;We use Make Enterprise, meaning we have a private Make instance that runs on its own infrastructure. Our extensive use of Make often pushes our instance to its limits. When the instance has too much work to do simultaneously, a backlog of scenario executions forms and it becomes slow.&lt;/p&gt;

&lt;p&gt;Many of our scenarios are time-sensitive, and it is vital that they are executed within a reasonable amount of time. Some scenarios are critical to our business operations; therefore, we need to stay on top of any issues or errors.&lt;/p&gt;
&lt;h2&gt;Infrastructure dashboard&lt;/h2&gt;

&lt;p&gt;Although we don't have direct access to our instance's underlying infrastructure, Make has provided us with a Datadog dashboard to help us monitor it. Most importantly, this dashboard shows us at what capacity our workers are running and how many scenario executions are currently queued:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmr06mrsdxsep37s1uh2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmr06mrsdxsep37s1uh2c.png" alt="Figure 1. Our Make Infrastructure Datadog dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 1.&lt;/strong&gt; Our Make Infrastructure Datadog dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Although this dashboard helps us identify &lt;em&gt;when&lt;/em&gt; our instance is close to reaching its capacity, it doesn't allow us to identify &lt;em&gt;why&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In the past, we had to contact Make's support team to help us figure out which scenarios or modules might be causing our infrastructure to struggle.&lt;/p&gt;

&lt;p&gt;To help us identify which specific scenarios use most of our instance's capacity, we created our own monitoring setup for Make using the &lt;a href="https://www.make.com/en/api-documentation/" rel="noopener noreferrer"&gt;Make API&lt;/a&gt;, AWS Lambda functions, and Datadog.&lt;/p&gt;
&lt;h2&gt;Technical Overview&lt;/h2&gt;

&lt;p&gt;In this section, I'll give you a quick overview of the technical architecture behind our monitoring setup before showing you what our dashboards look like and running you through some of Datadog's benefits. &lt;/p&gt;

&lt;p&gt;In a nutshell, we fetch data from Make's API using scheduled Python AWS Lambda functions and push it to Datadog's API as metrics. We store the Make data in a PostgreSQL database to use as a cache and for record-keeping.&lt;/p&gt;

&lt;p&gt;In figure 2, you can see a diagram of our setup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9jwy5kaikpeqkyr1ch9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9jwy5kaikpeqkyr1ch9.png" alt="Figure 2. Architecture of our monitoring setup."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 2.&lt;/strong&gt; Architecture of our monitoring setup.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As shown in figure 2, we use &lt;a href="https://aws.amazon.com/eventbridge/" rel="noopener noreferrer"&gt;AWS EventBridge&lt;/a&gt; to schedule Lambda functions to run every few minutes. We have a few different Lambda functions that run on different schedules depending on which data they fetch and which metrics they create.&lt;/p&gt;
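
&lt;p&gt;For illustration, a schedule like this can be created with a couple of boto3 calls. This is a sketch: the rule name, rate, and ARNs are placeholders, and in practice such resources are typically defined in infrastructure-as-code rather than ad hoc.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

events = boto3.client("events")

# Create (or update) a rule that fires every five minutes.
events.put_rule(
    Name="make-monitoring-schedule",  # placeholder rule name
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)

# Point the rule at the Lambda function that fetches the Make data.
# (The Lambda also needs a resource-based permission, added via
# lambda.add_permission, so that EventBridge may invoke it.)
events.put_targets(
    Rule="make-monitoring-schedule",
    Targets=[
        {
            "Id": "make-monitoring-lambda",
            "Arn": "arn:aws:lambda:eu-central-1:123456789012:function:make-monitoring",
        }
    ],
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;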

&lt;p&gt;The public Make API gives us most of the data we need for our metrics. In the following sections, I'll highlight which API routes we use for which metrics.&lt;/p&gt;

&lt;p&gt;To get an in-depth analysis of individual executions, we use one of the Make web application's API routes, which isn't part of the official public Make API.&lt;/p&gt;

&lt;p&gt;Each Lambda function starts by querying our PostgreSQL database to get cached data. Next, the function calls the Make API to retrieve the latest data. It caches this data in our database to decrease the number of API calls it needs to make the next time it runs. The function transforms the data and pushes it as metrics to the Datadog API.&lt;/p&gt;
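
&lt;p&gt;Put together, the shape of each function is roughly the following. This is a structural sketch only; &lt;code&gt;get_cached_scenarios&lt;/code&gt;, &lt;code&gt;fetch_scenarios_from_make&lt;/code&gt;, &lt;code&gt;upsert_cache&lt;/code&gt;, and &lt;code&gt;push_metrics_to_datadog&lt;/code&gt; are hypothetical helpers standing in for the steps described above.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def handler(event, context):
    """Scheduled Lambda: cache -&gt; Make API -&gt; transform -&gt; Datadog."""
    # 1. Query the PostgreSQL cache for the data we already have.
    cached = get_cached_scenarios()

    # 2. Call the Make API for the latest data, using the cache to
    #    limit how many API calls we need to make.
    latest = fetch_scenarios_from_make(since=cached["last_updated"])

    # 3. Write the fresh data back to the cache for the next run.
    upsert_cache(latest)

    # 4. Transform the data into metrics and push them to Datadog.
    push_metrics_to_datadog(latest)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;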

&lt;p&gt;We use the &lt;a href="https://pypi.org/project/datadog/" rel="noopener noreferrer"&gt;&lt;code&gt;datadog&lt;/code&gt;&lt;/a&gt; Python package to interface with the Datadog API.&lt;/p&gt;
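
&lt;p&gt;Before sending anything, the client has to be initialized once with API credentials, along these lines (the environment variable names are our assumption here; in a Lambda, this runs once per cold start):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

import datadog

# Initialize the Datadog client with keys from the Lambda environment.
datadog.initialize(
    api_key=os.environ["DATADOG_API_KEY"],
    app_key=os.environ["DATADOG_APP_KEY"],
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;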

&lt;p&gt;Datadog supports a list of &lt;a href="https://docs.datadoghq.com/metrics/types/" rel="noopener noreferrer"&gt;metric types&lt;/a&gt;. We use &lt;a href="https://docs.datadoghq.com/metrics/types/?tab=gauge#metric-types" rel="noopener noreferrer"&gt;"GAUGE" metrics&lt;/a&gt; to keep track of current values, such as queue lengths, data store usage, scenario consumptions, and others.&lt;/p&gt;

&lt;p&gt;The following code sample shows how we use the &lt;code&gt;datadog.api.Metric.send&lt;/code&gt; method to send the &lt;code&gt;make.hook.length&lt;/code&gt; gauge metric to Datadog for each scenario:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;scenario&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;scenarios&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;datadog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;metric&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;make.hook.length&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;metric_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gauge&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;points&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scenario&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dlqCount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])],&lt;/span&gt;
        &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scenario_id:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scenario&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scenario_name:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scenario&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hook_id:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scenario&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hookId&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;attach_host_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For individual data points on which we want to do statistical analysis, for example, scenario execution times, we use the &lt;a href="https://docs.datadoghq.com/metrics/types/?tab=distribution#metric-types" rel="noopener noreferrer"&gt;"DISTRIBUTION" metric type&lt;/a&gt;. The following code sample shows how one of our Lambda functions collects scenario logs and bundles them up into a list of points to send to Datadog's &lt;a href="https://docs.datadoghq.com/api/latest/metrics/#submit-distribution-points" rel="noopener noreferrer"&gt;Submit distribution points&lt;/a&gt; API route:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;datadog_api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Distribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;distributions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metric&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;make.scenario.duration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;points&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;end_time_ms&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;  &lt;span class="c1"&gt;# timestamp in seconds
&lt;/span&gt;                    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;duration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
                &lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tags&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scenario_id:1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scenario_name:my_scenario&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;team_id:1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;distribution&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;attach_host_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Why Datadog?&lt;/h2&gt;

&lt;p&gt;Before we move on to the different metrics we track, I want to give you a quick rundown of why we use Datadog to monitor our Make instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single Datadog dashboard can display thousands of metrics. We wanted our dashboards to display the metrics of each of our 3,500 (and counting) scenarios.&lt;/li&gt;
&lt;li&gt;Datadog allows you to create beautiful dashboards using various types of &lt;a href="https://docs.datadoghq.com/dashboards/widgets/" rel="noopener noreferrer"&gt;graphs and widgets&lt;/a&gt;. We wanted to make our dashboards available to everyone in the company. Thus, the dashboards needed to be accessible to everyone, including non-technical users.&lt;/li&gt;
&lt;li&gt;Datadog metrics support custom &lt;a href="https://docs.datadoghq.com/getting_started/tagging/" rel="noopener noreferrer"&gt;tags&lt;/a&gt;. Figure 3 shows how we use custom metric tags to display and filter our metrics by, for example, organisation, team, author, or scenario:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkewhb7d18hbp54pt3zwg.png" alt="Figure 3. Our dashboard filters."&gt;
&lt;em&gt;&lt;strong&gt;Figure 3.&lt;/strong&gt; Our dashboard filters.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Datadog's &lt;a href="https://docs.datadoghq.com/dashboards/guide/context-links/" rel="noopener noreferrer"&gt;context links&lt;/a&gt; allow us to link to external applications from graphs. We use context links to add links to our scenarios in Make for quick access. Figure 4 shows an example of a context link to a scenario:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh8pgkdeq2nqcjcv98v3.png" alt="Figure 4. Screenshot of a context link to open a scenario."&gt;
&lt;em&gt;&lt;strong&gt;Figure 4.&lt;/strong&gt; Screenshot of a context link to open a scenario.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Datadog supports single sign-on (SSO) with Google Workspace. SSO allows for seamless onboarding and offboarding of our employees to the platform.&lt;/li&gt;
&lt;li&gt;Setting up monitors/alerts in Datadog and sending them to Slack is easy. Figure 5 shows an example of an alert that we received in Slack:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis4frs75mbv30wtajffm.png" alt="Figure 5. An example alert in Slack."&gt;
&lt;em&gt;&lt;strong&gt;Figure 5.&lt;/strong&gt; An example alert in Slack.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Datadog allows you to do aggregations and get statistics on metrics to, for example, get the sum of a metric over a period of time or to get the top 10 values of a metric.&lt;/li&gt;
&lt;li&gt;Last but not least, we already had some experience with Datadog, having used it to monitor some of our internal services at FINN.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Datadog isn't without its downsides, the high cost being the main one. We generate thousands of custom metrics per hour because we submit unique metrics per scenario to Datadog, which can become expensive.&lt;/p&gt;

&lt;p&gt;We managed to keep our Datadog costs under control, however, by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;using Datadog's &lt;a href="https://docs.datadoghq.com/metrics/metrics-without-limits/" rel="noopener noreferrer"&gt;"Metrics without Limits"&lt;/a&gt; feature to limit the number of metrics that are indexed,&lt;/li&gt;
&lt;li&gt;not sending zero-value metrics to Datadog, and&lt;/li&gt;
&lt;li&gt;creating a dashboard to keep track of our estimated costs using Datadog's &lt;a href="https://docs.datadoghq.com/account_management/billing/usage_metrics/" rel="noopener noreferrer"&gt;estimated usage metrics&lt;/a&gt;:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2pidvv3j0y1v6eefwem.png" alt="Figure 6. Our 'Estimated Datadog Costs' dashboard."&gt;
&lt;em&gt;&lt;strong&gt;Figure 6.&lt;/strong&gt; Our "Estimated Datadog Costs" dashboard.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Our metrics and dashboards&lt;/h2&gt;

&lt;p&gt;In the following sections, I'll run you through the metrics we track, why we track them, and how we use them.&lt;/p&gt;

&lt;h3&gt;Hook queue lengths&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8w2gz6vlyxf9toyuvgq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8w2gz6vlyxf9toyuvgq.png" alt="Figure 7. The 'Hooks' section in our Make-monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 7.&lt;/strong&gt; The "Hooks" section in our Make-monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A key feature of Make is its incoming hooks functionality. Hooks support both &lt;a href="https://www.make.com/en/help/tools/webhooks" rel="noopener noreferrer"&gt;webhooks&lt;/a&gt; and email triggers.&lt;/p&gt;

&lt;p&gt;We use hooks to trigger Make scenarios from other Make scenarios (in other words, "chaining" scenarios), from third-party services, and from our own services.&lt;/p&gt;

&lt;p&gt;It's important to us to keep an eye out for hook queues that are too long or are rapidly increasing. Something downstream may be taking too long, or an upstream service might be calling the hook too frequently. We don't want our workers to be overwhelmed by the number of hook executions they need to process.&lt;/p&gt;

&lt;p&gt;We get the hook queue lengths from the &lt;a href="https://www.make.com/en/api-documentation/hooks-get" rel="noopener noreferrer"&gt;&lt;code&gt;/hooks&lt;/code&gt;&lt;/a&gt; API route, and we have a monitor to notify us when a hook's queue is growing too quickly.&lt;/p&gt;
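
&lt;p&gt;A simplified version of that fetch could look as follows. This is a sketch: the instance URL is a placeholder, the token is read from the environment, and the queue-count field name is an assumption that may differ between Make versions.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

import requests

MAKE_BASE_URL = "https://your-instance.make.com/api/v2"  # placeholder instance URL


def fetch_hook_queue_lengths(team_id):
    """Return (hook, queue length) pairs for all hooks of a team."""
    response = requests.get(
        f"{MAKE_BASE_URL}/hooks",
        params={"teamId": team_id},
        headers={"Authorization": f"Token {os.environ['MAKE_API_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    hooks = response.json()["hooks"]
    return [(hook, hook.get("queueCount", 0)) for hook in hooks]  # assumed field name
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;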

&lt;h3&gt;Incomplete Executions&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s4son5twbe1s5y4ecwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s4son5twbe1s5y4ecwy.png" alt="Figure 8. The 'Incomplete executions' section in our Make-monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 8.&lt;/strong&gt; The "Incomplete executions" section in our Make-monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Make allows us to store &lt;a href="https://www.make.com/en/help/scenarios/incomplete-executions" rel="noopener noreferrer"&gt;incomplete executions&lt;/a&gt; when scenarios encounter errors. We can fix the error and reprocess these executions.&lt;/p&gt;

&lt;p&gt;We have a monitor to notify us when the incomplete executions of a scenario are rapidly growing so that we can stay on top of scenarios that are raising errors. &lt;/p&gt;

&lt;p&gt;We get the number of incomplete executions for each scenario from the &lt;code&gt;dlqCount&lt;/code&gt; field in the &lt;a href="https://www.make.com/en/api-documentation/scenarios-get" rel="noopener noreferrer"&gt;&lt;code&gt;/scenarios&lt;/code&gt;&lt;/a&gt; API route's response.&lt;/p&gt;

&lt;h3&gt;Operations and Transfers&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwrpuixgexwv4i1nwmwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwrpuixgexwv4i1nwmwg.png" alt="Figure 9. The 'Operations' section of our Make-monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 9.&lt;/strong&gt; The "Operations" section of our Make-monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq4cjk3r5pl4gm0bts6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq4cjk3r5pl4gm0bts6s.png" alt="Figure 10. The 'Transfers' section of our Make-monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 10.&lt;/strong&gt; The "Transfers" section of our Make-monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Make's &lt;a href="https://www.make.com/en/help/general/pricing-parameters#usage-allowance" rel="noopener noreferrer"&gt;pricing plans&lt;/a&gt; are determined according to the number of operations you purchase. For every 10,000 operations, you receive a number of allowed transfers. We track how many operations and transfers each of our scenarios uses to see how our allocated operations and transfers are used.&lt;/p&gt;

&lt;p&gt;These graphs also assist our investigations when our Make workers are running close to capacity, helping us see which scenarios have been running operation-intensive or transfer-intensive workloads.&lt;/p&gt;

&lt;p&gt;We get these metrics from the &lt;code&gt;consumptions&lt;/code&gt; field in the &lt;a href="https://www.make.com/en/api-documentation/scenarios-get" rel="noopener noreferrer"&gt;&lt;code&gt;/scenarios&lt;/code&gt;&lt;/a&gt; API route's response.&lt;/p&gt;

&lt;h3&gt;Data stores&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zeew6ylnhgme19dywrz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zeew6ylnhgme19dywrz.png" alt="Figure 11. The 'Data Stores' section of our Make-monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 11.&lt;/strong&gt; The "Data Stores" section of our Make-monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Make's &lt;a href="https://www.make.com/en/help/tools/data-store" rel="noopener noreferrer"&gt;data stores&lt;/a&gt; allow scenarios to save data within the Make platform. We keep track of two data store metrics: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The percentage of data store slots used per organisation.&lt;/strong&gt; Make limits the number of data stores you can create according to your Make license. We created a Datadog monitor to notify us when we're about to run out of data stores so that we can either clean up unused data stores or request to increase this limit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The used capacity of each data store.&lt;/strong&gt; When you create a data store in Make, you need to specify its storage size capacity. We created a Datadog monitor to notify us when a data store is about to run out of space so that we can proactively increase its size and thereby prevent scenarios from running into errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We get the number of data stores, their current storage usage, and their storage capacities from the &lt;a href="https://www.make.com/en/api-documentation/data-stores-get" rel="noopener noreferrer"&gt;&lt;code&gt;/data-stores&lt;/code&gt;&lt;/a&gt; API route. We get the maximum number of data stores that each organisation is allowed to have from the &lt;code&gt;license.dslimit&lt;/code&gt; field of the &lt;a href="https://www.make.com/en/api-documentation/organizations-get" rel="noopener noreferrer"&gt;&lt;code&gt;/organizations&lt;/code&gt;&lt;/a&gt; API route.&lt;/p&gt;
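
&lt;p&gt;Combining the two routes, the values behind both monitors can be computed roughly like this (a sketch; the metric names are illustrative and the per-store size field names are assumptions about the route's response shape):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def data_store_gauges(data_stores, ds_limit):
    """Build gauge values from /data-stores results and license.dslimit.

    `data_stores` is the list returned by the /data-stores route;
    `ds_limit` is the organisation's license.dslimit value.
    """
    # Percentage of allowed data store slots currently in use.
    gauges = [("make.datastore.slots_used_pct", 100.0 * len(data_stores) / ds_limit)]
    # Fill level of each individual data store.
    for store in data_stores:
        fill = 100.0 * store["size"] / store["maxSize"]  # assumed field names
        gauges.append(("make.datastore.fill_pct", fill))
    return gauges
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;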

&lt;h3&gt;Scenario executions&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50zntpwaqsdi1h63n6fx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50zntpwaqsdi1h63n6fx.png" alt="Figure 12. The 'Scenario executions' section of our Make-monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 12.&lt;/strong&gt; The "Scenario executions" section of our Make-monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This section is my personal favourite part of our monitoring dashboard. It is arguably the most useful for investigating the cause of incidents.&lt;/p&gt;

&lt;p&gt;We use an undocumented API route, &lt;code&gt;/api/v2/admin/scenarios/logs&lt;/code&gt;, to get a list of recent scenario executions and their execution durations. We send these durations as a metric, &lt;code&gt;make.scenario.duration&lt;/code&gt;, to Datadog. We use Datadog tags to indicate their statuses, the scenario's ID and name, the team's ID and name, and the organisation's ID and name.&lt;/p&gt;
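
&lt;p&gt;The fetch itself is a plain authenticated GET against that route. Because the route is undocumented, the response key and fields shown here are assumptions based on our description above and may change without notice.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

import requests


def fetch_recent_scenario_logs():
    """Fetch recent scenario executions from the undocumented admin route."""
    response = requests.get(
        "https://your-instance.make.com/api/v2/admin/scenarios/logs",  # placeholder host
        headers={"Authorization": f"Token {os.environ['MAKE_API_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    # Each entry carries, among other fields, the execution duration that we
    # forward to Datadog as the make.scenario.duration distribution metric.
    return response.json().get("scenarioLogs", [])  # assumed response key
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;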

&lt;p&gt;Using this metric, we can generate various valuable graphs and tables. We created bar charts to show the number of warnings and errors raised per scenario and the number of executions each scenario had. We used &lt;a href="https://docs.datadoghq.com/dashboards/widgets/top_list/" rel="noopener noreferrer"&gt;"top list" widgets&lt;/a&gt; to show the most-executed scenarios, the slowest individual executions, and the scenarios with the highest total execution duration.&lt;/p&gt;

&lt;p&gt;We created another bar chart to visually represent how much time the workers spent on each scenario. During a recent incident, we could use this graph to see that almost all of the workers' time was spent on processing a single scenario (the purple bars in figure 13):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdip7fzh4enjd8tdp84jq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdip7fzh4enjd8tdp84jq.png" alt="Figure 13. Scenario durations during a recent incident indicating that a single scenario (purple bars) was taking up most of the processing time."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 13.&lt;/strong&gt; Scenario durations during a recent incident indicating that a single scenario (purple bars) was taking up most of the processing time.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In figure 14, you can see how our workers were affected during the time of the incident:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hj9x9quqo8gcnc3h9bf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hj9x9quqo8gcnc3h9bf.png" alt="Figure 14. Worker usage during the incident."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 14.&lt;/strong&gt; Worker usage during the incident.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Scenarios that don't have sequential processing enabled&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ud1haayx1chx0ld7it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ud1haayx1chx0ld7it.png" alt="Figure 15. Screenshot of the table of scenarios that don't have sequential processing enabled."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 15.&lt;/strong&gt; Screenshot of the table of scenarios that don't have sequential processing enabled.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is a new metric we recently started monitoring after we noticed that scenarios that don't have &lt;a href="https://www.make.com/en/help/scenarios/scenario-settings#1---sequential-processing" rel="noopener noreferrer"&gt;sequential processing&lt;/a&gt; enabled have the biggest potential to cause significant bottlenecks.&lt;/p&gt;

&lt;p&gt;Queued executions of scenarios that &lt;em&gt;do&lt;/em&gt; have sequential processing enabled are processed sequentially, i.e. one after the other. This means that, at most, the scenario will occupy one worker at a time. However, queued executions of scenarios that &lt;em&gt;don't&lt;/em&gt; have sequential processing enabled can be processed in any order, even in parallel. Thus, all of our workers could be occupied by a single scenario, leaving no capacity to process other critical scenarios.&lt;/p&gt;

&lt;p&gt;We have considered whether we could turn sequential processing on for all scenarios, but scenarios with sequential processing enabled cannot respond to webhooks. Another downside is that Make stops these scenarios when an error occurs: they cannot process any further executions until their incomplete executions are manually resolved.&lt;/p&gt;

&lt;h3&gt;Module analysis&lt;/h3&gt;

&lt;p&gt;The preceding metrics give us a good indication of the performance and usage of our scenarios. However, when many scenarios suddenly become slow to respond, these metrics don't help us identify which downstream module might be causing the sudden slowness.&lt;/p&gt;

&lt;p&gt;For this reason, we have started to track metrics about individual steps in scenario executions to see which specific module or operation might be slow or faulty.&lt;/p&gt;

&lt;p&gt;We experimented with different ways to display metrics about modules and operations, including using Datadog traces:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11745f3gjkivsdxwjter.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11745f3gjkivsdxwjter.png" alt="Figure 16. A Datadog trace visualisation of a scenario execution."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 16.&lt;/strong&gt; A Datadog trace visualisation of a scenario execution.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, we settled on building a Datadog dashboard according to the types of questions we might have about our Make modules and operations.&lt;/p&gt;

&lt;p&gt;For example, we created this section to help answer the question: "Which Make modules are the slowest?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vmvjlcv0a5oq55xvm6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vmvjlcv0a5oq55xvm6c.png" alt="Figure 17. The 'Which Make modules are the slowest?' section of our Make module monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 17.&lt;/strong&gt; The "Which Make modules are the slowest?" section of our Make module monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We added the following section to help us answer the question: "Which scenarios call which modules the most often?":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ln9h8ksfthz2dutft4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ln9h8ksfthz2dutft4m.png" alt="Figure 18. The 'Which scenarios call which modules the most often?' section of our Make module monitoring dashboard."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Figure 18.&lt;/strong&gt; The "Which scenarios call which modules the most often?" section of our Make module monitoring dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Before we created our monitoring setup for Make, it would sometimes take us hours to understand why our Make infrastructure was running slow. Now, we can detect faulty scenarios and restore normal operations within minutes.&lt;/p&gt;

&lt;p&gt;Our Datadog dashboards give us a comprehensive overview of our Make instance's current and past state, helping us to make further improvements to get the most out of our instance.&lt;/p&gt;

&lt;p&gt;How do you monitor your Make instance or other low-code tools? Let us know in the comments!&lt;/p&gt;

</description>
      <category>make</category>
      <category>integromat</category>
      <category>lowcode</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Modern data stack: scaling people and technology at FINN</title>
      <dc:creator>FINN Admin</dc:creator>
      <pubDate>Thu, 31 Aug 2023 15:22:08 +0000</pubDate>
      <link>https://forem.com/finnauto/modern-data-stack-scaling-people-and-technology-at-finn-3fme</link>
      <guid>https://forem.com/finnauto/modern-data-stack-scaling-people-and-technology-at-finn-3fme</guid>
      <description>&lt;p&gt;&lt;em&gt;by &lt;a href="https://www.linkedin.com/in/jorrit-p-13461674"&gt;Jorrit Posor&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/felix-kreitschmann"&gt;Felix Kreitschmann&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/kosara-g/"&gt;Kosara Golemshinska&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Building proficient data teams and providing them with the right technical tools is crucial for deriving analytical insights that drive your company forward. Understanding the skills, technologies, and roles at play forms an essential part of this process.&lt;/p&gt;

&lt;p&gt;This post offers an overview of these key components shaping a 'Modern Data Stack', which you can use to guide your hiring and strategic planning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context and Background
&lt;/h2&gt;

&lt;p&gt;The strategies shared in this post are drawn from a successful initiative at &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;, a German scale-up. The approach enabled the data organization to expand from a small team of three data engineers to 35 practitioners over the course of two years. Concurrently, the company quadrupled in size from 100 to 400 employees.&lt;/p&gt;

&lt;p&gt;Data is deeply integrated into &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;'s culture and processes: we utilize over 600 dashboards; 58% of the company uses Looker weekly, spending an average of 90 minutes per week on the platform. This level of engagement translates into nearly half a million queries every week.&lt;/p&gt;

&lt;p&gt;While the insights presented here can also benefit larger organizations, they are primarily based on the experiences and challenges encountered during &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;’s growth trajectory.&lt;/p&gt;

&lt;p&gt;This post focuses on a batch-based technology stack; streaming technologies are not considered.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Raw, Fragmented Data to Analytical Artifacts
&lt;/h2&gt;

&lt;p&gt;When working with business data, your goal is analytical artifacts: key performance indicators (KPIs), dashboards, insights, and a comprehensive understanding of your business—all derived from your data. You most likely already possess raw source data in the SaaS tools you use (like HubSpot or Airtable) or in your databases. &lt;/p&gt;

&lt;p&gt;So, the pertinent question is: what transforms your source data (shown under '&lt;em&gt;Data Sources&lt;/em&gt;' in the top part of &lt;em&gt;image 1&lt;/em&gt;, in yellow) into these insightful analytical artifacts (shown under '&lt;em&gt;4. Analytical Artefacts&lt;/em&gt;' in the bottom part of &lt;em&gt;image 1&lt;/em&gt;, in purple)?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YwRenv2f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2yyb8hx8crbff0h44m2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YwRenv2f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2yyb8hx8crbff0h44m2y.png" alt="Image 1" width="800" height="813"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image 1. Overview of an analytics platform. What gets you from data sources to analytics artifacts?&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Extract, Load, Transform, Analyze: The Pathway to Analytical Artifacts
&lt;/h2&gt;

&lt;p&gt;Transforming your source data (at the top of &lt;em&gt;image 2&lt;/em&gt;, in yellow) into analytical artifacts (at the bottom of &lt;em&gt;image 2&lt;/em&gt;, in purple) involves a sequence of technical steps (in the middle of &lt;em&gt;image 2&lt;/em&gt;, in blue, numbered 1-3).&lt;/p&gt;

&lt;p&gt;Data is systematically extracted from various sources and maneuvered through transformations (changes) and technical components. It eventually emerges as part of the analytical artifacts. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CnBkfkw7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tb179sb7t8fob8nicnb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CnBkfkw7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tb179sb7t8fob8nicnb1.png" alt="Image 2" width="800" height="899"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image 2. Overview of the technical steps of the dataflow through an analytics platform, moving from source data to analytical artifacts.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Data professionals often describe this as "data flowing" from sources through data pipelines into analytical artifacts. The arrows in the diagram signify this dataflow, highlighting the key operations involved (in blue):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extract &amp;amp; Load Data&lt;/strong&gt;: This stage involves copying data from data sources to a data warehouse.&lt;br&gt;
The data is systematically duplicated from every relevant source, copying it table-by-table to a data warehouse (such as BigQuery). Technologies utilized include ingestion providers (SaaS tools) and custom-built data connectors. These tools extract and transfer data from various sources to a data warehouse. This process follows a specific schedule, such as loading new batches of data into the data warehouse every hour.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Transformations&lt;/strong&gt;: This phase involves modifying and integrating tables to generate new tables optimized for analytical use.&lt;br&gt;
Consider this example: you want to understand the purchasing behavior of customers aged between 20 and 30 in your online shop. This means you'll need to join product, customer, and transaction data to create a unified table for analytics. Such data preparation tasks (e.g., joining fragmented data) are essentially what "Data Transformations" entail; see the sketch after this list.&lt;br&gt;
At &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;, technologies utilized in this phase include &lt;a href="https://cloud.google.com/bigquery"&gt;BigQuery&lt;/a&gt; as a data warehouse, &lt;a href="https://www.getdbt.com/"&gt;dbt&lt;/a&gt; for data transformation, and a combination of &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; and &lt;a href="https://www.datafold.com/"&gt;Datafold&lt;/a&gt; for quality assurance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Exposure to BI Tools&lt;/strong&gt;: This stage involves making the optimized tables from Step 2 accessible company-wide.&lt;br&gt;
Many users across your organization will want access to these tables, and they can obtain it by connecting their preferred tools (such as spreadsheets, business intelligence (BI) tools, or code) to the tables in the data warehouse. Connecting a data warehouse to a tool typically requires a few clicks, although depending on the context, it can sometimes involve more configuration or even coding.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creation of Analytical Artifacts&lt;/strong&gt;: The final step involves the creation of analytical artifacts within these BI tools (or code).&lt;br&gt;
This work is typically done by BI users, analysts, and data scientists. These professionals take the accessible tables and transform them into actionable insights. They may create dashboards for monitoring business processes, produce KPIs and reports for strategic decisions, generate charts or plots for visual understanding, or even construct advanced predictive models for future planning.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
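
&lt;p&gt;To make step 2 more concrete, here is a minimal sketch of such a transformation, run against BigQuery from Python. The dataset, table, and column names are invented for illustration; in practice this kind of logic would typically live in a dbt model rather than a one-off script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from google.cloud import bigquery

# Minimal sketch of a data transformation (step 2). All dataset, table,
# and column names below are illustrative.
client = bigquery.Client()

TRANSFORMATION = """
CREATE OR REPLACE TABLE analytics.purchases_age_20_30 AS
SELECT
  c.customer_id,
  p.product_name,
  t.amount,
  t.purchased_at
FROM raw.transactions AS t
JOIN raw.customers AS c ON c.customer_id = t.customer_id
JOIN raw.products AS p ON p.product_id = t.product_id
WHERE c.age BETWEEN 20 AND 30;
"""

# Run the transformation; the resulting table is what BI tools
# connect to in step 3.
client.query(TRANSFORMATION).result()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;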

&lt;p&gt;This process not only unveils valuable business insights hidden within the data but also delivers information in an easy-to-understand format that supports informed decision-making across various levels of the organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Essential Hard Skills Along the Dataflow
&lt;/h2&gt;

&lt;p&gt;Let's now have a look at the hard skills that are required across our analytics platform.&lt;/p&gt;

&lt;p&gt;Different types of work are required along the dataflow, and hence the hard skills change depending on the stage of the dataflow. These skills, along the yellow vertical lines on the left side of &lt;em&gt;image 3&lt;/em&gt;, are crucial to managing the different stages of the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N8X2kAUf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i13u910d2edf3qfutvwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N8X2kAUf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i13u910d2edf3qfutvwz.png" alt="Image 3" width="800" height="899"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image 3. Overview of the skills that are required to work with data in the different parts of the analytics platform.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s take a closer look at what the different skills mean in the context of our analytics platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software Engineering&lt;/strong&gt;: Implementing data connectors in-house, which extract data from sources, requires writing software.&lt;/p&gt;

&lt;p&gt;This skill is particularly crucial for the initial step of making data available in the data warehouse. It also requires cloud and infrastructure knowledge, as the software must operate in the cloud, follow a schedule, and consistently extract/load new data.&lt;/p&gt;
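
&lt;p&gt;For illustration, here is a minimal sketch of such an in-house connector, assuming a REST-style source; the endpoint and destination table are invented, and a real connector would add pagination, schema handling, and error recovery:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests
from google.cloud import bigquery

SOURCE_URL = "https://api.example-saas.com/v1/customers"  # invented endpoint
DESTINATION_TABLE = "raw.customers"                       # invented table


def extract_and_load():
    # Extract: fetch one batch of records from the source API.
    records = requests.get(SOURCE_URL, timeout=30).json()

    # Load: append the batch to the data warehouse, table by table.
    client = bigquery.Client()
    errors = client.insert_rows_json(DESTINATION_TABLE, records)
    if errors:
        raise RuntimeError(f"Load failed: {errors}")


if __name__ == "__main__":
    # In production this would run in the cloud on a schedule (e.g. hourly).
    extract_and_load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;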

&lt;p&gt;&lt;strong&gt;Analytics Engineering&lt;/strong&gt;: Predominantly SQL-focused when working with dbt, this skill involves transforming raw data from sources into tables useful for analytics—a process labeled 'analytics engineering'.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;, this primarily takes place within the data warehouse (BigQuery) using a data transformation tool (dbt). From a technical standpoint, raw tables are cleaned, combined, filtered, and aggregated to create many new tables for analytics. A common practice to make tables analytics-ready is "&lt;a href="https://www.kimballgroup.com/data-warehouse-business-intelligence-resources/kimball-techniques/dimensional-modeling-techniques/"&gt;dimensional modeling&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analytics &amp;amp; Data Science&lt;/strong&gt;: This refers to utilizing transformed tables and extracting insights from them.&lt;/p&gt;

&lt;p&gt;Analytical artifacts created at this stage include dashboards, KPIs, plots, forecasts, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business Understanding&lt;/strong&gt;: As data is the product of a business process, understanding these processes is crucial.&lt;/p&gt;

&lt;p&gt;Business understanding is required from the data transformation phase right through to the creation of analytical artifacts. Without business understanding, effectively transforming raw data into insights would be impossible.&lt;/p&gt;

&lt;p&gt;For example, accurately counting the number of converted leads requires understanding what qualifies as a 'converted lead' (considering fraud cases, credit checks, and so on).&lt;/p&gt;

&lt;h2&gt;
  
  
  Navigating the Implementation and Scaling of Data Teams
&lt;/h2&gt;

&lt;p&gt;We've explored the process of transforming source data into analytical artifacts. Now, we face the most critical question: how can you efficiently integrate this mix of people and technology?&lt;/p&gt;

&lt;p&gt;Addressing this question involves various technical and non-technical dimensions, which, if not properly managed, can lead to costly decisions, inefficient data teams, and delays in delivering insights.&lt;/p&gt;

&lt;p&gt;Regrettably, the answers to these technical and non-technical questions change as your company grows. The journey of onboarding your initial five data practitioners differs from the leap from 30 to 35 practitioners.&lt;/p&gt;

&lt;h2&gt;
  
  
  How FINN Navigates the Complexities of Scaling
&lt;/h2&gt;

&lt;p&gt;Understanding your organization's specific needs and aligning those with your team's structure and technical architecture is critical.&lt;/p&gt;

&lt;p&gt;The complexities inherent in technical and non-technical aspects aren't static; they evolve as your company grows. It's therefore important to remember that the process is dynamic: transitioning from a small team of data practitioners to a larger one, or even adding just a few more members, can change your operational dynamics.&lt;/p&gt;

&lt;p&gt;Let's delve into the experience at &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;. As we navigated the process of building and scaling an analytics platform across multiple data teams, we discovered that our journey encompassed distinct scaling phases. Each phase introduced its own challenges and requirements, necessitating different approaches and shifts in team dynamics and responsibilities. &lt;em&gt;Image 4&lt;/em&gt; highlights how the responsibilities of these roles keep shifting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f4GWr5z9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3jwx9uz89mjhgr8171a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f4GWr5z9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3jwx9uz89mjhgr8171a.png" alt="Image 4" width="800" height="899"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image 4. Overview of the roles required to work with data in the different parts of the analytics platform. The responsibilities of roles change while growing data teams. Initially, a small platform team has to cover data tasks end-to-end, from raw data to analytical artifacts. While growing, the roles can specialize in specific areas of the analytics platform.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let's explore &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;’s scaling phases in more detail.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setting up the Analytics Platform Team&lt;/strong&gt;: This initial stage is focused on establishing the core analytics platform team, laying down the technical infrastructure, and delivering initial insights to stakeholders. The platform team works end-to-end, meaning it picks up raw data and delivers analytical artifacts like dashboards to stakeholders.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Onboarding Other Data Teams&lt;/strong&gt;: The second phase entails the integration of additional data roles (like analysts, analytics engineers, and data scientists), referred to as "data teams". They deliver insights using the analytics platform but don't form part of the platform team. Their introduction shifts the dynamics as they take over previously built analytical artifacts and BI-related tasks and share data transformation responsibilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Specialization&lt;/strong&gt;: In the third phase, the platform team focuses on platform improvement and enhancing the data teams' productivity. Other data teams, meanwhile, specialize in specific business areas and data transformations. The extent of this specialization is highly context-dependent and aligns with the company's unique requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Navigating Conway's Law&lt;/strong&gt;: When scaling, a company may find itself in a phase filled with considerations around the balance of centralization vs decentralization, both from a technical setup perspective and from a team structure perspective. Conway's Law observes that system design tends to mirror an organization's communication structure; informed by this, a company may deliberately align its teams with the desired technical architecture.&lt;br&gt;
For example, this could mean centralizing communication patterns when stability is needed in centralized, shared parts of the data pipeline. Keep in mind that each step towards centralization may secure some benefits at the cost of potentially surrendering those derived from decentralization, and vice versa.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thus, navigating these trade-offs to find the sweet spot can be an intricate journey.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Understanding the necessary skills, roles, and technologies is crucial in the dynamic and complex journey of building and scaling effective data teams. The transformation of raw data into insightful analytical artifacts requires a broad set of hard skills and deep business understanding.&lt;/p&gt;

&lt;p&gt;The journey through different scaling phases—initial setup, onboarding, role specialization, and Conway's Law—will demand adaptability and resilience.&lt;/p&gt;

&lt;p&gt;This blog post provides a foundational understanding of these concepts, and we hope it serves as a valuable guide for your scaling journey. For more deep dives into these topics, consider subscribing to our blog. Future posts will dive deeper into "Scaling Phases".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jposor.substack.com/"&gt;Subscribe now&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy scaling! Thanks for reading!&lt;/p&gt;

&lt;p&gt;Thanks, &lt;a href="https://www.linkedin.com/in/kosara-g/"&gt;Kosara Golemshinska&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/meyns/"&gt;Chris Meyns&lt;/a&gt;, for recommendations and for reviewing drafts!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://jposor.substack.com/p/modern-data-stack-scaling-people"&gt;Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Appendix: Pivotal Dimensions
&lt;/h2&gt;

&lt;p&gt;This is an overview of pivotal dimensions when scaling data teams. The primary aim is to provide a broad picture, while the secondary objective is to encourage you to subscribe to this blog. Doing so will ensure that future in-depth discussions on these topics (for example, 'Modern Data Stack: Deep Dive into Pivotal Dimensions') land directly in your inbox. 😊&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jposor.substack.com/"&gt;Subscribe now&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase Goals&lt;/strong&gt;: What goals should be set for each data team to generate business value in a given scaling phase?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technology&lt;/strong&gt;: Does your technology empower data practitioners, or does it hinder them due to improper usage patterns, knowledge gaps, lack of support, or technical debt?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(De)centralization Trade-offs&lt;/strong&gt;: What are the implications of fundamental decisions regarding "decentralized vs centralized" data teams in your organizational structure and technology architecture?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Debt&lt;/strong&gt;: How can you mitigate the creeping technical debt that could hamper and gradually slow progress?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Policy Automation&lt;/strong&gt;: How can you implement policies (such as programming standards) with decentralized data practitioners?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical Skills&lt;/strong&gt;: What skills are necessary for each scaling phase?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Communication Processes&lt;/strong&gt;: How many people, on average, need to be involved to deliver insight to their stakeholders?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge Transfer&lt;/strong&gt;: Are knowledge silos emerging, creating bottlenecks? How can you efficiently distribute required business knowledge, especially when business processes evolve?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support Culture&lt;/strong&gt;: How and when should you foster a support culture? Are data practitioners blocked due to a lack of information that others may have?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Alignment&lt;/strong&gt;: How can you manage effort peaks? Can you flexibly reassign data practitioners between business units?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Engineering &amp;lt;&amp;gt; Data Alignment&lt;/strong&gt;: How can you guarantee that changes to data sources, business processes, or technical aspects (such as a schema) don't disrupt your downstream analytical artifacts?&lt;/p&gt;

</description>
      <category>data</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Running Steampipe on AWS Fargate</title>
      <dc:creator>Frederik Petersen</dc:creator>
      <pubDate>Wed, 21 Jun 2023 09:33:06 +0000</pubDate>
      <link>https://forem.com/finnauto/running-steampipe-on-aws-fargate-51ci</link>
      <guid>https://forem.com/finnauto/running-steampipe-on-aws-fargate-51ci</guid>
      <description>&lt;p&gt;Are you using Steampipe to query your cloud services, such as an Amazon Web Services (AWS) environment? Are you using the results for reporting, security checks, governance or other important tasks in your organization? And are you tired of having to run those queries from your local machine or an AWS Elastic Compute Cloud (EC2) instance?&lt;/p&gt;

&lt;p&gt;In this blog post we'll delve into running Steampipe directly from inside AWS itself, without setting up an EC2 instance. Instead, we'll have Steampipe run on an AWS ECS cluster as a scheduled AWS Fargate task. This allows for periodic querying while keeping things serverless, removing the need to maintain an underlying Virtual Machine (VM). We also cover how to make the Steampipe query work across all of our organization's accounts, so that you can get the full picture in one step.&lt;/p&gt;

&lt;p&gt;We are documenting our struggles and solution in this blog post with the hope of assisting our readers. There isn't much publicly available information on running cross-account Steampipe queries from AWS Fargate, and we hope to change this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our use case
&lt;/h2&gt;

&lt;p&gt;In the Tooling and Security team at &lt;a href="https://www.finn.com/" rel="noopener noreferrer"&gt;FINN&lt;/a&gt;, we are currently focusing on cloud governance and related tasks. One pretty common thing to do, when growing as a company and accumulating more and more resources in AWS, is establishing a tagging policy. This means that all resources should be tagged in a certain way to allow for better budget analysis, cost optimization, and also, for example, the ability to enforce backups only on production resources. Example tags could include the resource's department, team, environment or service. It's also possible, and common, to restrict the values that are allowed. For example, tags with the key &lt;code&gt;environment&lt;/code&gt; should be either &lt;code&gt;production&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt; or &lt;code&gt;development&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;One problem with introducing a tagging policy is that there can be a ton of existing resources that don't follow these rules. Because how could they, if those rules are just being introduced? It might be possible to partly automate tagging existing resources, but depending on the tags there is probably also some manual work involved, especially if development teams are in charge of their own Infrastructure-as-Code (IaC) projects. To see how adoption of the tagging policy is going across the whole organization, we want to provide a dashboard. This allows us and the developers to monitor progress and make sure that the number of active resources that don't follow the tagging policy shrinks over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstwp67xw2ez39l710u08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstwp67xw2ez39l710u08.png" alt="Tagging Dashboard"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Screenshot of Tagging Dashboard&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To build a dashboard like this, we need three things: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;a way to obtain the numbers on how many active resources aren’t following the tagging policy&lt;/li&gt;
&lt;li&gt;a database to store those numbers&lt;/li&gt;
&lt;li&gt;a frontend that shows these relevant numbers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this blog article we'll focus almost exclusively on (1), namely how to obtain those numbers. But we'll quickly outline the overall approach right now.&lt;/p&gt;

&lt;p&gt;We knew from the beginning that we could obtain the data we need using a Steampipe query. In essence, it's just one command that fetches the data for all AWS CloudFormation stacks across all accounts. We then push this data into a PostgreSQL database and visualize it in a Retool app. This approach is pretty straightforward and wouldn't really make for a blog post on its own, but the devil's in the details.&lt;/p&gt;
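
&lt;p&gt;As a minimal sketch of this flow, assuming Steampipe is installed locally and a &lt;code&gt;tag_compliance&lt;/code&gt; table already exists in PostgreSQL (the query, DSN, and table names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import subprocess

import psycopg2

# Illustrative query: pull the environment tag of all CloudFormation stacks.
QUERY = (
    "SELECT name, id, tags -&amp;gt;&amp;gt; 'environment' AS environment_tag "
    "FROM aws_cloudformation_stack;"
)

# Run Steampipe locally and capture the result set as JSON.
result = subprocess.run(
    ["steampipe", "query", "--output", "json", QUERY],
    capture_output=True, text=True, check=True,
)
data = json.loads(result.stdout)
rows = data["rows"] if isinstance(data, dict) else data  # shape varies by version

# Feed the numbers into PostgreSQL (placeholder DSN and table).
conn = psycopg2.connect("postgresql://user:password@localhost/tagging")
with conn, conn.cursor() as cur:
    for row in rows:
        cur.execute(
            "INSERT INTO tag_compliance (stack_name, stack_id, environment_tag) "
            "VALUES (%s, %s, %s)",
            (row["name"], row["id"], row["environment_tag"]),
        )
conn.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;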

&lt;p&gt;It was pretty easy to run the query locally, feed the data into the database, and then visualize the data. The question that came up was: how can we keep the data fresh? It doesn't need to be live, but it should at least be refreshed hourly, or a few times a day. We'll look at some possible ways to do this in the next section.&lt;/p&gt;
&lt;h2&gt;
  
  
  Alternatives
&lt;/h2&gt;

&lt;p&gt;As is often the case in engineering, there are many possible ways to tackle this problem. &lt;/p&gt;
&lt;h3&gt;
  
  
  Steampipe on an EC2 instance
&lt;/h3&gt;

&lt;p&gt;The first one is running Steampipe on a designated EC2 instance. The advantage is that this is well documented and should be simple to set up. The disadvantage is that you now have one more VM running, and at FINN we are trying to keep our cloud setup as serverless as possible.&lt;/p&gt;
&lt;h3&gt;
  
  
  Steampipe on AWS Lambda
&lt;/h3&gt;

&lt;p&gt;It might also be possible to run Steampipe on AWS Lambda. We haven't tried this, and there might be some issues and additional hurdles to overcome. For example, the image needs to implement the Lambda Runtime API. Also, if at some point we want to convert the service from a scheduled task to a long-running service, then Lambda is not the right choice. The one-thread limit might also pose problems.&lt;/p&gt;
&lt;h3&gt;
  
  
  Steampipe on AWS ECS with Fargate
&lt;/h3&gt;

&lt;p&gt;Similar to running Steampipe on AWS Lambda, there is the option to run it on AWS ECS with Fargate. ECS allows you to run any kind of Docker image and doesn't impose additional requirements the way Lambda does. And with Fargate you still don't have to spin up a VM: thanks to the serverless design, you don't need to think about that part of the architecture. For these reasons we decided to go with this approach, and we'll cover the architecture in the next section.&lt;/p&gt;
&lt;h3&gt;
  
  
  Bonus: Steampipe Cloud
&lt;/h3&gt;

&lt;p&gt;Also worth mentioning: &lt;a href="https://cloud.steampipe.io" rel="noopener noreferrer"&gt;Steampipe Cloud&lt;/a&gt; is a SaaS option that allows running queries in the cloud. It might be an option for you as well. We haven't tried it (yet).&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dozbso64z8gp4b2mg9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dozbso64z8gp4b2mg9b.png" alt="Steampipe Fargate architecture"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Overview of Steampipe Fargate architecture&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let's go through the main parts of the architecture as depicted in the diagram. You can see a number of AWS accounts that make up our organization. In one of the accounts there is a Virtual Private Cloud (VPC) that contains a private subnet. For security reasons, the ECS cluster where our tasks run is in that private subnet: there is no need for our cluster to be reachable from the outside. You might also notice that there is no load balancer. Our task doesn't have to accept incoming requests, since it pushes the information out on a schedule.&lt;/p&gt;

&lt;p&gt;The task execution is triggered by CloudWatch Events. We will later see how this can be trivially set up through a Cloud Development Kit (CDK) pattern class. When the task runs, the most interesting part of the process happens. The task role has the permission to assume a Steampipe query role in all other accounts, and also has all the necessary permissions in the account we are querying from. This allows the task to read data from across all accounts, as needed by the Steampipe query that the task runs. &lt;/p&gt;

&lt;p&gt;In the end, the data is sent to a database. In our specific case the database lives in a different AWS account, but it could also be a database managed elsewhere, so we simplified it in the diagram. A dashboard can then read the data from the database and show it to the user.&lt;/p&gt;
&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;During the implementation of this project we went through a huge number of redeploys to debug and fix problems that came up on the live system. Describing these challenges is thus one of the main motivations behind writing this blog post. We hope that reading about them here will save you some time and nerves, instead of having to experience them one by one during implementation.&lt;/p&gt;
&lt;h3&gt;
  
  
  Query All The Things
&lt;/h3&gt;

&lt;p&gt;Since most larger (and also many smaller) organizations are using multiple AWS accounts, we want to be able to query all accounts at once. AWS even &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/benefits-of-using-multiple-aws-accounts.html" rel="noopener noreferrer"&gt;recommends using multiple accounts&lt;/a&gt;, so this requirement is very relevant for most AWS users. But using multiple accounts makes things more complicated when it comes to the Steampipe setup.&lt;/p&gt;

&lt;p&gt;We recommend first trying to get Steampipe querying running locally. The Steampipe documentation has a designated page on &lt;a href="https://steampipe.io/docs/guides/aws-orgs#local-authentication-with-a-cross-account-role" rel="noopener noreferrer"&gt;using Steampipe CLI with AWS Organizations&lt;/a&gt;. There are some helper scripts that help you bootstrap the &lt;code&gt;~/.aws/config&lt;/code&gt; and &lt;code&gt;~/.steampipe/config/aws.spc&lt;/code&gt; configuration files. We will also revisit those scripts later, because they can also help us with the setup on Fargate, when we modify them a bit. For the local setup we can pretty much use them as-is. We might only need to remove the management account from the list of generated config entries.&lt;/p&gt;

&lt;p&gt;To enable cross-account queries from your local machine, you need to do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html" rel="noopener noreferrer"&gt;cross-account role&lt;/a&gt;. We deploy it to all our accounts via &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html" rel="noopener noreferrer"&gt;CloudFormation StackSets&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Make sure that the role you created has policies attached that grant all the required access needed for your Steampipe queries.&lt;/li&gt;
&lt;li&gt;Log in to AWS CLI with a role that is allowed to assume the cross-account role. We use SSO, so it's: &lt;code&gt;aws sso login --profile &amp;lt;yourprofile&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;generate_config_for_cross_account_roles.sh&lt;/code&gt; from the Steampipe samples repository as documented and attach the generated AWS configuration entries to your config. You might need to remove the generated entries for the management account that you are running the queries from.&lt;/li&gt;
&lt;li&gt;Run a query with &lt;code&gt;steampipe query&lt;/code&gt; and make sure that there are results for all your accounts when applicable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So far so good. This is not simple, but also not very hard to figure out, since there is ample documentation available. In the next section we'll up the ante a bit, because running this cross-account setup on AWS Fargate does not work out of the box.&lt;/p&gt;
&lt;h3&gt;
  
  
  Cross-account queries on ECS
&lt;/h3&gt;

&lt;p&gt;One of the biggest challenges was trying to understand how Steampipe can leverage the cross-account role from an ECS/Fargate task. Since the same principles apply for both EC2-run and Fargate-run ECS tasks, we will just refer to "ECS task" from now on.&lt;/p&gt;

&lt;p&gt;Initially, we just assumed that the "EC2 Instance" documentation would also work for ECS, and that we could therefore use the &lt;code&gt;IMDS&lt;/code&gt; parameter (for the Instance Metadata Service) in the helper script. That assumption was wrong. With that parameter, the generated entries in the AWS config file look like the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[profile sp_fooli-sandbox]
role_arn = arn:aws:iam::111111111111:role/security-audit
credential_source = Ec2InstanceMetadata
role_session_name = steampipe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But we need a different &lt;code&gt;credential_source&lt;/code&gt;: &lt;code&gt;EcsContainer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So we modified the helper script and created a custom version for our project that always uses this credential source and also handles the management account correctly. An excerpt of the helper script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[...]
if [ $ACCOUNT_NAME == "managementaccountname" ] ; then
  cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt;$SP_CONFIG_FILE
  connection "aws_${SP_NAME}" {
    plugin  = "aws"
    profile = "default"
    regions = ${ALL_REGIONS}
  }

EOF
  continue
fi

# Append an entry to the AWS Creds file
cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt;$AWS_CONFIG_FILE

[profile sp_${ACCOUNT_NAME}]
role_arn = arn:aws:iam::${ACCOUNT_ID}:role/${AUDITROLE}
credential_source = EcsContainer
role_session_name = steampipe
EOF
[...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To give something back to Steampipe, we have created two pull requests on GitHub to: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(a) extend the script so it can output the correct configuration out of the box, and &lt;/li&gt;
&lt;li&gt;(b) modify the documentation of the cross-account setup script to include information about running the script in ECS and fixing some minor inconsistencies in the existing sections. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find the PRs &lt;a href="https://github.com/turbot/steampipe-docs/pull/142" rel="noopener noreferrer"&gt;for the documentation here&lt;/a&gt; and &lt;a href="https://github.com/turbot/steampipe-samples/pull/15" rel="noopener noreferrer"&gt;for the script itself here&lt;/a&gt;. If you are lucky, Steampipe has already merged these PRs by the time you read this, and your life just got a bit easier :) One thing you will most likely still need to do is remove or adjust the entries for the main account that you are running Steampipe from. In our case, we slightly adjusted the script to skip config generation for that account based on its name.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running it periodically
&lt;/h3&gt;

&lt;p&gt;Now we've seen a lot of configuration magic. But how can we actually run and deploy Steampipe with Fargate on ECS? &lt;/p&gt;

&lt;h4&gt;
  
  
  Docker image
&lt;/h4&gt;

&lt;p&gt;Since we decided to go with ECS, we need to build a Docker image that is run periodically and executes the query. Here is the Docker image, which supports both the AMD64 and ARM64 architectures:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ghcr.io/turbot/steampipe:0.20.2

ARG TARGETPLATFORM

# Setup prerequisites (as root)
USER root:0
RUN apt-get update -y \
 &amp;amp;&amp;amp; apt-get install -y git wget curl unzip

RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then ARCHITECTURE=amd64; elif [ "$TARGETPLATFORM" = "linux/arm/v7" ]; then ARCHITECTURE=arm; elif [ "$TARGETPLATFORM" = "linux/arm64" ]; then ARCHITECTURE=aarch64; else ARCHITECTURE=amd64; fi &amp;amp;&amp;amp; \
     if [ "$ARCHITECTURE" = "amd64" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"; else curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"; fi &amp;amp;&amp;amp; \
     unzip -qq awscliv2.zip &amp;amp;&amp;amp; \
     ./aws/install &amp;amp;&amp;amp; \
     /usr/local/bin/aws --version

# Switch to the steampipe user. The Steampipe plugins are installed at
# container start by run_query.sh (see below).
USER steampipe:0

# Create workspace and copy cross-account util script
WORKDIR /workspace
COPY generate_config_for_cross_account_roles.sh .

COPY run_query.sh .

ENTRYPOINT [ "/bin/bash" ]

CMD ["./run_query.sh"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The interesting parts about it are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI v2 is installed based on the chosen architecture.&lt;/li&gt;
&lt;li&gt;It took a while to figure out that we need to include &lt;code&gt;ARG TARGETPLATFORM&lt;/code&gt; so that docker buildx actually passes that argument in the build.&lt;/li&gt;
&lt;li&gt;Both the custom cross-account setup script and a &lt;code&gt;run_query.sh&lt;/code&gt; script are copied over.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;run_query.sh&lt;/code&gt; script performs some initialization, if needed, and runs the query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -e

# Preparation if needed
if [ ! -f ~/.aws/config ]
then
    echo "AWS config wasn't initialized yet. Setting up"
    mkdir -p ~/.aws
    ./generate_config_for_cross_account_roles.sh SteampipeQueryRole ~/.aws/config
    echo "Cross Account Setup complete. Installing steampipe AWS plugin"
    steampipe plugin install steampipe aws
    echo "Install steampipe aws plugin"
else
    echo "AWS config has already been initialized. Skipping setup"
fi

echo "Running query: ${STEAMPIPE_QUERY}"
steampipe query --output json "${STEAMPIPE_QUERY}" &amp;gt; result.json
curl -X POST -H "Content-Type: application/json" -d @result.json "${TARGET_WEBHOOK}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the end, the data is pushed to a webhook that we can define via an environment variable (or AWS secret). This part of the implementation is currently not very advanced, and there is some potential to extend and improve it. We cover possible improvements in more detail further down.&lt;/p&gt;

&lt;h4&gt;
  
  
  CDK
&lt;/h4&gt;

&lt;p&gt;How about deploying it? At FINN we are using &lt;a href="https://aws.amazon.com/cdk/" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt; as our go-to IaC tool. CDK actually provides a very nice pattern called &lt;code&gt;ScheduledFargateTask&lt;/code&gt; that allows us to quickly define a Fargate task that is run on a schedule defined by us.&lt;/p&gt;

&lt;p&gt;This is a slightly simplified stack implementation. Don't worry, we'll go through the important parts resource by resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class SteampipeFargateStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -&amp;gt; None:
        super().__init__(scope, construct_id, **kwargs)
        service_name = "steampipe-fargate"

        repository: ecr.IRepository = ecr.Repository(
            self, f"{service_name}-ecr-repo", repository_name=service_name
        )

        cluster = ecs.Cluster(
            self,
            f"{service_name}-cluster",
            cluster_name=f"{service_name}-cluster",
            container_insights=True,
        )

        task_role = iam.Role(
            self,
            f"{service_name}-task-role",
            role_name=f"{service_name}-task-role",
            description="Allows read access to all accounts for querying via steampipe.",
            assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
            inline_policies={
                "AllowSteamPipeAccess": iam.PolicyDocument(
                    statements=[
                        iam.PolicyStatement(
                            effect=iam.Effect.ALLOW,
                            actions=["sts:AssumeRole"],
                            resources=["arn:aws:iam::*:role/SteampipeQueryRole"],
                        ),
                        iam.PolicyStatement(
                            effect=iam.Effect.ALLOW,
                            actions=[
                              ...
                            ],
                            resources=["*"],
                        ),
                    ]
                )
            },
            managed_policies=[
                iam.ManagedPolicy.from_aws_managed_policy_name(
                    "job-function/ViewOnlyAccess"
                )
            ],
        )

        task_definition: ecs.FargateTaskDefinition = ecs.FargateTaskDefinition(
            self,
            f"{service_name}-taskdefinition",
            family="task",
            cpu=512,
            memory_limit_mib=2048,
            task_role=task_role,
            runtime_platform=ecs.RuntimePlatform(
                operating_system_family=ecs.OperatingSystemFamily.LINUX,
                cpu_architecture=ecs.CpuArchitecture.ARM64,
            ),
        )

        task_definition.add_container(
            f"{service_name}",
            image=ecs.ContainerImage.from_ecr_repository(
                repository=repository, tag=VERSION  # VERSION: image tag defined elsewhere in the project
            ),
            cpu=512,
            memory_limit_mib=2048,
            logging=ecs.LogDrivers.aws_logs(
                stream_prefix=service_name, log_retention=logs.RetentionDays.ONE_MONTH
            ),
            environment={
                "STEAMPIPE_QUERY": "SELECT name, id, last_updated_time, tags, tags -&amp;gt;&amp;gt; 'environment' as environment_tag_value, tags -&amp;gt;&amp;gt; 'department' as department_tag_value, tags -&amp;gt;&amp;gt; 'service' as service_tag_value FROM aws_cloudformation_stack WHERE extract(day from current_timestamp - last_updated_time)::int &amp;lt; 90;",
            },
        )

        ecs_patterns.ScheduledFargateTask(
            self,
            f"{service_name}-scheduled-fargate-task",
            cluster=cluster,
            platform_version=ecs.FargatePlatformVersion.LATEST,
            scheduled_fargate_task_definition_options=ecs_patterns.ScheduledFargateTaskDefinitionOptions(
                task_definition=task_definition
            ),
            schedule=appscaling.Schedule.cron(minute="50"),
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first two resources are a bit boring. The repository stores our Docker image and the ECS cluster is needed to logically contain our scheduled tasks. We don't add any EC2 instances to the cluster though, as we are only running the task serverless via Fargate.&lt;/p&gt;

&lt;p&gt;The task role is where things get interesting. As declared by &lt;code&gt;assumed_by&lt;/code&gt;, this is the role that our task can assume. The task role has some policies attached, including one that allows assuming the cross-account role called &lt;code&gt;SteampipeQueryRole&lt;/code&gt;. The rest of the permissions are needed to also allow Steampipe to query information for the management account.&lt;/p&gt;

&lt;p&gt;Next we need a task definition. Here we specify the CPU and memory limits. Specifying those limits is very important when working with Fargate, as they define what kind of VM is used in the background. When we went with smaller limits, the task actually ran into the memory limit and crashed, so 2048 MB seems to be a good limit for our environment. We also decided to go with the ARM64 architecture, because the cost is a bit lower, and why not? In the next step, we add our only container to the task definition. Here we can also define environment variables and secrets.&lt;/p&gt;

&lt;p&gt;The last part of the stack is the most powerful. It's a pattern that creates all the necessary resources to run a scheduled Fargate task. Here we are just referencing some of the resources defined above and setting the schedule the task runs on.&lt;/p&gt;

&lt;p&gt;Are you wondering how we run CDK? Going into detail would go beyond the scope of this article, but to summarize: we are using GitHub Actions. We connect to AWS using &lt;a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services" rel="noopener noreferrer"&gt;OpenID Connect&lt;/a&gt;, so that we don't need to store any (secret) access keys in the GitHub repository settings. We then run &lt;code&gt;cdk diff&lt;/code&gt; for PRs and &lt;code&gt;cdk deploy&lt;/code&gt; on the protected main branch, assuming different roles for PRs (read-only) and the main branch (write permissions).&lt;/p&gt;

&lt;h3&gt;
  
  
  Debugging Steampipe on Fargate
&lt;/h3&gt;

&lt;p&gt;While running into the issues described above, we struggled to gather the information that would help us solve them. In some cases the Steampipe query in the Fargate task just ran for 8 minutes and then timed out, without any output. In this section we want to share a few tricks for finding out what's going on.&lt;/p&gt;

&lt;p&gt;The Steampipe CLI didn't allow us to activate additional logging to the console, at least as far as we could tell. But it does write log files that we can use to find out more. So there is a simple first thing you can do if you ever run into issues with Steampipe without any console error output: just make sure to run the following line after your Steampipe query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tail -n +1 ~/.steampipe/logs/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you've configured your Fargate task to write to a CloudWatch log group, you will then find all of the log output there, with an additional line marking each file name. This helped us tremendously, especially when debugging issues around assuming the role for cross-account queries. To build on this, it's probably also possible to stream newly appended lines directly to &lt;code&gt;stderr&lt;/code&gt;, so they show up live while the Steampipe command is still running. We didn't spend the time to implement this in our Docker image, since we only needed it for temporary debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Possible next steps
&lt;/h2&gt;

&lt;p&gt;Our solution works, but it is not super sophisticated in all areas. As mentioned earlier, we could enhance it with a framework that lets us supply a collection of queries, the sinks where the data should be sent, and potentially the output format. It would also be possible to create multiple scheduled tasks using the same Docker image, so that each query can have a custom schedule; a rough sketch of this idea follows below. But if it's okay to run all queries at the same cadence, then they could also run in the same task. That would have some benefits: the initial setup is only done once, and subsequent queries might be sped up by the queries that came before them.&lt;/p&gt;
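
&lt;p&gt;Here is that multi-task sketch, reusing the resources from the stack above (it would sit inside the stack's &lt;code&gt;__init__&lt;/code&gt;); the queries and schedules are purely illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical sketch: one scheduled Fargate task per query, all sharing
# the same Docker image. Queries and schedules below are illustrative.
QUERIES = {
    "cfn-tags": (
        "SELECT name, tags FROM aws_cloudformation_stack;",
        appscaling.Schedule.cron(minute="50"),
    ),
    "public-buckets": (
        "SELECT name FROM aws_s3_bucket WHERE bucket_policy_is_public;",
        appscaling.Schedule.cron(minute="0", hour="6"),
    ),
}

for name, (query, schedule) in QUERIES.items():
    task_def = ecs.FargateTaskDefinition(
        self,
        f"{service_name}-{name}-taskdef",
        cpu=512,
        memory_limit_mib=2048,
        task_role=task_role,
    )
    task_def.add_container(
        f"{service_name}-{name}",
        image=ecs.ContainerImage.from_ecr_repository(repository=repository, tag=VERSION),
        memory_limit_mib=2048,
        logging=ecs.LogDrivers.aws_logs(stream_prefix=f"{service_name}-{name}"),
        environment={"STEAMPIPE_QUERY": query},
    )
    ecs_patterns.ScheduledFargateTask(
        self,
        f"{service_name}-{name}-scheduled-task",
        cluster=cluster,
        scheduled_fargate_task_definition_options=ecs_patterns.ScheduledFargateTaskDefinitionOptions(
            task_definition=task_def
        ),
        schedule=schedule,
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;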

&lt;p&gt;At a certain point it might make sense to actually run the task continuously and use Steampipe in its &lt;a href="https://steampipe.io/docs/managing/service" rel="noopener noreferrer"&gt;service mode&lt;/a&gt;. We could open up a protected interface to that Steampipe service from the outside, and could then directly perform queries from other components in our infrastructure and SaaS landscape. Or we could start up very lightweight Lambda functions that run queries against the service and push the data where it needs to go. There is probably a certain break-even point where it makes sense to run Steampipe in service mode, but we don't think we have reached it yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post we've shown how to run cross-account Steampipe queries in scheduled AWS Fargate tasks. While the initial setup didn't go super smoothly, we hope that this blog post provides some helpful guidance should you want to build something similar. For us, the solution has been running stably for the few weeks since we took it live, and we are confident that we'll build upon this approach to tackle future challenges. Also, up to this point we didn't really have to build any software from the ground up; we just had to wire everything together correctly. Now we can benefit from great tools like Steampipe that help us understand our own cloud landscape better.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>steampipe</category>
      <category>serverless</category>
    </item>
    <item>
      <title>From tech hypes to autonomous leadership: an interview with FINN CTO Andreas Stryz</title>
      <dc:creator>Chris Onrust Meyns</dc:creator>
      <pubDate>Wed, 14 Jun 2023 08:27:00 +0000</pubDate>
      <link>https://forem.com/finnauto/from-tech-hypes-to-autonomous-leadership-an-interview-with-finn-cto-andreas-stryz-3j88</link>
      <guid>https://forem.com/finnauto/from-tech-hypes-to-autonomous-leadership-an-interview-with-finn-cto-andreas-stryz-3j88</guid>
      <description>&lt;p&gt;Starting out in engineering can be both thrilling and a bit daunting. How to distinguish one tech hype from the next? What to do with those tricky career decisions that will inevitably come your way? Fear not, we’re here to help! We sat down with &lt;a href="https://www.linkedin.com/in/andreasstryz/"&gt;Andreas Stryz&lt;/a&gt;, Chief Technology Officer (CTO) and co-founder of &lt;a href="https://www.finn.com"&gt;FINN&lt;/a&gt;, who has extensive experience as an engineer and in technical leadership. He shares his thoughts on the power of giving autonomy, the impact of no-code tools, and focusing on the tech you love. So grab a cuppa and let's dive into some key insights from a seasoned tech leader.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hi Andi 👋 and thank you for talking with me today. To start things off, could you say something about what your current role as Chief Technology Officer (CTO) at FINN involves?
&lt;/h2&gt;

&lt;p&gt;Sure! I don't have a typical day. Very often I will be working on a range of completely different topics. About 60% of my job is one-on-one meetings. This includes talking with people, listening to their challenges and successes, and occasionally giving guidance on the decisions they face. Next, 10-20% of the job involves participating in meetings, such as our Senior Leadership meetings, where we talk about big, strategic decisions regarding the future of the company. And the rest is basically random stuff. It can be recording a podcast, giving a conference talk, things like that.&lt;/p&gt;

&lt;p&gt;What is the same, though, is that each of my days has to be highly structured. Because I can be chaos in my private life. Totally unstructured. At the same time I can have a very high discipline, for example in sports. So to bring that structure, I now use my calendar for everything. Not just for meetings, but also to place blockers to work on specific topics. My calendar is my boss -- if it's in my calendar, I will do it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Are you mainly involved in tech strategy, or also in some nitty-gritty engineering decisions?
&lt;/h2&gt;

&lt;p&gt;Today I am purely involved in tech strategy. I actively try to avoid getting into any nitty-gritty technical decisions. Because I’ve been an engineer for a long time, and with the high level that I'm operating on right now, there's a good risk that any of my statements on detailed engineering decisions would do more damage than good. People might cling to something I mentioned and say: ‘Hey, but Andi told us to …’ Even though I never, ever told anyone to do anything. That's why it's really important for me in my current role not to get involved in detailed technical decisions within the company. &lt;/p&gt;

&lt;p&gt;In the early phases of FINN, the situation was completely different of course. Then I was the entire engineering team. But at the current size of FINN, with over 400 employees, it is important to me to avoid getting involved in any technical decision. Which is hard, because I am (was) a really good tech guy 😀&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For me, a CTO is someone who makes the life of the whole company better, easier, simpler, faster, by technology.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I do not miss being involved in these detailed technical decisions at all though. I have worked as an engineer myself for a long time. That extensive experience is a great foundation for technical leadership, because it means that I can easily identify bullshit. If someone is bullshitting me on a technical level, I can spot that very, very fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do you like best about your current position?
&lt;/h2&gt;

&lt;p&gt;What I like best is actually not based on my position itself. It's because of the company. I love to work with young, fresh-minded, and fast-thinking people. I need this environment to feel alive, like a flower. Without such a stimulating environment, I would just feel sad and perish and die and that's it. So for me it’s really the high-energy company environment that powers me up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Was there a key moment that helped you get where you are now?
&lt;/h2&gt;

&lt;p&gt;There were two moments from which I learned the most in my previous job. One was when, on a Friday morning, I was told I had to fire multiple people on the same day because of a financial situation. At that time, I was not on the level where I knew about any of this in advance. I just had to execute. Moreover, it wasn't even my decision whom to fire. It was simply everyone who was still in their probation period. It had nothing to do with skill -- it was just bad timing. That hurt me a lot. &lt;/p&gt;

&lt;p&gt;Number two was when my team almost doubled within a single week. Initially, I was working with around 40 engineers. A few days beforehand, it was announced that I would now also get to lead additional teams. There had been two separate development teams, which then got consolidated into one. Which meant that all of a sudden I was the manager not of 40, but of almost 80 people -- 40 additional people, some of whom I had never met before in my life, or even heard their names. It took me a good one and a half years to really get full buy-in to the way I manage people. Those two experiences were very important for my career, because I learned a lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  What would you say has been the biggest change within the field of engineering since you started out?
&lt;/h2&gt;

&lt;p&gt;From a technical perspective it’s always the same thing. Always, always the same, I can guarantee you. I started coding in 1996. This was &lt;a href="https://www.qbasic.net/"&gt;QuickBASIC&lt;/a&gt;. Since then, it’s always been the same principles. What you do is you apply those well-known principles to different entities -- different code, different data. Of course, over the years there have been a lot of great new technologies that I have worked with. But take for example &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;: it's not a milestone. It's orchestration. You can orchestrate code, you can orchestrate environments, you can orchestrate everything. So from a broad technical perspective it’s always the same. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;the impact of no-code breaks through this little engineering bubble. With no-code tools, it's now convenient enough to equip non-tech people and give them another tool to be more productive.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The one really major change that for me had this &lt;em&gt;wow!&lt;/em&gt;-impact is the whole &lt;a href="https://www.nocode.tech/what-is-no-code"&gt;no-code movement&lt;/a&gt;. This is because the impact of no-code breaks through this little engineering bubble. With no-code tools, it's now convenient enough to equip non-tech people and give them another tool to be more productive. To me, no-code is similar to &lt;a href="https://www.cfo.com/technology/2003/09/spreadsheets-forever/"&gt;when Excel was introduced to the whole business environment&lt;/a&gt;, and the impact that had.&lt;/p&gt;

&lt;p&gt;Because with no-code, it’s not about the tech people. At FINN, approximately 20% of the workforce are engineers, while 80% are non-engineers. If, as a CTO, you just focus on the engineering part, you're only focusing on the 20%. Why not on the 80%? Everyone is talking about the 80:20 rule, and so everyone just ignores the 80%. Wtf? For me, a CTO is not someone who's focusing on engineering exclusively. A CTO is someone who makes the life of the whole company better, easier, simpler, faster, through technology. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why is it important to you that people feel more productive?
&lt;/h2&gt;

&lt;p&gt;The problem is always: How do you measure productivity? For instance, some people would say that mobile phones changed our world. Yes, but they haven't increased productivity -- if anything, productivity decreased, I would say. Of course I do think that with today’s tools, people are much more productive. There's definitely an impact there. But ultimately, the question is: when you leave your workspace or finish your work for the day, do you feel like you did something essential? That's what it’s all about for me.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is your approach for keeping up to date with new technologies or developments in the field?
&lt;/h2&gt;

&lt;p&gt;I recently discovered a podcast, called &lt;a href="https://www.doppelgaenger.io/"&gt;Doppelgänger&lt;/a&gt; (it’s in German), which is among my favourite podcasts. It's nice, digestible, and you can listen to it while running. But overall my best way to stay up to date with what is happening right now is doing interviews. Reading CVs, and interviewing people. As simple as that. All the core technologies that we use here at FINN -- Node.js or TypeScript, Python, Go, or also Serverless -- I have never used in my professional career so far. Of course, I have some basic experience with Python, but I have never worked with it professionally. Yet from reading people’s CVs, and then talking about it in interviews, I get to understand what is currently the hottest shit out there. &lt;/p&gt;

&lt;h2&gt;
  
  
  How do you cut through the hype?
&lt;/h2&gt;

&lt;p&gt;To be clear: I'm not trying to stop or avoid any hype. I try to understand whether it's legit or not. Whether it is something worth investing energy into. And the way I do that is also through interviews. In an interview I might ask: Why are we doing that? For instance: Why should we use React? Then you go into discussion, and if the discussion makes sense and the arguments are strong, then: Hey, do it. So this process allows me to see: Is it just hype or not? But overall, for the wider company, I want to give autonomy to the teams so that they can try things out. &lt;/p&gt;

&lt;p&gt;But let's also step back and think about: Why is this whole engineering sphere so sensitive to hype? Why?? Engineers are more hype-sensitive than TikTok. (I know that’s a populist statement 😂) As I see it, in the last decades, engineers got loaded with simple, annoying requests. Always the same bloody tasks. So, what do you do, as a human? As a creative, purpose-driven person? You execute, but at least you try to use different tools to solve the same problem, just to have something new and interesting to add. That's why engineers are so easily affected by technical hype. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ultimately, the question is: when you leave your workspace or finish your work for the day, do you feel like you did something essential? That's what it’s all about for me.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For me this also comes back to the no-code movement we talked about earlier. If you equip business people with the tools to implement automations in a simple way, then you can reduce the number of annoying ticket requests for the engineering teams. That will ensure that engineers can generate more value with the well-known tools they already have, without having to jump on the next hype train to keep things interesting. The problems themselves -- the new, attractive, complicated problems -- will provide enough brain food without having to dive into the next hype. &lt;/p&gt;

&lt;h2&gt;
  
  
  What advice would you have for someone who's starting out in engineering right now?
&lt;/h2&gt;

&lt;p&gt;Just the sheer number of different technologies out there … If I had to start out afresh today, I would be overwhelmed. So, to someone starting out in engineering today I would say: Pick the technologies and tools that make things most fun for you. Try to find what that is for you, and then focus on that. When I started, it was PHP, Python, or C. Or maybe some other niche programming language. It was simpler back then. So I'm not jealous of the young people starting out right now. &lt;/p&gt;

&lt;p&gt;My advice in terms of career development would be: only go into the people management part of things if you're actually curious about people. Don’t do it if you think people management is something you somehow need to do to get more money -- which, by the way, is not true. To go into tech leadership, you need to be curious about people. If you have that, then you can take the people management path. But if that’s not you, then really try to become an expert in your chosen area and hyper-focus on that.&lt;/p&gt;

</description>
      <category>strategy</category>
      <category>career</category>
      <category>leadership</category>
      <category>tech</category>
    </item>
    <item>
      <title>Driving the future of mobility with an AI Hackathon at FINN</title>
      <dc:creator>FINN Admin</dc:creator>
      <pubDate>Wed, 07 Jun 2023 12:12:53 +0000</pubDate>
      <link>https://forem.com/finnauto/driving-the-future-of-mobility-with-an-ai-hackathon-at-finn-27e0</link>
      <guid>https://forem.com/finnauto/driving-the-future-of-mobility-with-an-ai-hackathon-at-finn-27e0</guid>
      <description>&lt;p&gt;How do you get a bunch of disruptive minds together in a (virtual) room to build innovative AI applications? Organize a hackathon! During the FINN feat. AI Hackathon held on May 30, participants embarked on a one-day challenge to generate measurable business impact by leveraging AI to solve problems. Solutions developed ranged from automated vehicle damage recognition and GPT-4-powered dynamic car pricing to a bot to answer queries about vehicles directly from Slack.&lt;/p&gt;

&lt;p&gt;In this article, we’ll showcase the two winning projects: one winner from the teams competing at FINN’s base in Germany, and the other from the New York City headquarters. Get ready for some inspiration on hacking your business with AI 🧠🤖&lt;/p&gt;

&lt;h2&gt;
  
  
  Winner 🇩🇪: Finding your dream car with the power of AI
&lt;/h2&gt;

&lt;p&gt;Picture discovering your perfect car with just a few clicks. That’s the remarkable power of personalization. Did you know that &lt;a href="https://segment.com/pdfs/State-of-Personalization-Report-Twilio-Segment-2023.pdf"&gt;56% of consumers&lt;/a&gt; state they would become repeat buyers after a personalized experience?&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;p&gt;How can we at FINN leverage personalization to bring the car dealership experience to our customers on the go?&lt;/p&gt;

&lt;p&gt;That’s the question our winning team in Germany asked themselves. &lt;a href="https://www.linkedin.com/in/sofiane-zeghoud-152b15176/"&gt;Sofiane Zeghoud&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/ishtiaquezafar/"&gt;Ishtiaque Zafar&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/sofyadurneva/"&gt;Sofya Durneva&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/robert-ghazaryan/"&gt;Robert Ghazaryan&lt;/a&gt; were driven by a shared vision: transforming the way our customers engage with car subscriptions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nxWRKQMX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omos62npmpjqyiv25mdu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nxWRKQMX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omos62npmpjqyiv25mdu.jpg" alt="From left to right: Sofya Durneva, Sofiane Zeghoud, Ishtiaque Zafar, and Robert Ghazaryan." width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;From left to right: Sofya Durneva, Sofiane Zeghoud, Ishtiaque Zafar, and Robert Ghazaryan.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The team noticed that we currently offer a uniform product listing page (PLP) experience to everyone. The problem? Our customers are all unique and have distinct needs and preferences. We want to meet them where they are.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;

&lt;p&gt;In just a few hours, the team created a fully functional recommendation engine prototype for our website, delivering personalized suggestions that instantly connect with each customer.&lt;/p&gt;

&lt;p&gt;For a smooth customer experience, the engine uses already available analytics data to display the most relevant vehicles on the PLP. Imagine effortlessly navigating our website and being greeted by a vibrant display of vehicles that are exactly what you’re looking for. The “Help me pick a car” chat box becomes your trusted advisor that can always find the perfect car for your taste.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TwUAGDCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6epcuqkzwp7oi8o50ucf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TwUAGDCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6epcuqkzwp7oi8o50ucf.png" alt="Personalized car recommendations hand picked for the visitor’s needs." width="800" height="448"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Personalized car recommendations hand picked for the visitor’s needs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By offering a seamless personalized experience, this solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;drives &lt;strong&gt;higher clickthrough rates&lt;/strong&gt; (CTR)&lt;/li&gt;
&lt;li&gt;drives &lt;strong&gt;higher conversion rates&lt;/strong&gt; (CVR) to subscription&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;reduces paid marketing costs&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is how the magic happens: Sofiane leveraged the power of deep learning to anticipate each visitor’s preferences for our top car models and list the models accordingly. Under the hood, the “Help me pick a car” feature runs on GPT-4, a powerful language model, seamlessly fusing AI and user-centric design to help customers find their dream car with ease.&lt;/p&gt;
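
&lt;p&gt;&lt;em&gt;For illustration, here is a minimal sketch of how such a GPT-4-backed car picker could work. The fleet summary, prompt, and function names are hypothetical placeholders, not the team’s actual implementation; it assumes the openai Python package (v1+) with an API key configured.&lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch of a GPT-4-backed "Help me pick a car" feature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend_cars(user_wish: str, fleet_summary: str) -&gt; str:
    """Ask GPT-4 to match a free-text wish against the available fleet."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a car subscription advisor. Recommend up "
                        "to three cars from this fleet:\n" + fleet_summary},
            {"role": "user", "content": user_wish},
        ],
    )
    return response.choices[0].message.content

print(recommend_cars(
    "I need a family-friendly electric car with lots of trunk space",
    "Tesla Model Y (EV, SUV); Fiat 500e (EV, compact); BMW X3 (petrol, SUV)",
))
&lt;/code&gt;&lt;/pre&gt;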

&lt;p&gt;The winning team is already setting its sights on training the models on richer fleet data and fine-tuning the engine on more matching parameters.&lt;/p&gt;

&lt;p&gt;As we continue making mobility fun and sustainable, we invite you to stay tuned for more exciting innovations from us! But first, let’s explore the winning idea of our USA-based team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Winner 🇺🇸: Revolutionizing user acquisition calls with AI
&lt;/h2&gt;

&lt;p&gt;In the hackathon held at FINN’s NYC base, the winner was Team User Acquisition (UA), consisting of &lt;a href="https://www.linkedin.com/in/kevintallen/"&gt;Kevin Allen&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/bethanylooi/"&gt;Bethany Looi&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/anna-kohlasch-75703516a/"&gt;Anna Kohlasch&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/ericvanthuyne/"&gt;Eric Van Thuyne&lt;/a&gt;. Team UA pitched a project to revolutionize the processing of user acquisition call information with the use of artificial intelligence.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;p&gt;Team UA addressed a tension: on the one hand, calls with customers are a crucial part of the work for many people in user acquisition at FINN; on the other hand, extracting actionable information from call records is quite cumbersome. Yes, calls are recorded, but re-listening to those calls would take ages. And yes, call recordings are transcribed, but again, plowing through all those transcriptions would be a huge effort. As a result, it is currently difficult to keep track of which topics came up in an individual call with a customer, or whether there are trends or common topics across multiple calls. This means that a lot of valuable call data is left unused, or takes significant effort to use, restricting the UA department’s overall efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;

&lt;p&gt;To solve this issue, Team UA developed an AI-powered automation to facilitate quick and easy call summaries. The solution automatically summarizes individual calls, identifies customer intent (that is, whether the customer is likely to want to get a subscription), action items (such as requesting a customer’s confirmation), as well as any keywords associated with the call. With this in hand, anyone can swiftly get the gist of what was covered in a call.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DYDHvU9m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h09aunpbfsg8xsl95ovq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DYDHvU9m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h09aunpbfsg8xsl95ovq.png" alt="Note on Hubspot with estimated customer intent, a call summary, action items, and keywords" width="800" height="259"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Note on Hubspot with estimated customer intent, a call summary, action items, and keywords&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In addition, the keywords extracted from a call can be used to automatically summarize the contents of multiple calls over a specific period. A keyword cloud can give a visual summary of the common topics that user acquisition agents are dealing with.&lt;/p&gt;
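
&lt;p&gt;&lt;em&gt;As a rough sketch of this idea (not the team’s actual code), keywords collected over a period could be turned into a cloud with the open-source wordcloud package; the keywords below are placeholder data.&lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: visualize call keywords collected over a week.
# Requires: pip install wordcloud matplotlib
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# In practice these would come from the sheet the automation fills.
keywords = ["delivery", "pricing", "insurance", "delivery", "mileage",
            "pricing", "delivery", "swap", "deposit", "pricing"]

cloud = WordCloud(width=800, height=400,
                  background_color="white").generate(" ".join(keywords))

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
&lt;/code&gt;&lt;/pre&gt;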

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rTD6ySII--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ti1z86c8zrveyyqm133y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rTD6ySII--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ti1z86c8zrveyyqm133y.png" alt="A keyword cloud visually summarizes the topics of multiple calls" width="800" height="467"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A keyword cloud visually summarizes the topics of multiple calls&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;How does the solution work in practice? The AI-driven automation consists of the following steps (a sketch of the parsing step follows the list):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A phone call between a sales agent and a customer is recorded and transcribed via &lt;a href="https://www.cloudtalk.io/"&gt;CloudTalk&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Using the low-code tool &lt;a href="http://www.make.com"&gt;Make&lt;/a&gt;, an automation scenario fetches the call transcript from CloudTalk, matches the call to the customer contact details on &lt;a href="https://www.hubspot.com/"&gt;Hubspot&lt;/a&gt;, and sends the transcript to &lt;a href="https://chat.openai.com"&gt;ChatGPT&lt;/a&gt; for parsing.&lt;/li&gt;
&lt;li&gt;ChatGPT uses the call transcript to create a call summary, and to extract inferred customer intent, action items, and keywords.&lt;/li&gt;
&lt;li&gt;The Make scenario adds a note with the extracted information to the relevant customer record on Hubspot, and saves the extracted keywords in a Google sheet.&lt;/li&gt;
&lt;/ol&gt;
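
&lt;p&gt;&lt;em&gt;To make step 3 concrete, here is a minimal sketch of the kind of parsing request the automation could send. The prompt and field names are illustrative, not the team’s exact setup; it assumes the openai Python package (v1+) with an API key configured.&lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch of the transcript-parsing step (step 3 above).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def parse_call(transcript: str) -&gt; dict:
    """Summarize a call and extract intent, action items, and keywords."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Reply with JSON only, using the keys: summary, "
                        "customer_intent (high/medium/low), "
                        "action_items (list), keywords (list)."},
            {"role": "user", "content": transcript},
        ],
    )
    # A production version should validate that the reply really is JSON.
    return json.loads(response.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;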

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cQtpALbz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9a91ncsvrf30y2lxn9pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cQtpALbz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9a91ncsvrf30y2lxn9pt.png" alt="Make scenario that summarizes calls, and extracts customer intent, action items, and keywords" width="800" height="492"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Make scenario that summarizes calls, and extracts customer intent, action items, and keywords&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why it's so good
&lt;/h3&gt;

&lt;p&gt;Team UA’s AI-driven solution is expected to offer three core direct benefits to anyone working with call data in user acquisition:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time management&lt;/strong&gt; — Automated call summaries can reduce manual effort in figuring out what happened in a call, or across multiple calls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased sales&lt;/strong&gt; — The summaries can capture valuable insights that can in turn be used to train agents and enhance scripts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration&lt;/strong&gt; — The call summaries can be useful in collaboration, by allowing the easy sharing of keywords and learnings with other departments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, in the longer term the automated call summaries and keyword clouds are also expected to offer the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sales training&lt;/strong&gt; — Call summaries can be used to create training exercises and prepare agents for specific sales scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Capacity planning&lt;/strong&gt; — Having a good, quick and easy overview of call topics will enable the department to scale, anticipate resource needs, and allocate resources effectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer service and close rates&lt;/strong&gt; — Call information can ultimately be used to increase the number of subscriptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In fact, Team UA’s project was developed to such a high standard that it was implemented within days of winning the US hackathon 🥳&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;AI hackathons can be a powerful tool when you’re seeking to drive innovation and generate tangible business outcomes. At FINN, we’re definitely going to continue implementing some more of the solutions developed during the hackathon day — and come up with new ones! How about you, have you used AI to build any business solutions? What worked well, what didn’t? Let us know in the comments.&lt;/p&gt;

</description>
      <category>finn</category>
      <category>hackathon</category>
      <category>ai</category>
    </item>
    <item>
      <title>AI and Talent Acquisition</title>
      <dc:creator>nivbat1</dc:creator>
      <pubDate>Sun, 23 Apr 2023 22:04:52 +0000</pubDate>
      <link>https://forem.com/finnauto/ai-and-talent-acquisition-5anb</link>
      <guid>https://forem.com/finnauto/ai-and-talent-acquisition-5anb</guid>
      <description>&lt;p&gt;Talent acquisition is a super critical function for any organisation that aims to attract and retain top talent in an increasingly competitive marketplace! The traditional approach to talent acquisition has been very labour-intensive and time-consuming, involving daily manual screening of resumes, conducting interviews, and evaluating candidates based on both subjective and objective criteria. However, advancements in AI  are creating massive disruptions to the way organisations approach talent acquisition, providing a more efficient and effective way to identify the best fit and best add candidates for the job.&lt;/p&gt;

&lt;p&gt;Here, we will take a look at the ways in which AI is changing internal talent acquisition practices and the benefits it provides. We will also take a look at the potential challenges and limitations of these new functionalities and how to address, avoid or at least be aware of them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-powered recruitment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI-powered recruitment involves using AI algorithms to automate and streamline the recruitment process. These tools can assist with a range of tasks, from screening resumes to conducting interviews and evaluating candidates. &lt;/p&gt;

&lt;p&gt;Some cool tools to check out are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;metaview.ai - AI interview note-taker&lt;/li&gt;
&lt;li&gt;Turing.com - AI-assisted sourcing&lt;/li&gt;
&lt;li&gt;seekout.com - Talent analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the most significant benefits of AI-powered recruitment is the ability to reduce bias in the hiring process. Traditional recruitment methods are often highly susceptible to unconscious bias, leading to a lack of diversity in the workforce. AI algorithms, on the other hand, can be programmed to eliminate bias by focusing on objective criteria such as skills, experience, and qualifications.&lt;/p&gt;

&lt;p&gt;Another benefit of AI-powered recruitment is the ability to handle large volumes of data. Algorithms can analyse and process vast amounts of data, enabling organisations to identify the best candidates from a large pool of applicants quickly. This can significantly reduce the time and resources required for the recruitment process, allowing organisations to focus on other critical business functions.&lt;/p&gt;

&lt;p&gt;AI-powered recruitment also provides a more personalised experience for candidates. Chatbots and virtual assistants powered by AI can answer candidates' questions and provide them with real-time feedback which creates a more engaging and interactive experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-powered talent management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI is not only transforming the recruitment process, but it is also changing the way organisations manage their talent. AI-powered talent management tools can assist with a range of tasks, from onboarding new employees to managing performance and career development.&lt;/p&gt;

&lt;p&gt;One of the most significant benefits of AI-powered talent management is the ability to provide personalised development opportunities for employees. AI algorithms can analyse employee data and provide recommendations for training and development based on their skills, strengths, and weaknesses. This can help organisations create a more agile and adaptable workforce, capable of responding to changing business needs.&lt;/p&gt;

&lt;p&gt;AI-powered talent management tools can also help organisations identify employees who are at risk of leaving the organisation. By analysing employee data, AI algorithms can identify factors that contribute to employee turnover, such as low job satisfaction or a lack of development opportunities. This can enable organisations to take proactive steps to retain their top talent.&lt;/p&gt;

&lt;p&gt;Another benefit of AI-powered talent management is the ability to provide real-time feedback to employees. AI algorithms can analyse employee performance data and provide feedback on areas for improvement in real time. This can enable employees to make adjustments quickly and continuously improve their performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges and limitations of AI in talent acquisition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While AI-powered recruitment and talent management provide many benefits to organisations, there are also potential challenges and limitations to consider. One of the main concerns with AI in talent acquisition is the risk of perpetuating or amplifying bias. Although AI algorithms can be programmed to eliminate bias, they are only as objective as the data they are trained on. If the data used to train the algorithms is biased, the algorithms may perpetuate that bias.&lt;/p&gt;

&lt;p&gt;Another challenge is the potential for AI to overlook important human qualities that may not be easily quantifiable, such as emotional intelligence or cultural fit. While AI algorithms can analyse data such as skills and experience, they may not be able to evaluate intangible qualities that are critical to job success.&lt;/p&gt;

&lt;p&gt;Finally, there is also the concern that AI-powered talent acquisition may replace human judgment altogether, leading to a loss of empathy and connection with candidates. While AI can certainly streamline and automate many aspects of talent acquisition, it is essential to maintain a human touch to ensure that candidates feel valued and engaged throughout the recruitment process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Addressing challenges and limitations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To address the challenges and limitations of AI in talent acquisition, organisations can take several steps. First, it is essential to ensure that the data used to train AI algorithms is diverse and free from bias. This may involve working with a diverse range of data sources and regularly auditing algorithms to ensure that they are not perpetuating bias.&lt;/p&gt;
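
&lt;p&gt;&lt;em&gt;As one hedged, simplified example of what such an audit could look like: the “four-fifths rule” heuristic compares selection rates across candidate groups. The data and threshold below are placeholders for illustration only, not a complete fairness methodology.&lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: a simple adverse-impact audit of screening outcomes.

def selection_rate(outcomes):
    """Share of candidates a screening tool advanced (1) vs. rejected (0)."""
    return sum(outcomes) / len(outcomes)

# Placeholder data for two candidate groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# Ratios below 0.8 are a common red flag for adverse impact.
if ratio &lt; 0.8:
    print(f"Impact ratio {ratio:.2f}: review the screening model for bias.")
else:
    print(f"Impact ratio {ratio:.2f}: no four-fifths flag raised.")
&lt;/code&gt;&lt;/pre&gt;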

&lt;p&gt;Second, organisations should use AI as a tool to augment human judgment, rather than replace it altogether. This means that while AI algorithms can assist with tasks such as resume screening and interview scheduling, human recruiters should still be involved in the process to evaluate intangible qualities and ensure that candidates feel valued.&lt;/p&gt;

&lt;p&gt;Finally, it is essential to maintain a human touch throughout the recruitment process. This may involve incorporating video interviews, virtual career fairs, and chatbots that can provide real-time feedback to candidates. By creating a more engaging and interactive recruitment experience, organisations can ensure that candidates feel valued and engaged throughout the process.&lt;/p&gt;

&lt;p&gt;In conclusion, AI is transforming the way organisations approach talent acquisition, providing a more efficient and effective way to identify the best-fit candidates for the job. AI-powered recruitment and talent management provide many benefits, including reduced bias, improved efficiency, and a more personalised experience for candidates.&lt;/p&gt;

&lt;p&gt;However, there are also potential challenges and limitations to consider, including the risk of perpetuating bias, overlooking important human qualities, and replacing human judgment altogether. To address these challenges, organisations should ensure that the data used to train AI algorithms is diverse and free from bias, use AI to augment human judgment rather than replace it, and maintain a human touch throughout the recruitment process.&lt;/p&gt;

&lt;p&gt;Overall, AI has the potential to revolutionise talent acquisition and help organisations build a more agile and adaptable workforce. By embracing AI-powered recruitment and talent management, organisations can gain a competitive advantage in the ever-evolving business landscape.&lt;/p&gt;

</description>
      <category>career</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to get into product management: my learnings from doing a PM internship</title>
      <dc:creator>Paula Garcia</dc:creator>
      <pubDate>Tue, 28 Feb 2023 09:48:56 +0000</pubDate>
      <link>https://forem.com/finnauto/how-to-get-into-product-management-my-learnings-from-doing-a-pm-internship-3nn2</link>
      <guid>https://forem.com/finnauto/how-to-get-into-product-management-my-learnings-from-doing-a-pm-internship-3nn2</guid>
      <description>&lt;p&gt;On the first day of my six-month internship in Product Management at FINN, I came in only to find that the team I would be joining didn’t have a full time Product Manager (PM) yet. So, would I be happy to fulfil the PM role myself for a few months? I hadn't expected to get the chance of having so much responsibility in an internship... 🧐 &lt;/p&gt;

&lt;h2&gt;
  
  
  From business administration to product
&lt;/h2&gt;

&lt;p&gt;Hi, I’m &lt;a href="https://www.linkedin.com/in/paula-garcia-de-la-varga/" rel="noopener noreferrer"&gt;Paula&lt;/a&gt; 👋 and I have a background in business. During my studies for a Bachelor's degree in Business Administration at Universidad Autónoma de Madrid, I hadn’t even considered Product as a career option. While the field is &lt;a href="https://www.productboard.com/blog/golden-age-of-product-management-trends/" rel="noopener noreferrer"&gt;getting more popular now&lt;/a&gt;, it’s not an area that is commonly thought of as accessible to business graduates. Coming from business, you might consider consulting, marketing, sales, human resources (HR), finance—but the product area can easily seem far away and out of reach, as it’s something we don’t learn much about in uni. Plus, as it is a more technical role, it can seem targeted towards engineers. &lt;/p&gt;

&lt;p&gt;Through internships I gained experience in marketing, and later worked in sales, both of which enabled me to develop tons of valuable skills. Yet I also quickly realised that sales was not for me. However, because all the jobs I had were at tech companies, I decided I wanted to learn something new and build up my technical skills. Previously I had taken a short introduction-to-coding course where we learnt the basics of HTML, CSS and some JS and had really enjoyed it, but didn’t have the time to dive deeper. This time I enrolled in an intensive web development bootcamp at &lt;a href="https://www.lewagon.com/" rel="noopener noreferrer"&gt;Le Wagon&lt;/a&gt;, to gain a better understanding of the technical aspects of websites and apps. &lt;/p&gt;

&lt;p&gt;The web development bootcamp turned out to be one of the most challenging and valuable learning experiences I’ve ever had. During the final project, I got the chance to act as a team lead and found that I enjoyed the planning, communication, and prioritisation aspects of the project. This insight led me to consider a career in product management. After completing the bootcamp and learning more about Product, I started looking for PM internships. This is how I ended up doing product management at FINN.&lt;/p&gt;

&lt;h2&gt;
  
  
  The daily grind as a product manager
&lt;/h2&gt;

&lt;p&gt;“No two days are the same,” is what I often heard about the day-to-day tasks of a product manager before starting the job. This is indeed true. There is no typical day, and the role can vary greatly depending on the company, product, and industry. As a product manager, you have to handle a wide range of responsibilities and interact with many different stakeholders. At FINN I was part of the B2B Revenue Operations (RevOps) team and the Internal Tools Engineering team, which included engineers, a technical writer, a product manager, a VP of Engineering, and a VP of Revenue Operations. In such a role, you always act as a kind of bridge between the business and engineering teams, and therefore need to be able to communicate effectively with both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y9u3wqnyg7g54myhzts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y9u3wqnyg7g54myhzts.png" alt="Zoom meeting with colleagues"&gt;&lt;/a&gt;&lt;em&gt;A RevOps team meeting with my colleagues &lt;a href="https://www.linkedin.com/in/murtazawani/" rel="noopener noreferrer"&gt;Murtaza Wani&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/mithat-berk-ozture-66a4b6136/" rel="noopener noreferrer"&gt;Mithat Ozture&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/ctambourgi/" rel="noopener noreferrer"&gt;Christophe Tambourgi&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/shpatcheliku/" rel="noopener noreferrer"&gt;Shpat Celiku&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/shun-long-hong/" rel="noopener noreferrer"&gt;Shun Long Hong&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/dominik-f%C3%BCbi/" rel="noopener noreferrer"&gt;Dominik Fübi&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/alfonsocomino/" rel="noopener noreferrer"&gt;Alfonso Comino.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While I started my internship acting as the sole product manager in the team, about three months into the job, my colleague &lt;a href="https://www.linkedin.com/in/shun-long-hong/" rel="noopener noreferrer"&gt;Shun Long Hong&lt;/a&gt; joined our team as a full-time PM. This was definitely a turning point in my time at FINN. Long was a great mentor and I learned tons from him! We spent half of our days on calls brainstorming together, so the three months we worked together were a huge learning experience for me.&lt;/p&gt;

&lt;p&gt;During my time at FINN, my schedule was filled with meetings, as you need to align with many stakeholders. At times it can feel like there are too many meetings and not enough time to complete tasks, but in reality, a big portion of the PM work actually happens during these meetings. Sprint ceremonies, such as sprint refinement and sprint planning, are crucial. Sprint refinement is where we refine the tickets that the engineers will tackle during the next sprint. Sprint planning is an opportunity to review what was accomplished in the previous sprint, as well as the moment when the tickets for the upcoming two weeks are assigned to the engineers—therefore it’s when the team can estimate what we think will be achieved. As a PM, you have to prepare and run these meetings. Additionally, there are the sprint retrospective meetings with the team. This is by far my favorite session, as it’s a safe space where past sprints are evaluated and team members provide feedback, share ideas for improvement, and can raise any issues encountered. Finally, in addition to preparing and running these core meetings, as a PM you need to be available for your engineers at all times, in case there are any questions, issues, or blockers related to their work that you need to help solve.&lt;/p&gt;

&lt;p&gt;A significant part of the role also includes writing product requirements documentation, creating epics, writing tickets, and backlog grooming. Product roadmapping is also very important: it involves estimating the work and new initiatives for the upcoming weeks and months, and therefore includes determining whether there is capacity for new ideas and requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  What skills do you need to be successful in product management?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyzfeeedx8btomwkdt63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyzfeeedx8btomwkdt63.png" alt="Framed picture with the text "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I would say there’s a variety of skills needed to become a successful product manager: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Strategic thinking:&lt;/strong&gt; The ability to think strategically and understand the big picture is essential for product managers. You need to be able to identify customer needs, research market opportunities, and create a product roadmap that aligns with the company's goals.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Communication and collaboration:&lt;/strong&gt; As a product manager, you will need to be able to communicate effectively with a wide range of stakeholders, including customers, business leaders, cross-functional teams, and external partners. You also need to be able to collaborate effectively with these groups to drive the product development process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Project management:&lt;/strong&gt; Product managers need to be able to manage the product development process from concept to launch, including gathering and prioritising product requirements, working with cross-functional teams, and analysing and reporting on product performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Technical skills:&lt;/strong&gt; Even if you don't have a background in engineering, as a product manager you will need to have a basic understanding of the technical aspects of the product. Understanding the technology enables PMs to better communicate with their engineers and make more informed decisions about product development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Analytical skills:&lt;/strong&gt; Product managers need to be able to analyse data and use it to make informed decisions about the product. You should be able to use data to measure product performance, identify trends and customer needs, make decisions about product features and prioritise.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How product management differs from being a tech lead
&lt;/h2&gt;

&lt;p&gt;The role of a product manager is different from that of, for example, a tech lead. PMs focus on overall product strategy and vision. As a PM, you are responsible for the entire product lifecycle, make decisions related to product strategy and prioritisation, and interact with a wide range of stakeholders. Tech leads, on the other hand, focus on leading and managing the technical aspects of a project. As a tech lead, you primarily interact with the engineering team, and will make decisions related to the technical direction of the product, including which technologies to use, while managing the development timeline and ensuring that the product is delivered on time. &lt;/p&gt;

&lt;p&gt;In my experience, product managers and tech leads should collaborate closely. Tech leads play a crucial role in the technical refinement of tickets and addressing any issues raised by engineers during standups. They also bring attention to technical debt, which may not be immediately apparent from a business perspective, and they can help ensure that adequate time and resources are allocated to addressing tech debt. Additionally, tech leads provide valuable insight on the technical feasibility of initiatives proposed by the business side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Highlights and challenges
&lt;/h2&gt;

&lt;p&gt;One of the biggest highlights of my product management internship was seeing the tangible impact of the work we did—this was definitely what I enjoyed the most from the PM work. For example, we developed and implemented a tool for the B2B Customer Success team that streamlined their subscription management, saving them a significant amount of time and reducing their need to search for information across multiple sources. Through user interviews, we gained an understanding of their daily workflow, needs, and actions performed, and used this information to put together the product requirements. Being able to witness the entire product development process from start to finish, and also giving a training session to the team to demonstrate the value of the tool was a great way to deliver results during my time at FINN.&lt;/p&gt;

&lt;p&gt;In the beginning, running the refinement and planning meetings with the engineering team seemed like a big challenge to me. This was the time when &lt;a href="https://en.wikipedia.org/wiki/Impostor_syndrome" rel="noopener noreferrer"&gt;impostor syndrome&lt;/a&gt; hit hard, as I felt everything was way too technical for me. But &lt;a href="https://www.linkedin.com/in/cpoetter/" rel="noopener noreferrer"&gt;Christian Pötter&lt;/a&gt;, our VP of Engineering, as well as all the other engineers in the team, were always there to support me and clear up any questions or doubts. Besides, as a PM you’re not expected to know all the technical details; that falls within the domain of the engineers, which I came to see more clearly during my first months. Everybody I worked with was incredibly approachable, supportive and always willing to help. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzet4oni3wv1yhd4vhpso.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzet4oni3wv1yhd4vhpso.jpg" alt="Team event picture"&gt;&lt;/a&gt;&lt;em&gt;One of our team events last summer&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Another highlight of my experience at FINN was the opportunity to work with incredibly talented individuals. Despite the majority of the team working remotely, we still managed to maintain a strong team dynamic and were always there to support each other, also celebrating each other’s successes! This was in large part due to the efforts of our managers to foster a positive and collaborative environment, which I don’t think is that easy in a remote setup. I was able to visit the FINN Munich office several times and it was really amazing to work together with my colleagues and also to enjoy great team events together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips for getting started in product management
&lt;/h2&gt;

&lt;p&gt;Don’t be scared! And don't let job requirements intimidate you. Even if you don't have all the qualifications or technical knowledge listed, trust in your ability to learn on the job. Impostor syndrome will be there, but remember: you won't be alone in this process, there's a team to help you figure things out. &lt;/p&gt;

&lt;p&gt;Being thrown in at the deep end from day one was daunting at first, but now I can only be super grateful for this experience: the fact that I was given so much trust and responsibility as an intern enabled me to learn and grow immensely. &lt;/p&gt;

&lt;p&gt;So: be patient, learn when to say no, and be flexible. Priorities in a company can change quickly, so be prepared to adjust your plans accordingly! &lt;/p&gt;

</description>
      <category>career</category>
      <category>product</category>
      <category>beginners</category>
      <category>productmanagement</category>
    </item>
    <item>
      <title>Strategy is delivery: Scaling 10x through ownership and trust</title>
      <dc:creator>Verena Ermes</dc:creator>
      <pubDate>Mon, 27 Feb 2023 13:20:59 +0000</pubDate>
      <link>https://forem.com/finnauto/strategy-is-delivery-scaling-10x-through-ownership-and-trust-232p</link>
      <guid>https://forem.com/finnauto/strategy-is-delivery-scaling-10x-through-ownership-and-trust-232p</guid>
      <description>&lt;p&gt;At FINN, we are determined to build the most popular car subscription platform. We are on a promising path and hit $100 million annualized recurring revenues (ARR) in 2022, in our third year of existence. Supporting this immense growth we need technology. But as the company changed so much in only three years, our technological needs changed just as much. &lt;/p&gt;

&lt;p&gt;As my colleague &lt;a href="https://www.linkedin.com/in/ishtiaquezafar/"&gt;Ish Zafar&lt;/a&gt; outlined in his article &lt;a href="https://dev.to/finnauto/no-code-isnt-scalable-our-learnings-at-finn-going-from-1000-toward-100000-car-subscriptions-50l0"&gt;No-code isn’t scalable&lt;/a&gt;, we were facing a range of technical challenges that were a loud cry for action. To enable the business to grow as fast and as ambitiously as we were planning to, we had to make our technical foundation scalable. So we decided to build a next-generation car subscription platform standing on a foundation of pro-code. &lt;/p&gt;

&lt;p&gt;Now imagine a growing company that is as busy as a beehive. And then you come to realize you need to make the tiny change of turning the core tech platform upside down while the business is running ever faster. How do you manage a project like this? &lt;/p&gt;

&lt;p&gt;I was visiting the &lt;a href="https://omr.com/en/events/omr22/"&gt;OMR Festival&lt;/a&gt; in Hamburg in May last year and listened to a talk by the Delivery Hero CTO Christian von Hardenberg. He was presenting their approach to unifying the technical systems of all of Delivery Hero’s acquisitions. This talk gave me two key insights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Think 10x – a Google paradigm to innovate and &lt;a href="https://x.company/moonshot/"&gt;create moonshots&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A centralized approach may not win over a decentralized one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Moonshot mindset&lt;/strong&gt;&lt;br&gt;
Thinking 10x refers to shifting the mindset to aim for improving by 10x rather than by 10%. The idea behind it is that people get way more excited about an opportunity to make something 10 times better. The goal becomes to create breakthroughs and to be radical. An accelerator is the ability to detach from existing solutions and old assumptions. &lt;/p&gt;

&lt;p&gt;Discovering 10x thinking triggered another thought process. In my view, it also emphasizes the iterative nature of evolving a minimum viable product (MVP). Building an MVP requires some kind of agreement on what’s minimal. So, when I thought about our goal—to create a scalable car subscription platform that serves 100,000 cars—with that number in mind and 10x thinking freshly discovered, I suddenly knew how to structure this project: we would start with an MVP serving 10 cars and multiply the number of cars by 10 in each cycle, going from 10 to 100 to 1,000 … and eventually to 100k cars. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s go decentralized&lt;/strong&gt;&lt;br&gt;
FINN’s organization is split into vertical departments that are very independent, autonomous, and follow their own departmental missions. The sum of its parts results in the company mission. The Engineering department, however, is an enabling function, which means that Engineering teams are mostly distributed across all other departments. When starting the new subscription platform project, it quickly became obvious that it was going to be a cross-department effort involving teams from all departments. &lt;/p&gt;

&lt;p&gt;A decentralized organization can hold a lot of complexity. Holding the complexity of a cross-departmental project that would reach the size of coordinating ten streams, plus guiding the technical side of things, would simply have been too much for one person alone. I am very grateful to my colleague &lt;a href="https://www.linkedin.com/in/andreaperizzato/"&gt;Andrea Perizzato&lt;/a&gt; for joining me and taking ownership of the technical guidance throughout this project. I am convinced that having two perspectives on the same situation is incredibly valuable and can elevate outcomes to a higher level. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A project principle&lt;/strong&gt;&lt;br&gt;
After having the basic project structure and approach defined, we added one simple project principle for everyone to keep in mind and use as a decision aid. The principle is “strategy is delivery”. For me it has one very clear meaning: to prioritize shipping features fast. Not in order to rush the process—to the contrary. We wanted to use this project to do it right. To review processes and workflows that grew organically, and adjust them if we had learned how to do things better in the meantime. The principle underscores the spirit of building an MVP. This MVP not only aims to do things better in the long run, but also, during short-term implementation iterations, aims to ship fast to maximize feedback cycles. &lt;/p&gt;

&lt;p&gt;You could argue that the principle “strategy is delivery” has another meaning, at least in the beginning. After all, we deliver subscribed cars to our customers’ doorsteps. For the very first car (and probably the following 100 too), making that delivery happen was very much the goal. On reaching every new 10x milestone we had a special sticker counting the number of cars delivered. We just got the 1,000 cars version added to the collection. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XaoYaGkx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nnjdtj7wq9svaa5y6q3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XaoYaGkx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nnjdtj7wq9svaa5y6q3b.png" alt="Proudly presenting our sticker collection of the first four milestones" width="880" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zooming out to zoom in&lt;/strong&gt;&lt;br&gt;
Okay, I hear you asking, but how do you get going? My personal preference is to first get context and see the bigger picture, so that I know what I am working towards. I’d like to think of it as zooming out, in order to be able to zoom in. In our first scoping workshop for our MVP 10 Cars, people from all teams came together to do exactly that: zoom out, to zoom in. &lt;/p&gt;

&lt;p&gt;The tricky part about having a decentralized organization is the question of how to access distributed knowledge and make it available. Bringing relevant people around the same (figurative) table is step one. Step two is pouring out the knowledge. My biggest objective was to understand what the whole lifecycle of a car and a subscriber at FINN looks like. I knew every department works on parts of this cycle, but I couldn’t visualize the whole process. &lt;/p&gt;

&lt;p&gt;Everyone drew their core parts on a Miro board, and we could stitch it all together into one flow. We could break each step down into features and prioritize these features to match the MVP iteration’s focus. We would repeat this process for every MVP iteration and refine the overall flow as well as the next prioritized features. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o3fspiQo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mafag0wrmq8ahi0c3c64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o3fspiQo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mafag0wrmq8ahi0c3c64.png" alt="Snapshot of the first scoping workshop looking at a consolidated flow" width="880" height="353"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Give out trust and ownership instead of managing risk&lt;/strong&gt;&lt;br&gt;
When you google for project management it is highly likely that you’ll find articles on managing risk. In the context of software development I refuse to understand this concept. Building software is inherently risky, because you’ll never really be able to predict the exact outcome and timeline. So, let’s just accept that there is risk. &lt;/p&gt;

&lt;p&gt;Instead I want to give people trust and ownership. After choosing and defining the scope for every iteration collectively, I trust people to pick up the right things to work on in order to reach our joint goals. In a decentralized organization like ours, they simply know their domain best. I like to give people true ownership, because I want to avoid micromanagement at all costs. Never forget that with ownership comes freedom and responsibility. I strongly believe that when you give out trust first and foremost, and add ownership on top, most people won’t risk you pulling the plug, because they enjoy their radius of operation and know their impact. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See the gaps&lt;/strong&gt; &lt;br&gt;
For following through, it is essential to have an overview at all times and to never lose track of the target picture. Establish a weekly stage to exchange updates, challenges, and successes between teams. This will provide two opportunities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It creates accountability &lt;/li&gt;
&lt;li&gt;It can uncover gaps &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You want week-by-week accountability to encourage timely progress on the defined scope. More importantly, I think it is crucial to see gaps when connecting the dots week by week. This way you can course-correct without much delay and manage the project’s successful progress effectively. By default, you are not waiting for crashes to occur and adjusting direction only after they have happened; instead, you are trying to read the ocean and navigate it based on your observations (which still leaves room for crashes). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Communicate interdependencies&lt;/strong&gt;&lt;br&gt;
Lastly, and this is especially true for a cross-department project, communicating interdependencies right from the beginning is key. Make this step mandatory, prior to any line of code being written or any automated workflow being built. For whenever there are dependencies between multiple teams it is most important to get back onto that figurative table with the teams involved and get clarity on each other's requirements and dependencies. We aim to design solutions for each other, because every team has stakeholders to serve and we highly value being customer-first at &lt;a href="https://www.finn.com/"&gt;FINN&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://www.linkedin.com/pulse/strategy-delivery-scaling-10x-through-ownership-trust-verena-ermes"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>startup</category>
    </item>
    <item>
      <title>How to "Make" your no-code backend reliable, secure and maintainable</title>
      <dc:creator>CorneliusSchramm</dc:creator>
      <pubDate>Thu, 02 Feb 2023 22:55:51 +0000</pubDate>
      <link>https://forem.com/finnauto/how-to-make-your-no-code-backend-reliable-secure-and-maintainable-270h</link>
      <guid>https://forem.com/finnauto/how-to-make-your-no-code-backend-reliable-secure-and-maintainable-270h</guid>
      <description>&lt;p&gt;&lt;em&gt;Many thanks to my colleague &lt;a href="https://www.linkedin.com/in/chrisonrustmeyns/?originalSubdomain=dk" rel="noopener noreferrer"&gt;Chris Meyns&lt;/a&gt; for helping me put together this article!&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.make.com" rel="noopener noreferrer"&gt;Make&lt;/a&gt; (formerly ‘Integromat’) is a highly functional low-code tool that can help you complete complex tasks, quickly. "Low-code, and Make in particular, are incredible tools for rapidly creating a proof-of-concept (POC) of new processes and iterating on existing ones. It can increase your organization's product velocity by an unbelievable factor, especially when starting from a base of zero automations. It is the not quite so secret sauce of how we at FINN managed to get to $100 million ARR in just 3.5 years. Low-code tools are not a silver bullet, however, and come with their own set of limitations compared to traditional pro code software development. If not built properly, and at a certain level of complexity, they can get very cumbersome to maintain.and difficult to debug, find errors and fix key processes. However, through trial and error, we have found that there are some clear principles and best practices that allow you to get further using low-code than is commonly expected before needing to switch to pro-code solutions.&lt;/p&gt;

&lt;p&gt;We use a fair bit of Make at FINN. So I thought it would be worth sharing some of the insights that we've collected on how to build better automations. In short: how to low-code like a pro. &lt;/p&gt;

&lt;p&gt;In this article I want to present you with &lt;strong&gt;four main principles&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 - Complexity and Modularization&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Make is visual programming; therefore using the right patterns matters. Make it modular and break complexity down into individual pieces.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;2 - Orchestration Through API-Like Behaviour&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Orchestrate the communication in your network of scenarios efficiently with API-like behaviour and HTTP protocols.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;3 - Application Monitoring and Organisation&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Effectively monitoring and debugging is crucial in a low code environment. TraceIds, standardized alerts, naming conventions and capacity dashboards help you with that.  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;4 - Quality of Life Hacks&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Make your life as easy as possible. The DevTool, a search interface and our scenario template can be huge time savers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I will showcase some hypotheses I have on good and bad design principles for Make, and in addition will share some best practices that we have learned so far.&lt;/p&gt;

&lt;h1&gt;
  
  
  Principle 1 - Complexity and Modularization
&lt;/h1&gt;

&lt;p&gt;Make is really just visual programming, so it's important that you use the right patterns. The first principle that I'm reasonably confident about is: make it modular. Avoid building big scenarios. Try not to cram too much logic into one scenario, but instead split off individual scenarios to accomplish each task. Individual pieces of logic should be individual scenarios, so that you can orchestrate more of a network of Make scenarios. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj5d6eagx1hmfzxoi0dx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj5d6eagx1hmfzxoi0dx.png" alt="Example of a big Make scenario to avoid (on the left), and of the preferred, modularized alternative (on the right)&amp;lt;br&amp;gt;
" width="800" height="384"&gt;&lt;/a&gt;Example of a big Make scenario to avoid (on the left), and of the preferred, modularized alternative (on the right)&lt;/p&gt;

&lt;p&gt;When you split logic into different scenarios that trigger each other and pass states (that is, data), then that means the scenarios will need to start with webhooks. A webhook is a URL that can receive HTTP requests, and as such can enable event-driven communication between web apps or pages. In &lt;a href="https://www.make.com/en/help/tools/webhooks" rel="noopener noreferrer"&gt;Make, you can use webhooks&lt;/a&gt; such that, as soon as a certain event type occurs, that event will trigger data to be sent to the unique URL (webhook) you specify in the form of an HTTP request. That means you won’t constantly have to check another scenario, app or service for new data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ml05y3545dwt4p1b28f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ml05y3545dwt4p1b28f.png" alt="The webhook section in Make" width="800" height="347"&gt;&lt;/a&gt;The webhook section in Make&lt;/p&gt;
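
&lt;p&gt;To make the mechanics concrete, here is a minimal sketch, in Python with the &lt;code&gt;requests&lt;/code&gt; library, of what triggering another scenario via its webhook boils down to. The URL and payload are hypothetical placeholders, not our actual setup.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import requests

# Hypothetical webhook URL of the receiving Make scenario
WEBHOOK_URL = "https://hook.eu1.make.com/your-webhook-id"

# Event payload passed from the calling scenario to the receiving one
payload = {
    "event": "car_arrived",
    "vin": "WAUZZZ8V5KA123456",
    "arrival_date": "2023-01-31",
}

# One HTTP POST is all it takes to trigger the receiving scenario
response = requests.post(WEBHOOK_URL, json=payload, timeout=30)
print(response.status_code, response.text)
&lt;/code&gt;&lt;/pre&gt;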

&lt;p&gt;Why is it best to work modularly? First, because big scenarios are incredibly difficult to maintain. Even if you’re the only person using your big scenario, after a while you will likely forget what you have done. Second, because it's almost impossible for someone new to familiarize themselves with a huge scenario. Hence, it's far more effective to break things down into individual pieces and build a network of individual scenarios. That will get you a better architecture. &lt;/p&gt;

&lt;p&gt;Just like in code, you want to break things apart into their individual components. Building as modularly and as event-based as possible is most likely the right pattern for most situations. I will further illustrate this principle with a concrete example - the "car availability calculation" -  in section 2.&lt;/p&gt;

&lt;h1&gt;
  
  
  Principle 2 - Orchestration Through API-Like Behaviour
&lt;/h1&gt;

&lt;h4&gt;
  
  
  2.1 A Network of Interlinking Scenarios
&lt;/h4&gt;

&lt;p&gt;Once you have a bunch of individual scenarios, you need to orchestrate the communication in your network of scenarios efficiently. Arranging your scenarios in the right way saves you time, not just in building things, but also in debugging and fixing scenarios if something is broken. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j1zi1tfr0pp70oa1f5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j1zi1tfr0pp70oa1f5t.png" alt="Schematic overview of a Make scenario calling another scenario and receiving a success response&amp;lt;br&amp;gt;
" width="800" height="413"&gt;&lt;/a&gt;Schematic overview of a Make scenario calling another scenario and receiving a success response&lt;/p&gt;

&lt;p&gt;Let me give an example from FINN Operations. At Operations we work with compounds, which are essentially giant car processing facilities with parking lots and workshops. Compounds store our cars, mount our license plates, and stage our vehicles for final delivery to our customers. Obviously we have a lot of crucial data to share and transact upon with these providers. For example, a compound will tell us when a new car has arrived. Once we get that information, we may want to find the car in our database and update its arrival date. We may even want to trigger other consequences, such as recalculating the availability date, because now that the car is on the compound, it can be made available for an earlier delivery date.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlxwkhwm6pz188rangl4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlxwkhwm6pz188rangl4.png" alt="The FINN car lifecycle, from production to defleeting" width="800" height="292"&gt;&lt;/a&gt;The FINN car lifecycle, from production to defleeting&lt;/p&gt;

&lt;p&gt;Ideally, when we get information that a car has arrived at a compound, we would want this information to arrive in the same way, no matter what the facility is. However, all of our partners have slightly different interfaces and data structures, so integrations work slightly differently for each compound.&lt;/p&gt;

&lt;p&gt;Of course you could, technically, build a separate scenario for each compound that performs all of the relevant integration actions for that compound. But that would mean that if, say, you want to change some of the consequences or how you calculate the availability date, you would have to change each and every one of those scenarios. &lt;/p&gt;

&lt;p&gt;For that reason, a better idea is to split scenarios up. That way, you can have scenarios that are more like event collectors that simply gather the data and map it into the desired data structure, and other scenarios that act like a shared handler that always does the same thing, no matter where the call came from, given that it receives a standardized payload. &lt;/p&gt;

&lt;p&gt;Let's say that you want to recalculate the car’s availability date whenever you receive information from a compound that a car has arrived. If you have split your scenarios apart, you could then have a compound-specific scenario that receives information about when a car has arrived. It collects the input data. Next, you could have a completely general scenario that recalculates the availability date, regardless of which compound the arrival data came from. The first, input-collecting scenario could then pass the standardized data, with an HTTP call via a webhook, to the next, date-calculating scenario. The date-calculating scenario gets the input, finds the car by its Vehicle Identification Number (VIN), recalculates the availability date, and then updates the available-from date too. &lt;/p&gt;

&lt;p&gt;The beauty is: you could have ten different input-collecting scenarios, each for a different compound, that then all route into this one availability calculation scenario. And this means that if anything changes in how you want to calculate the availability date, you’ll only have to update it in one place, rather than in ten different scenarios.&lt;/p&gt;
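
&lt;p&gt;Sketched in Python, the collector-plus-handler pattern might look as follows. The field names and URL are made up for illustration; the point is that each compound-specific collector maps its raw payload into one standardized structure before forwarding it to the shared handler.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import requests

# Hypothetical URL of the shared availability-calculation scenario
AVAILABILITY_HOOK = "https://hook.eu1.make.com/availability-calculation"

def forward_arrival_from_compound_a(raw_payload):
    """Collector for compound A: map its specific data structure into
    our standardized payload, then call the shared handler scenario."""
    standardized = {
        "vin": raw_payload["vehicleIdentNo"],     # compound A's field name
        "arrived_at": raw_payload["gateInDate"],  # compound A's date field
        "source": "compound_a",
    }
    return requests.post(AVAILABILITY_HOOK, json=standardized, timeout=30)
&lt;/code&gt;&lt;/pre&gt;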

&lt;p&gt;But what if you want to add some other consequence to the initial, input-collecting scenario that requires that the availability date has been recalculated successfully? Would that not be a problem for orchestration? This is where input validation comes in.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.2 Always Validate Inputs for Increased Stability
&lt;/h4&gt;

&lt;p&gt;When you separate your scenarios, you will sometimes still need to send data back and forth, to be sure that a certain scenario actually ran successfully. Before running a scenario, we want to check that all the required input variables are included in the payload. Additionally, when passing data back between two scenarios, we want to make sure that the response payload is in a standardized format and that the called scenario has actually run successfully. If it did not run successfully, the called scenario needs to communicate that information, and we need to catch and handle the error appropriately in the calling scenario.&lt;/p&gt;

&lt;p&gt;For example, if we get an update from a compound that a car has arrived, but this car doesn't actually exist in our database, then the scenario that uses the &lt;code&gt;car_ID&lt;/code&gt; to recalculate the car’s availability date will produce an error. In turn, any other scenario that relies on the recalculated availability date should be made aware of that error. So, if an expected variable &lt;code&gt;car_ID&lt;/code&gt; is not passed to a certain scenario, you can return an HTTP 400 status code, and in the body you can state that the response code is 400 and return an error message, for example &lt;code&gt;"error": "data missing [car_id]"&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;If you build your scenarios like this, then whoever calls the scenario immediately knows why things failed. Whenever you have a webhook response, always provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a response code&lt;sup id="fnref1"&gt;1&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;a message&lt;/li&gt;
&lt;li&gt;data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While the data can be &lt;code&gt;null&lt;/code&gt;, the response code and message should always be filled. &lt;/p&gt;
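
&lt;p&gt;As a minimal sketch, here is what such standardized response bodies could look like (the field values are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical examples of the standardized webhook response body.
# The response code and message are always filled; data may be None.

success_response = {
    "response_code": 200,
    "message": "availability date recalculated",
    "data": {"car_id": "12345", "available_from": "2023-02-10"},
}

error_response = {
    "response_code": 400,
    "message": "data missing [car_id]",
    "data": None,
}
&lt;/code&gt;&lt;/pre&gt;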

&lt;p&gt;We want to avoid errors. Moreover, the webhook URLs in Make are public, which means that technically anyone with an internet connection could trigger them at any time if they knew the URL. So at any point where you expect an input from a previous module, you should have a filter that checks whether you in fact have that variable. Additionally, you can increase the stability and security of your scenario by requiring a password to be passed. This basic authentication is analogous to how pro-code APIs use API keys and tokens to authenticate API calls. A good way to manage these different API tokens and avoid hard-coding them in all scenarios is to use &lt;a href="https://www.make.com/en/blog/system-and-custom-variables" rel="noopener noreferrer"&gt;Make’s new custom variables feature&lt;/a&gt;. This allows for more streamlined maintenance of these tokens and lets you easily swap them out in case they become compromised, analogous to using &lt;code&gt;env&lt;/code&gt; variables in a pro-code CI/CD pipeline. With your filter that checks for the presence of a certain variable, if you don't find the variable, you return the relevant message and response code. Building in such basic validation will save you significant time in the long run and give you peace of mind that your scenarios are secure. &lt;/p&gt;

&lt;h4&gt;
  
  
  2.3 Security Through API Keys
&lt;/h4&gt;

&lt;p&gt;A best practice is to reject any call that does not have an API token specified. Make webhooks are basically public: anyone who knows the webhook URL could call it. You therefore want to be sure that your scenario cannot be triggered by people who are not supposed to use it, so you reject any call that doesn't carry the right password. It just adds another layer of security.&lt;/p&gt;
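
&lt;p&gt;Expressed as plain Python for clarity (in Make, this would be a filter step plus a webhook response module at the start of the scenario), the validation from sections 2.2 and 2.3 amounts to something like the following sketch. The token, field names and messages are hypothetical.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical validation at the entry point of a called scenario.
EXPECTED_TOKEN = "stored-as-a-make-custom-variable"
REQUIRED_FIELDS = ["car_id", "arrived_at"]

def validate_call(payload):
    """Return an error response dict, or None if the call is valid."""
    # Reject calls without the right API token (section 2.3)
    if payload.get("token") != EXPECTED_TOKEN:
        return {"response_code": 401, "message": "unauthorized", "data": None}
    # Reject calls that are missing required inputs (section 2.2)
    missing = [field for field in REQUIRED_FIELDS if field not in payload]
    if missing:
        return {
            "response_code": 400,
            "message": "data missing {}".format(missing),
            "data": None,
        }
    return None  # no error: the scenario can proceed
&lt;/code&gt;&lt;/pre&gt;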

&lt;p&gt;So we want most scenarios to be modular, orchestrated, and we want to validate inputs. But how do you keep tabs on such an orchestrated network of scenarios? This is where we turn to application monitoring.&lt;/p&gt;

&lt;h1&gt;
  
  
  Principle 3 - Application Monitoring and Organisation
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ua0dfaw809or27zkjyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ua0dfaw809or27zkjyn.png" alt="Overview of some key Make best practices for application monitoring (the execution log in this graphic is identical to the trace ID discussed in what follows)" width="800" height="362"&gt;&lt;/a&gt;Overview of some key Make best practices for application monitoring (the execution log in this graphic is identical to the trace ID discussed in what follows)&lt;/p&gt;

&lt;p&gt;One of the bigger downsides or limitations of Make, compared to traditional pro-code services that have advanced logging frameworks and comprehensive test coverage, is application monitoring. If left unchecked, a sprawling, ever-expanding and interlinked network of automations can quickly become a black box where no one really knows what is going on and how things work. Thankfully, there are some powerful tricks and best practices that remedy a good portion of these issues. Our approach to application monitoring for networked scenarios has several aspects, and can be set up as follows. &lt;/p&gt;

&lt;h4&gt;
  
  
  3.1 Use naming conventions
&lt;/h4&gt;

&lt;p&gt;Using naming conventions for your scenarios, and sticking to them, will save you many headaches in the long run. At FINN we use a rulebook that specifies our naming conventions for Make scenarios, so that everyone knows what to call a new scenario. It includes rules such as ‘Always put the team in front of the title of your scenario’, ‘Always include the email address of the DRI (Directly Responsible Individual) at the end of the scenario name’, and ‘Use a &lt;code&gt;draft_&lt;/code&gt; prefix when you start building a scenario that is not yet ready to be turned ON’. Naming conventions are also important for alerting.&lt;/p&gt;
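
&lt;p&gt;As a hedged illustration (our actual rulebook is more detailed, and the exact format below is made up), a convention like ‘team first, DRI email last, optional &lt;code&gt;draft_&lt;/code&gt; prefix’ can even be checked mechanically:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

# Hypothetical checker for a name such as:
#   "Operations - Recalculate availability date - jane.doe@finn.com"
NAME_PATTERN = re.compile(
    r"^(draft_)?"                  # optional prefix while still building
    r"[A-Za-z ]+ - "               # team name first
    r".+ - "                       # scenario title
    r"[\w.+-]+@[\w-]+\.[\w.]+$"    # DRI email at the end
)

def is_valid_scenario_name(name):
    return bool(NAME_PATTERN.match(name))
&lt;/code&gt;&lt;/pre&gt;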

&lt;h4&gt;
  
  
  3.2 Use standardized alerts
&lt;/h4&gt;

&lt;p&gt;The naming conventions discussed above significantly impact how we manage our error notification system. We utilize regular expressions to extract relevant information from HTML-formatted alert emails and swiftly identify the responsible person and tag them via Slack. FINN is a very Slack-heavy organization and we use Slack a lot for human-system interactions. We have a template for how we format Slack alerts, because it just makes it nicer to work with. Our template for Slack alerts for Make operations also always includes—you guessed it—the trace ID, plus a link to any other useful resources. Using standardized logs, alerts, and passing trace IDs everywhere will save you so much time in the long run.&lt;/p&gt;
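
&lt;p&gt;As a rough sketch of the extraction step (the email format here is invented for illustration), the regular expression simply exploits the naming convention to recover the team and the DRI:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

# Hypothetical alert email subject, following the naming convention above
subject = "Make scenario failed: Operations - Recalculate availability - jane.doe@finn.com"

# Pull out the team and the DRI email so the right person can be tagged in Slack
match = re.search(r"failed: ([A-Za-z ]+) - .+ - (\S+@\S+)$", subject)
if match:
    team, dri_email = match.group(1), match.group(2)
    # ...look up the Slack user by email and post the alert to the team's channel
&lt;/code&gt;&lt;/pre&gt;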

&lt;h4&gt;
  
  
  3.3 Pass a Trace ID
&lt;/h4&gt;

&lt;p&gt;Whenever we make a call from one scenario to another, we pass the execution log (the URL of the scenario combined with the execution ID: &lt;code&gt;{scenarios_URL}&lt;/code&gt;/log/&lt;code&gt;{execution_ID}&lt;/code&gt;) as a string. Passing this execution log allows you to track and store the different executions in Make. On the one hand, if you call another scenario you will see the execution ID in the headers, and you can use that URL to check out which execution failed or produced an unexpected result. Conversely, for the receiving scenario, it tells you which scenario it was called by, and therefore lets you instantly trace where calls are coming from. Always having instant access to the calling or responding scenario logs can save you hours of debugging time, and proves especially useful in 'moments of truth' where something is really going awry and quick action is needed. No matter how skilled you are, things will inevitably go wrong from time to time. Debugging effectively is crucial to building a stable automation landscape, which is why we religiously include trace IDs in everything we build.&lt;/p&gt;

&lt;p&gt;Let's jump into an example. Say that we're looking at a scenario that calculates the availability of a car. At the beginning, we define a set of return headers: a JSON string that specifies both that we're returning JSON and that we're returning the &lt;code&gt;{scenarios_URL}&lt;/code&gt;/log/&lt;code&gt;{execution_ID}&lt;/code&gt;. That means that whenever you open a scenario execution, you can find this &lt;code&gt;{scenarios_URL}&lt;/code&gt;/log/&lt;code&gt;{execution_ID}&lt;/code&gt;. Sending these logs along allows you to jump back and forth very easily. &lt;/p&gt;
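
&lt;p&gt;In Python terms, the idea is no more than the following sketch (URLs and IDs are placeholders; in Make you would assemble the same string from the scenario's built-in variables):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import requests

# Build the trace ID: the execution log URL of the current run.
# Both values are placeholders for what Make provides at runtime.
scenario_url = "https://eu1.make.com/12345/scenarios/67890"
execution_id = "abcdef123456"
trace_id = "{}/log/{}".format(scenario_url, execution_id)

# Pass the trace ID with every call, in the body and/or the headers
response = requests.post(
    "https://hook.eu1.make.com/availability-calculation",  # hypothetical hook
    json={"car_id": "12345", "trace_id": trace_id},
    headers={"x-trace-id": trace_id},
    timeout=30,
)
&lt;/code&gt;&lt;/pre&gt;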

&lt;p&gt;In short, we want to have the trace IDs literally everywhere. So: whenever you call another scenario, you pass the &lt;code&gt;trace ID&lt;/code&gt;. Whenever you respond to a call from another scenario, you pass the &lt;code&gt;trace ID&lt;/code&gt;. At FINN, passing the &lt;code&gt;trace ID&lt;/code&gt; in your calls is part of our ‘non-negotiable rules to follow’. If there's one thing to take away from this article: pass the trace ID.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provide webhook responses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition, we want to use webhook responses to tell a calling scenario whether something worked or whether there was some kind of error (and, in that case, return an error code). These are classic HTTP responses, and this is how any API communication protocol works: you return an HTTP &lt;code&gt;200&lt;/code&gt; status code if everything was good, a &lt;code&gt;500&lt;/code&gt; status code if there was a server error, or a &lt;code&gt;400&lt;/code&gt; if it was a bad request. For example, if you request a &lt;code&gt;car_ID&lt;/code&gt; that doesn't exist, you should return an HTTP &lt;code&gt;400&lt;/code&gt; status code, because it was a bad request. &lt;/p&gt;

&lt;h4&gt;
  
  
  3.4 Capacity Monitoring
&lt;/h4&gt;

&lt;p&gt;At FINN we also have a Make capacity monitoring dashboard. A big shout-out to my colleague &lt;a href="https://www.linkedin.com/in/delenamalan/" rel="noopener noreferrer"&gt;Delena Malan&lt;/a&gt;, who recently built this. You can see it as a command-center view of what is currently happening in our automation landscape. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fsc7yh35bb1nx6hc4kd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fsc7yh35bb1nx6hc4kd.png" alt="Our Integromat/Make capacity monitoring dashboard" width="800" height="439"&gt;&lt;/a&gt;Our Integromat/Make capacity monitoring dashboard&lt;/p&gt;

&lt;p&gt;Keep in mind that, like anything in life, automation compute-time is not free. We have certain limits on how much execution and computational power we use. If you build a scenario that spams another one with 50,000 records and there are 50,000 scenario calls in the queue, then you will block other processes. &lt;/p&gt;

&lt;p&gt;In the past, we’ve occasionally had instances where a customer was temporarily not able to check out, because our Make organization was over capacity and didn't have any more compute time. So always keep those computational constraints in mind. A capacity monitoring dashboard helps you to see if there is a spike in, let's say, the queue length, the rate of change, or the number of incomplete executions. It's a useful thing for everyone to look at, especially if you're wondering: ‘Why is Make so slow? Why is my stuff not running?’ Such a dashboard will help you find out which scenarios are causing the slowdown, allowing you to remedy it quickly. It has already helped me many times.&lt;/p&gt;

&lt;p&gt;These were my three main principles for low-coding like a pro. Let me now turn to some best practices that can improve your Make quality of life even further.&lt;/p&gt;

&lt;h1&gt;
  
  
  Principle 4 - Quality of Life Hacks
&lt;/h1&gt;

&lt;p&gt;Here are some small quality of life improvements that I've learned over time. They can save you a bit of time here and there, and make you want to pull your hair out just slightly less frequently. &lt;/p&gt;

&lt;h4&gt;
  
  
  4.1 Make Search Tool
&lt;/h4&gt;

&lt;p&gt;My colleague &lt;a href="https://www.linkedin.com/in/dinalivia/" rel="noopener noreferrer"&gt;Dina Nogueira&lt;/a&gt; built a very cool and useful Make search tool. Consider how, in a code editor, you can do &lt;code&gt;Ctrl+F&lt;/code&gt; to find some string in your code. The search tool does exactly that, but for Make scenarios: it goes through all the scenarios, tries to find the string that you're searching for, and outputs a list of scenarios that reference this string. So whenever, let's say, you need to adjust all the scenarios that reference &lt;code&gt;car_ID&lt;/code&gt;, you can use a search tool like this, and it will produce a list for you with links you can click directly. The search tool is especially useful when making data model changes. As you are more likely than not exposing your database's schema directly when building backend processes with Make, the blast radius of changing the data model can be very big: all scenarios that rely on, say, a column that got renamed will break. The search tool allows you to fix things quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe84c54pncsf8pxrato3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe84c54pncsf8pxrato3.png" alt="Our FINN custom-built Integromat / Make search tool" width="800" height="441"&gt;&lt;/a&gt;Our FINN custom-built Integromat / Make search tool&lt;/p&gt;
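
&lt;p&gt;Under the hood, a tool like this boils down to grepping scenario blueprints via Make's REST API. The endpoints and parameters below are assumptions based on Make's public API documentation, so treat this as a sketch rather than a drop-in implementation:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json
import requests

# Assumed Make REST API base URL and token-based auth header;
# check Make's current API docs before relying on these.
BASE = "https://eu1.make.com/api/v2"
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}  # hypothetical token

def find_scenarios_referencing(needle, team_id):
    """Return the names of all scenarios whose blueprint mentions the string."""
    hits = []
    scenarios = requests.get(
        BASE + "/scenarios",
        headers=HEADERS,
        params={"teamId": team_id},
        timeout=30,
    ).json().get("scenarios", [])
    for scenario in scenarios:
        blueprint = requests.get(
            BASE + "/scenarios/{}/blueprint".format(scenario["id"]),
            headers=HEADERS,
            timeout=30,
        ).json()
        # Search the raw blueprint JSON for the string, e.g. "car_ID"
        if needle in json.dumps(blueprint):
            hits.append(scenario["name"])
    return hits
&lt;/code&gt;&lt;/pre&gt;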

&lt;h4&gt;
  
  
  4.2 Integromat (Make) DevTool
&lt;/h4&gt;

&lt;p&gt;Another useful thing is &lt;a href="https://chrome.google.com/webstore/detail/integromat-devtool/ainnemkhpnjgkhcdkfbhmlenkhehmfhi?hl=en-US" rel="noopener noreferrer"&gt;the Integromat (Make) DevTool&lt;/a&gt;, which is a Chrome browser extension. The DevTool gives you a more detailed view of the things that are actually happening in the background. For instance, sometimes some apps on Make give you unhelpful error messages; they will just tell you that some error occurred. However, with this browser extension installed, if you open the log and then the console, you can find more details about the exact response from the API that produced the error. Plus, you can do a bunch of other cool stuff. It's a pretty powerful tool. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8qailujbb8gwn50y5a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8qailujbb8gwn50y5a3.png" alt="The Integromat (Make) DevTool" width="800" height="501"&gt;&lt;/a&gt;The Integromat (Make) DevTool&lt;/p&gt;

&lt;p&gt;Use the DevTool carefully, though, because you can also screw some things up with it. But for debugging purposes, it can save you a lot of time. If you've found debugging Make scenarios cumbersome in the past, this is a useful quality of life hack.&lt;/p&gt;

&lt;h4&gt;
  
  
  4.3 Use a Template 😉
&lt;/h4&gt;

&lt;p&gt;Here’s a cool thing: I've &lt;a href="https://eu1.make.com/templates/10451?templatePublicId=10451" rel="noopener noreferrer"&gt;built a template&lt;/a&gt; that has all of the relevant base structure, and basically gives you many of the checks discussed in this article for free. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqro1o9g12d6bwsq476z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqro1o9g12d6bwsq476z.png" alt="The scenario template with basic checks and validation steps built-in" width="800" height="708"&gt;&lt;/a&gt;The scenario template with basic checks and validation steps built-in&lt;/p&gt;

&lt;p&gt;With the template, you can create a new scenario just as fast as you would from scratch: you click ‘clone’, add a new webhook, and make sure that the webhook setting includes ‘retrieve HTTP headers’. You then replace the dummy variables with your own variables, set the correct headers and methods, and add whatever additional logic you need. The result is a new scenario with all of that logic, checking, and validation for free. Trust me, this will save you a lot of time and pain in the future. &lt;/p&gt;

&lt;p&gt;–&lt;/p&gt;

&lt;p&gt;That’s it! I hope these Make principles and quality of life hacks will be useful to you. If you have questions, feel free to reach out or put them in the comments. Follow FINN for more Low Code content and events :) &lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;The reason you also want to pass the response code in the body of the response, in addition to sending the actual HTTP code, is due to a small particularity of how Make webhooks work. If a webhook is attached to an inactive scenario, you still get back a 200 response when calling it, even though the scenario itself did not run. From the perspective of the calling scenario, you obviously want to make sure that the scenario you called actually ran successfully (rather than it just being inactive). Using the filter &lt;code&gt;{{if(5.data.response_code = 200)}}&lt;/code&gt;, you can ensure that the called scenario actually ran before you proceed. This is especially important if the calling scenario relies on data passed back from the called scenario. It is the principle of validating inputs before proceeding, at work. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Tech recruitment - The good, the bad and the ugly</title>
      <dc:creator>nivbat1</dc:creator>
      <pubDate>Mon, 30 Jan 2023 14:34:02 +0000</pubDate>
      <link>https://forem.com/finnauto/tech-recruitment-the-good-the-bad-and-the-ugly-4je5</link>
      <guid>https://forem.com/finnauto/tech-recruitment-the-good-the-bad-and-the-ugly-4je5</guid>
      <description>&lt;p&gt;Everyone has an idea of what tech recruitment is. But much like how people only like photographs of themselves that they look good in, tech recruitment tends to vary from stakeholder to stakeholder and organisation to organisation. &lt;/p&gt;

&lt;p&gt;Let’s start at the very beginning: technical recruitment is the process of hiring technical talent for organisations. This process can include recruiting for roles such as engineers, developers and data scientists. Moreover, it can encompass internal recruitment teams, external service providers and individual headhunters, depending on the nature, urgency and level of the roles in question. &lt;/p&gt;

&lt;p&gt;Technical recruitment is different from generalist recruitment in many ways. The most obvious difference is that most people have an understanding of what an Accountant or Sales Manager does, but those same people might have quite a limited understanding of the difference between a Backend Developer and a UI Developer. Tech recruiters, on the other hand, while not being required to build and maintain technical applications, must at least have some knowledge of the technology stacks that they’re recruiting for, and know where people using those specific technologies tend to spend time online. As such, technical recruitment comes with a couple of challenges.&lt;/p&gt;

&lt;p&gt;Let's first take a look at some of these challenges: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills shortages:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the biggest challenges that technical recruiters face is the limited talent pool. Highly skilled technical candidates are rarely unemployed, and often highly irritated by constant communication attempts from both internal and external recruiters trying to garner their attention. (Evidence of this, you ask? Check out r/recruitinghell on Reddit.) Finding the perfect candidate is one thing. Convincing them to move to a different organisation is another Herculean task entirely.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The requirements are constantly changing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Recruitment requirements change as the needs of the business change. In tech recruitment, the pace of these changes happens at warp speed. It’s imperative that a good tech recruiter is agile and able to pivot along with these changes. Another key skill here is to be able to identify transferable skills. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The competition is incredibly fierce&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Gone are the days when organisations considered candidates lucky to be employed. Employer value propositions (or EVPs) are critical to finding and maintaining a top-tier technical workforce.&lt;/p&gt;

&lt;p&gt;Given all these challenges, getting started in technical recruitment may sound like a daunting task. But fear not! There are some things that you can do to strengthen your approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So how do you get started as a Tech Recruiter?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are many paths into tech recruitment but for the most part, I recommend the following as a very basic pathway into this incredibly rewarding environment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Educate yourself. No varsity? No problem. There are loads of resources available on YouTube, LinkedIn and Reddit. &lt;/li&gt;
&lt;li&gt;Perfect your non-technical skills. Recruitment is ultimately about people, so you need to know how to connect with them.&lt;/li&gt;
&lt;li&gt;Know your tools, and be open to new ones!&lt;/li&gt;
&lt;li&gt;Learn how to take, and act upon, criticism. Feedback is often ruthless, but don't take it personally, and instead grow from it. &lt;/li&gt;
&lt;li&gt;Keep your stakeholders at the forefront of everything you do. Both candidates and hiring managers respond to great experiences; it’s your job to provide those for them!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sounds tough, right? It is. But being a technical recruiter can be an incredibly rewarding career choice. Firstly, as a technical recruiter, you get to understand how and why technologies are built from the ground up, and to stay on top of tech trends. But, at least in my humble opinion, by far the most rewarding element of recruitment, technical or otherwise, is that you get to make a genuine impact on people's lives.&lt;/p&gt;

&lt;p&gt;I can only speak for myself when I say that placing a candidate in a job that you know is going to make them happy, make their situation easier, or will improve their life in some way, is absolute magic! &lt;/p&gt;

&lt;p&gt;One last thing: we’re #hiring!!&lt;/p&gt;

&lt;p&gt;Check out our open positions &lt;a href="https://www.finn.com/jobs/de-DE/careers#positions"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
