<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Michael Dombrowski</title>
    <description>The latest articles on Forem by Michael Dombrowski (@mikedombo).</description>
    <link>https://forem.com/mikedombo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1079823%2F5ac73b4a-33f1-4277-a87a-8350bc7a826c.jpeg</url>
      <title>Forem: Michael Dombrowski</title>
      <link>https://forem.com/mikedombo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mikedombo"/>
    <language>en</language>
    <item>
      <title>Queues don't make things faster (except when they do)</title>
      <dc:creator>Michael Dombrowski</dc:creator>
      <pubDate>Thu, 28 Sep 2023 18:04:34 +0000</pubDate>
      <link>https://forem.com/iotbuilders/queues-dont-make-things-faster-except-when-they-do-4mm1</link>
      <guid>https://forem.com/iotbuilders/queues-dont-make-things-faster-except-when-they-do-4mm1</guid>
      <description>&lt;h2&gt;
  
  
  From cloud to edge
&lt;/h2&gt;

&lt;p&gt;You may be familiar with various queue-like products including AWS SQS, Apache Kafka, and Redis. These technologies are at home in the datacenter, where they're used to reliably and quickly hold and deliver events for processing. There, the consumers of the queue can often scale based on the queue size to process a backlog of events more quickly. AWS Lambda, for example, will spawn new instances of the function to handle events and keep the queue from growing too large.&lt;/p&gt;

&lt;p&gt;The world outside the datacenter is quite different though. Queue consumers cannot simply autoscale to handle increased load because the physical hardware is limited.&lt;/p&gt;

&lt;p&gt;Making a queue bigger does not increase your system's transaction rate unless you can scale the processing resources based on the queue size. When processing resources are not scalable, such as within a single physical device, increasing the queue size will not help that device process transactions any more quickly than a smaller queue would.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where queues don't help
&lt;/h2&gt;

&lt;p&gt;Something I've seen several times when working with customers using AWS IoT Greengrass is that they'll see a log that says something to the effect of "queue is full, dropping this input", and their first instinct is to just make the queue bigger. Making the queue bigger may avoid the error, but only for so long if the underlying cause of the full queue is not addressed. If your system has a relatively constant transaction rate (measured in transactions per second, or TPS), then the queue will always fill up and overflow whenever the TPS going into the queue is higher than the TPS going out of the queue. If the queue capacity is enormous then the overflow may take quite a long time to be reached, but ultimately it will overflow because &lt;code&gt;TPS in &amp;gt; TPS out&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's now make this more concrete. If we have a lambda function running on an AWS IoT Greengrass device, then that lambda will pick up events from a queue and process them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o9XWZEYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxfpz0a6mn8w1taczi8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o9XWZEYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxfpz0a6mn8w1taczi8k.png" alt="Happy queue" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's say that the lambda can complete work at a rate of 1 TPS. If new events are added to this lambda's queue at less than or equal to 1 TPS then everything will be fine. If work comes in at 10 TPS though, then the queue is going to overflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x60AJ5jz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82d54lhm6wofhrtzkkfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x60AJ5jz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82d54lhm6wofhrtzkkfm.png" alt="Overflowing queue" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assume that the lambda has a queue capacity of 100 events. Events are added to the queue at 10 TPS, which means it will fill up and then start overflowing after about 11 seconds &lt;code&gt;(100 capacity / (10 TPS in - 1 TPS out) = 11.1 s)&lt;/code&gt;. We can make the capacity bigger, but that only extends the time to overflow; it does not prevent the overflow from happening. Fundamentally, the lambda is unable to keep up with the amount of work because 1 TPS is less than 10 TPS.&lt;/p&gt;
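
&lt;p&gt;The time-to-overflow arithmetic generalizes nicely. Here is a minimal sketch of it in JavaScript (&lt;code&gt;secondsToOverflow&lt;/code&gt; is a hypothetical helper for illustration, not any AWS API):&lt;/p&gt;

```javascript
// Time for a fixed-capacity queue to overflow when input outpaces output.
// A sketch of the article's arithmetic, not an AWS API.
function secondsToOverflow(capacity, tpsIn, tpsOut) {
    const netTps = tpsIn - tpsOut;
    if (netTps > 0) {
        return capacity / netTps;
    }
    return Infinity; // the queue drains at least as fast as it fills
}

console.log(secondsToOverflow(100, 10, 1));  // ~11.1 s, as in the example
console.log(secondsToOverflow(1000, 10, 1)); // ~111.1 s: 10x the capacity only buys 10x the time
```

&lt;p&gt;Growing the capacity delays the overflow linearly, but it never changes the outcome while the net rate stays positive.&lt;/p&gt;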

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TERful6b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uqp2knqq08mo7ukcwfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TERful6b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uqp2knqq08mo7ukcwfc.png" alt="Bigger queue, still overflowing" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now maybe you're thinking that scaling the lambdas should fix the problem: "we just need 10 lambdas working at 1 TPS each and the problem is solved". If you could perfectly scale to 10 instances, then yes, this level of load would be handled, but remember that AWS IoT Greengrass and these lambdas are running on a single physical device. That single device only has so much compute power; perhaps you can scale to 5 TPS with 5 or 6 lambda instances, but then you hit a brick wall because of the hardware limits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oyYnskM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpnh5p4dtjcx11lagi04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oyYnskM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpnh5p4dtjcx11lagi04.png" alt="Consumer scaling limits" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So what can be done at this point? Perhaps the lambda can be optimized to process more quickly, but let's say that it is as good as it gets. If the lambda cannot be optimized, then the only options are to accept that the queue will overflow and drop events, or to find a way to slow down the inputs to the queue.&lt;/p&gt;

&lt;h2&gt;
  
  
  What good are queues then?
&lt;/h2&gt;

&lt;p&gt;You may now think that queues are good for nothing, but of course queues exist for a reason; you just need to understand which problems they can and cannot help with.&lt;/p&gt;

&lt;p&gt;If the consumer of the queue can scale up the compute resources, such as AWS Lambda (lambda in the cloud, not on AWS IoT Greengrass) with AWS SQS, then a queue certainly makes sense and will help to process the events quickly.&lt;/p&gt;

&lt;p&gt;On a single device, queues can help with bursty traffic. If your traffic is steady like in the example above, then queues won't help you. On the other hand, if you sometimes have 10 TPS and other times have 0 TPS input, then a queue (and even a large queue) can make sense.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aJn7km67--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cs0jbimh6ew0odu5osvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aJn7km67--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cs0jbimh6ew0odu5osvl.png" alt="Burst of traffic with plenty of room in the queue" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Going back to the example from above, our lambda can process at 1 TPS. Let's say that our input is now very bursty: 10 TPS for 20 seconds and then 0 TPS for 200 seconds. The queue receives 200 events during the 20 second burst while draining 20 of them, so it peaks at 180 queued events, and then drains to 0 during the 200 second quiet period since no data is coming in and data is flowing out at 1 TPS. If the queue size were 100 like in the earlier example, then the queue would have overflowed and we'd lose events even though the lambda could eventually have processed them had the queue been large enough. So in this case, making the queue capacity at least 200 is reasonable and should prevent overflow for this traffic pattern.&lt;/p&gt;
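
&lt;p&gt;The sizing arithmetic for bursts can be sketched the same way (a hypothetical helper, not an AWS API). The consumer keeps draining during the burst, so the backlog peaks somewhat below the 200 events received, and a capacity of 200 leaves headroom:&lt;/p&gt;

```javascript
// Peak backlog during a burst: the queue grows at (in - out) TPS for the
// duration of the burst while the consumer keeps draining.
function peakBacklog(burstTps, drainTps, burstSeconds) {
    return Math.max(0, (burstTps - drainTps) * burstSeconds);
}

// 10 TPS arriving for 20 s against a 1 TPS consumer:
console.log(peakBacklog(10, 1, 20)); // 180 queued events at the peak
```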

&lt;p&gt;To summarize, if &lt;code&gt;average TPS input &amp;gt; average TPS output&lt;/code&gt; then the queue is going to overflow eventually, and it does not matter how big you make the queue. The only options are to 1. increase the output TPS, 2. decrease the input TPS, or 3. accept that you will drop events. When your input TPS is relatively constant, keep the queue size small; this is more memory efficient and will surface overflow errors sooner than a larger queue would. Finding problems like this early encourages you to understand the traffic pattern and processing transaction rate so you can choose one of the 3 options for dealing with overflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application to Greengrass
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lambda
&lt;/h3&gt;

&lt;p&gt;In this post I used lambda as an example, so how about some specific recommendations for configuring a lambda's max queue size?   &lt;/p&gt;

&lt;p&gt;For a pinned lambda which will not scale based on load, start with a queue size of 10 or less. If you're able to calculate the expected incoming TPS and traffic pattern (steady or bursty) then you can change the queue size based on that data. I would not recommend going beyond perhaps 100-500. If your queue is still overflowing at those sizes then you probably need to find another solution instead of just increasing the size.&lt;/p&gt;

&lt;p&gt;For on-demand lambdas, which do scale based on load, I'd recommend starting with a queue size of 2x the number of worker lambdas that you want to have. This way, each worker effectively has its own mini-queue of 2 items. The same recommendations from above apply here too if you understand your traffic pattern and can calculate the optimal queue size.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stream Manager
&lt;/h3&gt;

&lt;p&gt;Stream Manager is a Greengrass component which accepts data locally and then (optionally) exports it to various cloud services. It is effectively a queue connecting the local device to cloud services, where those cloud services are the consumers of the queue. Since it is a queue, exactly the same logic applies to it. If data is written faster than it is exported to the cloud, then the queue will eventually overflow, and in this case some data would be removed from the queue before being exported. It is therefore very important to understand how quickly data enters a stream and how quickly it can be exported given the cloud service limits and your internet connection.&lt;/p&gt;

&lt;h3&gt;
  
  
  MQTT Publish to IoT Core
&lt;/h3&gt;

&lt;p&gt;When publishing from Greengrass to AWS IoT Core, all MQTT messages are queued in what's called the "spooler". The spooler may store messages either in memory or on disk depending on your configuration. It is a queue with a configurable maximum size, so the same logic that applies to all queues applies to the spooler as well. AWS IoT Core limits each connection to a maximum of 100 TPS of publishes, so if you attempt to publish faster than 100 TPS through Greengrass, the spooler will inevitably fill up and reject some messages. To resolve this, you'd need to publish more slowly.&lt;/p&gt;
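
&lt;p&gt;If you control the publisher, one way to "publish more slowly" is to pace messages so they stay under the limit. Here is a rough sketch (&lt;code&gt;nextPublishSlot&lt;/code&gt; is a hypothetical helper, not part of any Greengrass SDK):&lt;/p&gt;

```javascript
// Compute when the next publish may go out so that the long-run rate stays
// at or under maxTps. prevSlotMs is the previously scheduled slot time.
function nextPublishSlot(prevSlotMs, nowMs, maxTps) {
    const intervalMs = 1000 / maxTps;
    return Math.max(prevSlotMs + intervalMs, nowMs);
}

// At 100 TPS the publisher gets one slot every 10 ms; a publisher that has
// fallen behind simply resumes at "now" instead of bursting to catch up.
console.log(nextPublishSlot(0, 0, 100));   // 10
console.log(nextPublishSlot(10, 0, 100));  // 20
console.log(nextPublishSlot(0, 500, 100)); // 500
```

&lt;p&gt;The caller would &lt;code&gt;setTimeout&lt;/code&gt; until the returned slot before each publish.&lt;/p&gt;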

&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;p&gt;For a deeper understanding of queuing, see the following resources.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Queuing_Rule_of_Thumb"&gt;Wikipedia - Queuing Rule of Thumb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.danslimmon.com/2016/08/26/the-most-important-thing-to-understand-about-queues/"&gt;Dan Slimmon - The most important thing to understand about queues&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>greengrass</category>
      <category>iot</category>
      <category>aws</category>
      <category>queue</category>
    </item>
    <item>
      <title>Managing Per-Device Configuration with an AWS IoT Greengrass Fleet</title>
      <dc:creator>Michael Dombrowski</dc:creator>
      <pubDate>Thu, 01 Jun 2023 22:41:26 +0000</pubDate>
      <link>https://forem.com/iotbuilders/managing-per-device-configuration-with-an-aws-iot-greengrass-fleet-4obk</link>
      <guid>https://forem.com/iotbuilders/managing-per-device-configuration-with-an-aws-iot-greengrass-fleet-4obk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;One of the best parts of working on AWS IoT Greengrass is the opportunity to talk to our customers and see all the varied use cases they have. Through these discussions we’ve identified some common patterns that I will share here with the wider IoT builder community.&lt;/p&gt;

&lt;p&gt;In this first article, I’ll cover a couple of approaches to manage a fleet of many devices using AWS IoT Thing Groups, but with per-device configuration.&lt;/p&gt;

&lt;p&gt;Imagine that you are building a solution for ACME Corporation’s factories to collect sensor readings and upload them into AWS IoT Core using MQTT. There are 10 factories around the world; each factory has 10 lines, and each line has 10 cells. In this example, there is 1 Greengrass core device per cell, which means that you need to manage 10x10x10 = 1,000 Greengrass devices. Greengrass allows you to deploy to a single device or to a group of devices in an AWS IoT Thing Group. You could manage all 1,000 devices individually, but that may be an unreasonable operational burden, so you instead want to manage the 1,000 devices as a group.&lt;/p&gt;

&lt;p&gt;Now that you’ve decided to manage the devices as a single Thing Group, there is a problem: your solution requires that each device has some unique configuration, such as 1. which factory it is in, 2. which line, and 3. which workcell. These configurations are unlikely to ever change, and if they do, it will be at a very low frequency (less than once per day, for example). Other solutions would be more appropriate for high frequency configuration changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;To solve the problem of treating all the devices as a group while still needing unique configuration, I suggest creating a “configuration holder” Greengrass component such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
RecipeFormatVersion: "2020-01-25"
ComponentName: "ConfigHolder"
ComponentVersion: "1.0.0"
ComponentType: "aws.greengrass.generic"
ComponentDescription: "Holds configuration, does nothing."
ComponentPublisher: "ACME Corp"
ComponentConfiguration:
  DefaultConfiguration: {}
Manifests:
- Lifecycle: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This component does absolutely nothing on its own; it only exists to hold the per-device configuration that you need in your solution. I will show two different ways to use this component: the first uses a one-time per-device deployment, and the second configures the component during Greengrass installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  One-time Per-device Deployment
&lt;/h2&gt;

&lt;p&gt;As mentioned before, you can deploy to Greengrass devices either individually or to a group of devices. With this solution, you will deploy the ConfigHolder component with the unique device configuration to the specific individual device that needs that configuration. At the same time, you will have a different Greengrass deployment to a group of devices which deploys your business logic components. Your business logic components will depend on the configuration holder deployment to provide the necessary unique configuration. Any shared configuration may go into the business logic components.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a deployment targeting the individual device&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the ConfigHolder component - Important: Add only the ConfigHolder here and nothing else&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3iyqzjo601ink7mbscy6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3iyqzjo601ink7mbscy6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the ConfigHolder component with the unique configuration&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcevrnzqn7c15nfhw837b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcevrnzqn7c15nfhw837b.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy this deployment and wait for it to complete successfully&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your device now has the ConfigHolder component properly configured with its unique configuration. Now we need to talk about how to actually use this configuration on the device.&lt;/p&gt;

&lt;p&gt;As an example, I have created a business logic component helpfully named BusinessLogic as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
RecipeFormatVersion: "2020-01-25"
ComponentName: "BusinessLogic"
ComponentVersion: "1.0.0"
ComponentType: "aws.greengrass.generic"
ComponentDescription: "Does the work"
ComponentPublisher: "ACME Corp"
ComponentConfiguration:
  DefaultConfiguration: {}
ComponentDependencies:
  ConfigHolder:
    VersionRequirement: ^1.0.0
Manifests:
- Lifecycle:
    Run: &amp;gt;-
      echo "Running with config from holder: {ConfigHolder:configuration:/workcell}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This component has a dependency on the ConfigHolder component and uses interpolation to extract the unique configuration and use it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: In this example, BusinessLogic is deployed to the Greengrass core device via a group deployment and ConfigHolder is deployed via an individual deployment. If the Greengrass core device is already in the thing group, the thing group deployment may execute before the individual deployment which means that BusinessLogic will execute when ConfigHolder hasn’t been configured.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are 2 ways to address this issue. The conceptually simplest is to ensure that the individual deployment completes before adding the device into the thing group, so that the thing group deployment will not execute until ConfigHolder is already configured. The more robust way is to add validation logic to your business logic which checks whether the configuration is present and makes sense. If it does not make sense, your business logic should not execute, because it doesn’t have all the information it needs to execute correctly. Make sure that your business logic component does not exit with an error, because this would make the thing group deployment fail and roll back (if configured to roll back), and you would then need to manually retry the deployment. When ConfigHolder is later deployed with the configuration, the BusinessLogic component will be restarted because the configuration interpolated into its run script has changed.&lt;/p&gt;
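
&lt;p&gt;The validation logic can be as simple as checking that every required field is present and non-empty before starting the real work. A minimal sketch in NodeJS (the field names follow this article’s example; the check itself is an assumption, not a Greengrass API):&lt;/p&gt;

```javascript
// Return true only when every required field is a non-empty string.
// Field names (factory, line, workcell) follow the article's example.
function isConfigUsable(config) {
    if (!config) {
        return false;
    }
    return ["factory", "line", "workcell"].every(function (key) {
        return typeof config[key] === "string" ? config[key].length > 0 : false;
    });
}

console.log(isConfigUsable({ factory: "Madison", line: "1", workcell: "weld" })); // true
console.log(isConfigUsable({})); // false: wait and retry rather than exiting with an error
```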

&lt;p&gt;Individual device deployments use the IoT Thing’s Shadow to send the deployment information, so you will be charged for the Shadow usage required to execute the deployment on each of your devices. You can avoid this cost, and the deployment ordering issue mentioned above, with the second way to use ConfigHolder: configuring it when installing Greengrass.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure ConfigHolder on Installation
&lt;/h2&gt;

&lt;p&gt;With this approach, you provide an initial configuration file to Greengrass during installation which contains the ConfigHolder component and the desired unique configuration. This approach is not mutually exclusive with the individual device deployment described above; you may want to use this approach at first and then update the unique configuration over time using individual device deployments.&lt;/p&gt;

&lt;p&gt;You may already use the initial configuration file during installation for port, proxy, or provisioning settings, but if not, don’t worry; it is quite simple. When installing Greengrass, add the command line option &lt;code&gt;--init-config initial-config.yaml&lt;/code&gt;. This option can be combined with other options that you’re using, such as &lt;code&gt;--provision&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I will create a file: &lt;code&gt;initial-config.yaml&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  ConfigHolder:
    componentType: "GENERIC"
    configuration:
      factory: "Madison"
      line: "1"
      workcell: "weld"
    dependencies: []
    lifecycle: {}
    version: "1.0.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I describe the ConfigHolder component that you’ve seen before along with the same configuration as before. When Greengrass finishes the installation, it will have this component as part of its configuration. It can then receive the thing group deployment and BusinessLogic will be able to pick up the configuration as before.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration over IPC
&lt;/h2&gt;

&lt;p&gt;In BusinessLogic, I’ve shown how configuration is interpolated into the recipe and that will work for any component. Now though, I will show how to use the configuration in a more “native” way by using Greengrass IPC to read configuration and subscribe to configuration changes in order to react without restarting the business logic component and without needing to interpolate the configuration into the recipe. I will also take this opportunity to show off a component using NodeJS as we now have a developer preview version of &lt;a href="https://github.com/aws/aws-iot-device-sdk-js-v2/tree/main/samples/node/gg_ipc" rel="noopener noreferrer"&gt;Greengrass IPC for Node&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the code below I use the developer preview version of Greengrass IPC for NodeJS in order to &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the configuration from ConfigHolder&lt;/li&gt;
&lt;li&gt;Subscribe to changes in configuration in ConfigHolder&lt;/li&gt;
&lt;li&gt;Get the configuration from ConfigHolder again if any of it ever changes
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { greengrasscoreipc } from 'aws-iot-device-sdk-v2';

const CONFIG_COMPONENT = "ConfigHolder";

async function main() {
    try {
        let client = greengrasscoreipc.createClient();

        await client.connect();

        const config = await client.getConfiguration({ componentName: CONFIG_COMPONENT, keyPath: [] });
        console.log("Got initial config", JSON.stringify(config.value));
        console.log("Subscribing to config changes");

        // Setup subscription handle
        const subscription_handle = client.subscribeToConfigurationUpdate({ componentName: CONFIG_COMPONENT, keyPath: [] });
        // Setup listener for config change events
        subscription_handle.on("message", async (event) =&amp;gt; {
            console.log("Config changed, will pull full new config immediately", JSON.stringify(event.configurationUpdateEvent?.keyPath));

            const config = await client.getConfiguration({ componentName: CONFIG_COMPONENT, keyPath: [] });
            console.log("Got new full config", JSON.stringify(config.value));
        });

        // Perform the subscription
        await subscription_handle.activate();
        console.log("Subscribed to config changes");
    } catch (err) {
        console.log("Aw shucks: ", err);
    }
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full component code is available &lt;a href="https://github.com/aws-greengrass/aws-greengrass-component-examples/tree/main/blog-config-node" rel="noopener noreferrer"&gt;here&lt;/a&gt;, see the readme in the repository for instructions to build and publish the component into your account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I’ve covered multiple ways that you can use two components to have a mix of per-device and fleet-wide configurations. I hope that this blog is helpful to builders who want to simplify Greengrass fleet management while still allowing for per-device configurations as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/MikeDombo" rel="noopener noreferrer"&gt;Follow me on GitHub&lt;/a&gt; and I look forward to your comments and suggestions for future topics in the comments!&lt;/p&gt;

</description>
      <category>greengrass</category>
      <category>iot</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
