<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Osvaldo Brignoni</title>
    <description>The latest articles on Forem by Osvaldo Brignoni (@brignoni).</description>
    <link>https://forem.com/brignoni</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1023416%2Fd355cb8f-ed00-4b6f-acb8-76a6748848d2.jpeg</url>
      <title>Forem: Osvaldo Brignoni</title>
      <link>https://forem.com/brignoni</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/brignoni"/>
    <language>en</language>
    <item>
      <title>Import API data to Google Sheets</title>
      <dc:creator>Osvaldo Brignoni</dc:creator>
      <pubDate>Sat, 25 Feb 2023 23:57:06 +0000</pubDate>
      <link>https://forem.com/brignoni/import-api-data-to-google-sheets-13j5</link>
      <guid>https://forem.com/brignoni/import-api-data-to-google-sheets-13j5</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Vz3DoKSy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t2nj99xnld6f0t323mw6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vz3DoKSy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t2nj99xnld6f0t323mw6.jpg" alt="A cat using Google Sheets" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple use case
&lt;/h2&gt;

&lt;p&gt;I like using low-code tools to provide simple solutions. I will show you how to import data from any API directly into Google Sheets. This can be useful in a wide variety of scenarios and will work with almost any API that returns data in JSON format.&lt;/p&gt;

&lt;p&gt;In the first example, I will import stories from the Hashnode GraphQL API and explain how it works. It is free to use and does not require an authorization key. After that, I will show you three additional examples. You can be creative with this technique and develop your own formulas; the possibilities are endless.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Importing stories from the Hashnode API&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's take a look at the first example. As soon as we visit the spreadsheet, the formula in cell A7 uses the IMPORTJSONAPI function to query the Hashnode API and populate the rows with the results. Notice the Auto-refresh option in the following screenshot. When this option is enabled, the function makes a new request each time you revisit or edit the spreadsheet, so you will always get the latest Hashnode stories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZrPRGnEy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679092546892/37beb3f6-7c74-4824-8632-3ea43ddff476.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZrPRGnEy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679092546892/37beb3f6-7c74-4824-8632-3ea43ddff476.png" alt="Import Hashnode API" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This formula is made easy by an open-source Google Apps Script called IMPORTJSONAPI. It provides a custom function that selectively extracts data from an API in a tabular format suitable for importing into a Google Sheets spreadsheet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Access your free Google Sheets account at &lt;a href="https://docs.google.com/spreadsheets"&gt;https://docs.google.com/spreadsheets&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open Extensions &amp;gt; Apps Script from the top menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a blank script file called IMPORTJSONAPI.gs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy and paste the entire contents of the IMPORTJSONAPI.gs file from the GitHub repository, found &lt;a href="https://raw.githubusercontent.com/qeet/IMPORTJSONAPI/master/IMPORTJSONAPI.gs"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You should now be able to use the &lt;strong&gt;=IMPORTJSONAPI()&lt;/strong&gt; function in your sheet.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  About Google Sheets
&lt;/h2&gt;

&lt;p&gt;You are probably already familiar with Google Sheets. Still, it is worth going over what makes it such a popular tool and how you can extend it with Google Apps Script.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is Google Sheets so popular?
&lt;/h3&gt;

&lt;p&gt;Its popularity stems from its user-friendly interface, cloud-based accessibility, and cost-effectiveness, as it is a free tool. Additionally, it offers collaborative features that enable multiple users to work on the same sheet simultaneously from anywhere in the world, making it easy for team members to collaborate on a project. Users can also customize their sheets with various formatting options, charts, and formulas to analyze data and make informed decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Google Apps Script?
&lt;/h3&gt;

&lt;p&gt;Google Apps Script is a JavaScript-based scripting platform that allows you to extend and automate various Google Workspace applications, including Google Sheets, Docs, Forms, and more. Some of the benefits of using Google Apps Script include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automating&lt;/strong&gt; repetitive tasks, such as data entry or report generation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customizing&lt;/strong&gt; your Google Workspace applications to fit your specific needs. For example, you can create custom functions or menus to simplify your workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrating&lt;/strong&gt; seamlessly with other Google services, allowing you to easily access and manipulate data from different sources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaborating&lt;/strong&gt; with multiple users on the same project simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developing&lt;/strong&gt; powerful scripts with a wide range of capabilities, including sending emails, creating Google Calendar events, accessing Google Drive files, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Free to use&lt;/strong&gt;, making it a cost-effective solution for individuals and businesses that need to automate and extend their Google Workspace applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
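As a minimal sketch of how custom functions work, any top-level function you paste into Apps Script becomes callable as a spreadsheet formula. The function name here is hypothetical, chosen only for illustration:

```javascript
// A hypothetical custom function: paste it into Extensions > Apps Script,
// then call it from any cell as =DOUBLEVALUE(A1).
// Apps Script exposes every top-level function as a spreadsheet formula.
function DOUBLEVALUE(value) {
  return value * 2;
}
```

This is exactly the mechanism IMPORTJSONAPI relies on: it is just a (much larger) top-level function in a script file.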

&lt;h2&gt;
  
  
  A closer look at the formula
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=IMPORTJSONAPI(
    "https://api.hashnode.com/graphql",
    "$.data.storiesFeed[*]",
    "title, author.username, slug",
    "method=post",
    "payload={
        'query': '{
            storiesFeed(type: FEATURED) { 
                title
                slug
                author {
                    username
                }
            }
        }'
    }",
    "contentType=application/json",
    'Auto-refresh'!$B$1
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you read this formula, the purpose of each argument should be easy to understand. The required arguments are the URL, the JSONPath query, and the columns; the rest are optional.&lt;/p&gt;

&lt;h3&gt;
  
  
  IMPORTJSONAPI Function Arguments
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Parameter&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;URL&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;The URL endpoint of the API.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;JSONPath query&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;JSONPath query expression.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Columns&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Comma-separated list of column path expressions.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Parameters&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;An optional list of request parameters.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;URL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used the GraphQL API URL &lt;strong&gt;&lt;a href="https://api.hashnode.com/graphql"&gt;https://api.hashnode.com/graphql&lt;/a&gt;&lt;/strong&gt;. You can use any API that returns data in JSON format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JSONPath query&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JSONPath allows you to write expressions to selectively extract data from JSON objects. I used the &lt;strong&gt;$.data.storiesFeed[*]&lt;/strong&gt; expression to extract the array of stories from the response. Each JSON object in the stories feed will become a row in the spreadsheet. To learn more about how to use JSONPath, you can take a look at the article written by Stefan Goessner, referenced at the end of this article.&lt;/p&gt;
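To make the extraction concrete, here is a plain-JavaScript sketch (not IMPORTJSONAPI's actual implementation) of what the JSONPath query and column list accomplish together: the query selects each story object, and each column path is read off that object to build one spreadsheet row. The sample data is abbreviated from the response shown later.

```javascript
// "$.data.storiesFeed[*]" selects each story object from the response;
// each column path ("title", "author.username", "slug") is then resolved
// against that object to produce one cell.
const response = {
  data: {
    storiesFeed: [
      { title: "Hello", slug: "hello", author: { username: "alice" } },
      { title: "World", slug: "world", author: { username: "bob" } },
    ],
  },
};

// Resolve a dotted path like "author.username" against an object.
function getPath(obj, path) {
  return path.split(".").reduce((o, key) => (o == null ? o : o[key]), obj);
}

// One row per matched object, columns populated left to right.
const rows = response.data.storiesFeed.map((story) =>
  ["title", "author.username", "slug"].map((col) => getPath(story, col))
);
// rows → [["Hello", "alice", "hello"], ["World", "bob", "world"]]
```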

&lt;p&gt;&lt;strong&gt;Columns&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I selected the &lt;strong&gt;title, username and slug&lt;/strong&gt; from each story object. The columns will be populated left to right in that order, starting from the formula's cell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parameters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The rest of the parameters are optional but may be necessary for the request to work properly. For example, this GraphQL request requires that we include the query with the request body (payload). The request method and headers are additional parameters you can use. Some APIs may require an Authorization header.&lt;/p&gt;
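For a sense of where these optional parameters end up, here is a sketch of how they might map onto the HTTP request. Inside Apps Script this roughly corresponds to the options object passed to &lt;strong&gt;UrlFetchApp.fetch(url, options)&lt;/strong&gt;; the exact mapping inside IMPORTJSONAPI is an assumption here, not taken from its source.

```javascript
// Assemble an Apps Script-style fetch options object from the formula's
// optional parameters. This mirrors, as an assumption, what a wrapper
// like IMPORTJSONAPI would hand to UrlFetchApp.fetch(url, options).
function buildFetchOptions(method, payload, contentType, headers) {
  return {
    method: method,                   // from "method=post"
    contentType: contentType,         // from "contentType=application/json"
    headers: headers || {},           // e.g. an Authorization header
    payload: JSON.stringify(payload), // the request body, here a GraphQL query
  };
}

const options = buildFetchOptions(
  "post",
  { query: "{ storiesFeed(type: FEATURED) { title slug author { username } } }" },
  "application/json"
);
```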

&lt;p&gt;&lt;strong&gt;Hashnode stories feed response&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "data": {
    "storiesFeed": [
      {
        "title": "On-Demand Code Review With ChatGPT",
        "slug": "on-demand-code-review-with-chatgpt",
        "author": {
          "username": "DamoGirling",
          "name": "NearForm"
        }
      },
      {
        "title": "React-ing to TypeScript",
        "slug": "react-ing-to-typescript",
        "author": {
          "username": "TreciaKS",
          "name": "Trecia Kat "
        }
      },
      {
        "title": "Next.js and Rust: An Innovative Approach to Full-Stack Development",
        "slug": "nextjs-and-rust-an-innovative-approach-to-full-stack-development",
        "author": {
          "username": "joshuamo",
          "name": "Josh Mo"
        }
      }
    ]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Other Examples
&lt;/h2&gt;

&lt;p&gt;I explained how importing works by walking through the Hashnode API example. Now I will show you three additional examples with some minor differences.&lt;/p&gt;

&lt;p&gt;First, let's take a look at the Serply REST API. It allows us to perform various kinds of web searches and gather market data. In the following example, I used the Serply News API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Search Google news with Serply API
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=IMPORTJSONAPI(
    CONCATENATE("https://api.serply.io/v1/news/q=", ENCODEURL(B3)),
    "$.entries[*]", 
    "title, link, source.title",
    CONCATENATE("headers={
        'X-Api-Key': '", B4, "'
    }"),
    'Auto-refresh'!$B$1
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, this example requires an X-Api-Key header. Instead of hardcoding the API key as part of the argument, I concatenated it by referencing cell B4. The following screenshot shows the news populated in the spreadsheet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6aHFQIP9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679088640538/93fd6966-74eb-4ff1-8bf6-3e8d3cd6fbf2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6aHFQIP9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679088640538/93fd6966-74eb-4ff1-8bf6-3e8d3cd6fbf2.jpeg" alt="Import Serply Google news" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am referencing the value from cell B3 for the search term. CONCATENATE and ENCODEURL are built-in functions in Google Sheets.&lt;/p&gt;
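The URL construction can be expressed in plain JavaScript as well. ENCODEURL behaves like &lt;strong&gt;encodeURIComponent&lt;/strong&gt;: it percent-escapes characters such as spaces so the search term is safe inside a query string.

```javascript
// Plain-JavaScript equivalent of CONCATENATE + ENCODEURL:
// append the percent-encoded search term to the API endpoint.
function buildSearchUrl(base, term) {
  return base + encodeURIComponent(term);
}

const url = buildSearchUrl("https://api.serply.io/v1/news/q=", "stock market");
// url → "https://api.serply.io/v1/news/q=stock%20market"
```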

&lt;p&gt;&lt;strong&gt;Serply API news response&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "feed": {
        "generator_detail": {
            "name": "NFE/5.0"
        },
        "generator": "NFE/5.0",
        "title": "\"stock market\" - Google News",
        "title_detail": {
            "type": "text/plain",
            "language": null,
            "base": "",
            "value": "\"stock market\" - Google News"
        },
        "links": [
            {
                "rel": "alternate",
                "type": "text/html",
                "href": "https://news.google.com/search?q=stock+market&amp;amp;hl=en-US&amp;amp;gl=US&amp;amp;ceid=US:en"
            }
        ],
        "link": "https://news.google.com/search?q=stock+market&amp;amp;hl=en-US&amp;amp;gl=US&amp;amp;ceid=US:en",
        "language": "en-US",
        "publisher": "news-webmaster@google.com",
        "publisher_detail": {
            "email": "news-webmaster@google.com"
        },
        "rights": "2023 Google Inc.",
        "rights_detail": {
            "type": "text/plain",
            "language": null,
            "base": "",
            "value": "2023 Google Inc."
        },
        "updated": "Sun, 12 Mar 2023 00:59:28 GMT",
        "updated_parsed": [
            2023,
            3,
            12,
            0,
            59,
            28,
            6,
            71,
            0
        ],
        "subtitle": "Google News",
        "subtitle_detail": {
            "type": "text/html",
            "language": null,
            "base": "",
            "value": "Google News"
        }
    },
    "entries": [
        {
            "title": "Dow closes more than 300 points lower, posts worst week since June as Silicon Valley Bank collapse sparks selloff: Live updates - CNBC",
            "title_detail": {
                "type": "text/plain",
                "language": null,
                "base": "",
                "value": "Dow closes more than 300 points lower, posts worst week since June as Silicon Valley Bank collapse sparks selloff: Live updates - CNBC"
            },
            "links": [
                {
                    "rel": "alternate",
                    "type": "text/html",
                    "href": "https://news.google.com/rss/articles/CBMiRGh0dHBzOi8vd3d3LmNuYmMuY29tLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s0gFIaHR0cHM6Ly93d3cuY25iYy5jb20vYW1wLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s?oc=5"
                }
            ],
            "link": "https://news.google.com/rss/articles/CBMiRGh0dHBzOi8vd3d3LmNuYmMuY29tLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s0gFIaHR0cHM6Ly93d3cuY25iYy5jb20vYW1wLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s?oc=5",
            "id": "CBMiRGh0dHBzOi8vd3d3LmNuYmMuY29tLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s0gFIaHR0cHM6Ly93d3cuY25iYy5jb20vYW1wLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s",
            "guidislink": false,
            "published": "Fri, 10 Mar 2023 18:38:00 GMT",
            "published_parsed": [
                2023,
                3,
                10,
                18,
                38,
                0,
                4,
                69,
                0
            ],
            "summary": "&amp;lt;ol&amp;gt;&amp;lt;li&amp;gt;&amp;lt;a href=\"https://news.google.com/rss/articles/CBMiRGh0dHBzOi8vd3d3LmNuYmMuY29tLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s0gFIaHR0cHM6Ly93d3cuY25iYy5jb20vYW1wLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s?oc=5\" target=\"_blank\"&amp;gt;Dow closes more than 300 points lower, posts worst week since June as Silicon Valley Bank collapse sparks selloff: Live updates&amp;lt;/a&amp;gt;  &amp;lt;font color=\"#6f6f6f\"&amp;gt;CNBC&amp;lt;/font&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;li&amp;gt;&amp;lt;a href=\"https://news.google.com/rss/articles/CBMiSWh0dHBzOi8vd3d3Lm55dGltZXMuY29tLzIwMjMvMDMvMTAvYnVzaW5lc3Mvc3RvY2stbWFya2V0LWpvYnMtcmVwb3J0Lmh0bWzSAU1odHRwczovL3d3dy5ueXRpbWVzLmNvbS8yMDIzLzAzLzEwL2J1c2luZXNzL3N0b2NrLW1hcmtldC1qb2JzLXJlcG9ydC5hbXAuaHRtbA?oc=5\" target=\"_blank\"&amp;gt;Silicon Valley Bank Collapse Jolts Stock Market&amp;lt;/a&amp;gt;  &amp;lt;font color=\"#6f6f6f\"&amp;gt;The New York Times&amp;lt;/font&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;li&amp;gt;&amp;lt;a href=\"https://news.google.com/rss/articles/CBMiTWh0dHBzOi8vZmluYW5jZS55YWhvby5jb20vbmV3cy9tdWx0aXBsZS1hbGFybS1tYXJrZXRzLWZpcmUtbWF5LTIyMzIyMzAyMC5odG1s0gFVaHR0cHM6Ly9maW5hbmNlLnlhaG9vLmNvbS9hbXBodG1sL25ld3MvbXVsdGlwbGUtYWxhcm0tbWFya2V0cy1maXJlLW1heS0yMjMyMjMwMjAuaHRtbA?oc=5\" target=\"_blank\"&amp;gt;Traders Brace for More Market Shocks After Week of Wild Swings&amp;lt;/a&amp;gt;  &amp;lt;font color=\"#6f6f6f\"&amp;gt;Yahoo Finance&amp;lt;/font&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;",
            "summary_detail": {
                "type": "text/html",
                "language": null,
                "base": "",
                "value": "&amp;lt;ol&amp;gt;&amp;lt;li&amp;gt;&amp;lt;a href=\"https://news.google.com/rss/articles/CBMiRGh0dHBzOi8vd3d3LmNuYmMuY29tLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s0gFIaHR0cHM6Ly93d3cuY25iYy5jb20vYW1wLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s?oc=5\" target=\"_blank\"&amp;gt;Dow closes more than 300 points lower, posts worst week since June as Silicon Valley Bank collapse sparks selloff: Live updates&amp;lt;/a&amp;gt;  &amp;lt;font color=\"#6f6f6f\"&amp;gt;CNBC&amp;lt;/font&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;li&amp;gt;&amp;lt;a href=\"https://news.google.com/rss/articles/CBMiSWh0dHBzOi8vd3d3Lm55dGltZXMuY29tLzIwMjMvMDMvMTAvYnVzaW5lc3Mvc3RvY2stbWFya2V0LWpvYnMtcmVwb3J0Lmh0bWzSAU1odHRwczovL3d3dy5ueXRpbWVzLmNvbS8yMDIzLzAzLzEwL2J1c2luZXNzL3N0b2NrLW1hcmtldC1qb2JzLXJlcG9ydC5hbXAuaHRtbA?oc=5\" target=\"_blank\"&amp;gt;Silicon Valley Bank Collapse Jolts Stock Market&amp;lt;/a&amp;gt;  &amp;lt;font color=\"#6f6f6f\"&amp;gt;The New York Times&amp;lt;/font&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;li&amp;gt;&amp;lt;a href=\"https://news.google.com/rss/articles/CBMiTWh0dHBzOi8vZmluYW5jZS55YWhvby5jb20vbmV3cy9tdWx0aXBsZS1hbGFybS1tYXJrZXRzLWZpcmUtbWF5LTIyMzIyMzAyMC5odG1s0gFVaHR0cHM6Ly9maW5hbmNlLnlhaG9vLmNvbS9hbXBodG1sL25ld3MvbXVsdGlwbGUtYWxhcm0tbWFya2V0cy1maXJlLW1heS0yMjMyMjMwMjAuaHRtbA?oc=5\" target=\"_blank\"&amp;gt;Traders Brace for More Market Shocks After Week of Wild Swings&amp;lt;/a&amp;gt;  &amp;lt;font color=\"#6f6f6f\"&amp;gt;Yahoo Finance&amp;lt;/font&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;"
            },
            "source": {
                "href": "https://www.cnbc.com",
                "title": "CNBC"
            },
            "sub_articles": [
                {
                    "url": "https://news.google.com/rss/articles/CBMiRGh0dHBzOi8vd3d3LmNuYmMuY29tLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s0gFIaHR0cHM6Ly93d3cuY25iYy5jb20vYW1wLzIwMjMvMDMvMDkvc3RvY2stbWFya2V0LXRvZGF5LWxpdmUtdXBkYXRlcy5odG1s?oc=5",
                    "title": "Dow closes more than 300 points lower, posts worst week since June as Silicon Valley Bank collapse sparks selloff: Live updates",
                    "publisher": "CNBC"
                },
                {
                    "url": "https://news.google.com/rss/articles/CBMiSWh0dHBzOi8vd3d3Lm55dGltZXMuY29tLzIwMjMvMDMvMTAvYnVzaW5lc3Mvc3RvY2stbWFya2V0LWpvYnMtcmVwb3J0Lmh0bWzSAU1odHRwczovL3d3dy5ueXRpbWVzLmNvbS8yMDIzLzAzLzEwL2J1c2luZXNzL3N0b2NrLW1hcmtldC1qb2JzLXJlcG9ydC5hbXAuaHRtbA?oc=5",
                    "title": "Silicon Valley Bank Collapse Jolts Stock Market",
                    "publisher": "The New York Times"
                },
                {
                    "url": "https://news.google.com/rss/articles/CBMiTWh0dHBzOi8vZmluYW5jZS55YWhvby5jb20vbmV3cy9tdWx0aXBsZS1hbGFybS1tYXJrZXRzLWZpcmUtbWF5LTIyMzIyMzAyMC5odG1s0gFVaHR0cHM6Ly9maW5hbmNlLnlhaG9vLmNvbS9hbXBodG1sL25ld3MvbXVsdGlwbGUtYWxhcm0tbWFya2V0cy1maXJlLW1heS0yMjMyMjMwMjAuaHRtbA?oc=5",
                    "title": "Traders Brace for More Market Shocks After Week of Wild Swings",
                    "publisher": "Yahoo Finance"
                }
            ]
        }
    ],
    "ts": 1.8348586559295654,
    "device_type": null
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to try this example, you can get your API key by creating a free account at &lt;a href="https://serply.io"&gt;serply.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cfCIt3qA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679090731655/7714e48e-45de-4edd-b8e7-d64dd137c1bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cfCIt3qA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679090731655/7714e48e-45de-4edd-b8e7-d64dd137c1bc.png" alt="Serply API key" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cute puppy images generated with OpenAI
&lt;/h3&gt;

&lt;p&gt;Let's try something fun and import images generated with OpenAI. I am selecting only one column: the image URL. As in the previous example, I concatenated the search term as the prompt for the request payload. I also concatenated the OpenAI API key into the Authorization header.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=IMPORTJSONAPI(
    "https://api.openai.com/v1/images/generations",
    "$.data[*]", 
    "url", 
    "method=post", 
    CONCATENATE("payload={
        'n': 3, 
        'size': '256x256', 
        'prompt':'", B3, "' 
    }"), 
    "contentType=application/json", 
    CONCATENATE("headers={
        'Authorization': 'Bearer ", B4, "'
    }"), 
    'Auto-refresh'!$B$1
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Cute puppy images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LZaYuU-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679091392737/912242cc-c268-45e6-a4dc-9aa3053bd5a4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LZaYuU-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679091392737/912242cc-c268-45e6-a4dc-9aa3053bd5a4.jpeg" alt="Import Openai images" width="800" height="1083"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rendering images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An additional formula is needed to render the images; without it, the script would just populate the rows with URLs. The solution is simple: I used the first formula in column B and the &lt;strong&gt;image()&lt;/strong&gt; function in column A, combined with the &lt;strong&gt;Arrayformula()&lt;/strong&gt; function, which iterates through the rows of a single column to apply the image formula. Both functions are built into Google Sheets. What I would like you to take away from this example is that you can process cells after the data has been imported.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=Arrayformula(image(B7:B9))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;OpenAI API image generation response&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "created": 1679707702,
  "data": [
    {
      "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-mnywxW4v2jQ8xH3mh4x7znhs/user-rLuI09VBzVeNpWdV3HZy0mRm/img-Ks1LtvvPJBWwlbmPVqG8mtiG.png?st=2023-03-25T00%3A28%3A22Z&amp;amp;se=2023-03-25T02%3A28%3A22Z&amp;amp;sp=r&amp;amp;sv=2021-08-06&amp;amp;sr=b&amp;amp;rscd=inline&amp;amp;rsct=image/png&amp;amp;skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&amp;amp;sktid=a48cca56-e6da-484e-a814-9c849652bcb3&amp;amp;skt=2023-03-24T22%3A33%3A50Z&amp;amp;ske=2023-03-25T22%3A33%3A50Z&amp;amp;sks=b&amp;amp;skv=2021-08-06&amp;amp;sig=bwsEsZqY3hrCsT4/6zsyAoIEkasCfZ2xt6mw9jGphTc%3D"
    },
    {
      "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-mnywxW4v2jQ8xH3mh4x7znhs/user-rLuI09VBzVeNpWdV3HZy0mRm/img-36HPr7Y1rTSvlf1kMHKhFus0.png?st=2023-03-25T00%3A28%3A22Z&amp;amp;se=2023-03-25T02%3A28%3A22Z&amp;amp;sp=r&amp;amp;sv=2021-08-06&amp;amp;sr=b&amp;amp;rscd=inline&amp;amp;rsct=image/png&amp;amp;skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&amp;amp;sktid=a48cca56-e6da-484e-a814-9c849652bcb3&amp;amp;skt=2023-03-24T22%3A33%3A50Z&amp;amp;ske=2023-03-25T22%3A33%3A50Z&amp;amp;sks=b&amp;amp;skv=2021-08-06&amp;amp;sig=j4sLyf/C0Vfi8FmUCeDFjUHDyothhp2Nc2GCD1xcdRg%3D"
    },
    {
      "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-mnywxW4v2jQ8xH3mh4x7znhs/user-rLuI09VBzVeNpWdV3HZy0mRm/img-OnD3pixbq0kEFoDN0LqPuoWP.png?st=2023-03-25T00%3A28%3A22Z&amp;amp;se=2023-03-25T02%3A28%3A22Z&amp;amp;sp=r&amp;amp;sv=2021-08-06&amp;amp;sr=b&amp;amp;rscd=inline&amp;amp;rsct=image/png&amp;amp;skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&amp;amp;sktid=a48cca56-e6da-484e-a814-9c849652bcb3&amp;amp;skt=2023-03-24T22%3A33%3A50Z&amp;amp;ske=2023-03-25T22%3A33%3A50Z&amp;amp;sks=b&amp;amp;skv=2021-08-06&amp;amp;sig=SY2jkUtS2mVIO3Ji%2BZUJeHJl2eMMp53%2B8ncGvwGekBA%3D"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can get your OpenAI API key by creating an account at &lt;a href="http://platform.openai.com"&gt;platform.openai.com&lt;/a&gt;. You will get a generous amount of free credits to play with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---E4r0pFY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679091440221/ae92db66-f27a-4d3e-a525-4a189dcd1ded.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---E4r0pFY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679091440221/ae92db66-f27a-4d3e-a525-4a189dcd1ded.png" alt="Openai API key" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Funny cat images with Serply API
&lt;/h3&gt;

&lt;p&gt;Here is another fun example using the Serply Image Search API. It is very similar to the OpenAI example, except that these images are not generated; they come from Google Images search results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=IMPORTJSONAPI(
    CONCATENATE("https://api.serply.io/v1/image/q=", ENCODEURL(B3)),
    "$.image_results[*]",
    "link.title, image.src",
    CONCATENATE("headers={
        'X-Api-Key': '", B4, "'
    }"), 'Auto-refresh'!$B$1
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Funny cat images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yBoZuxtz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679093089790/e59861e5-5b55-4671-b079-057fbae97ff9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yBoZuxtz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679093089790/e59861e5-5b55-4671-b079-057fbae97ff9.jpeg" alt="Import Serply Google images" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serply API image search response&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ads": [],
  "ads_count": 0,
  "answers": [],
  "results": [],
  "shopping_ads": [],
  "places": [],
  "related_searches": { "images": [], "text": [] },
  "image_results": [
    {
      "image": {
        "src": "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSpcPBqqdl0GyyJldY3wynsNbK5RVQpBavKabUPq-MRjHxmKHlZTRP_gdog0A&amp;amp;s",
        "alt": ""
      },
      "link": {
        "href": "https://www.google.com/url?q=https://www.nationalgeographic.com/animals/mammals/facts/domestic-cat&amp;amp;sa=U&amp;amp;ved=2ahUKEwiQ4_ym2db9AhWaU2wGHVtZD2sQr4kDegQIBBAC&amp;amp;usg=AOvVaw0i8bYceafnn6-F69n1uHjS",
        "title": "Domestic cat www.nationalgeographic.com",
        "domain": "Domestic cat www.nationalgeographic.com"
      }
    },
    {
      "image": {
        "src": "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSQVJ8iIWM5ZRV2OyXAz2qN-JtpNbHmSgGbF9gkW5yzQx9llnHlAu-zhT-z4A&amp;amp;s",
        "alt": ""
      },
      "link": {
        "href": "https://www.google.com/url?q=https://www.britannica.com/animal/cat&amp;amp;sa=U&amp;amp;ved=2ahUKEwiQ4_ym2db9AhWaU2wGHVtZD2sQr4kDegQIExAC&amp;amp;usg=AOvVaw1SvlVb_lV7ZpZ5ePxNCCmg",
        "title": "Cat | Breeds &amp;amp; Facts |... www.britannica.com",
        "domain": "Cat | Breeds &amp;amp; Facts |... www.britannica.com"
      }
    }
  ],
  "total": null,
  "knowledge_graph": "",
  "related_questions": [],
  "ts": 2.8050615787506104,
  "device_type": null
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, I am using the same formula to render images here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=Arrayformula(image(C7:C20))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Endless possibilities
&lt;/h2&gt;

&lt;p&gt;Google Apps Script is a very powerful scripting and automation platform, and I have only scratched the surface of what you can do with it. You can build sophisticated solutions that are easy for Google Sheets users to use and that integrate seamlessly with other applications inside and outside the Google Workspace ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference Links&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://serply.io/docs/"&gt;Serply API Spec Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://serply.io/docs/operations/v1/news"&gt;Serply API - Google News&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://serply.io/docs/operations/v1/image"&gt;Serply API - Google Images&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://platform.openai.com/docs/guides/images"&gt;Openai Documentation - Image Generation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://api.hashnode.com/"&gt;Hashnode API&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://goessner.net/articles/JsonPath/"&gt;Introduction to JSONPath expressions by Stefan Goessner&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/JSONPath-Plus/JSONPath"&gt;JSONPath Plus open-source repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/qeet/IMPORTJSONAPI"&gt;IMPORTJSONAPI open-source repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.google.com/apps-script/samples"&gt;App Script Samples&lt;/a&gt;&lt;/p&gt;

</description>
      <category>googlesheets</category>
      <category>googlescript</category>
      <category>api</category>
      <category>spreadsheets</category>
    </item>
    <item>
      <title>Serply Notifications Part 1: Search Engine Result Pages (SERP)</title>
      <dc:creator>Osvaldo Brignoni</dc:creator>
      <pubDate>Sat, 11 Feb 2023 03:58:30 +0000</pubDate>
      <link>https://forem.com/brignoni/serply-notifications-part-1-search-engine-result-pages-serp-9h2</link>
      <guid>https://forem.com/brignoni/serply-notifications-part-1-search-engine-result-pages-serp-9h2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapon6w63z4v9xkm7itkw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapon6w63z4v9xkm7itkw.jpg" alt="Serply Notifications" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Serply Notifications?
&lt;/h2&gt;

&lt;p&gt;Serply Notifications is an open-source notification scheduler for the Serply API. It allows you to schedule and receive notifications for Google Search Engine Result Pages (SERP) on your Slack account.&lt;/p&gt;

&lt;p&gt;The application is developed with AWS serverless services: API Gateway, Lambda, DynamoDB and EventBridge. All of its resources are defined as a Cloud Development Kit Stack. This is a detailed walk-through of all its components. First, let's look at the application features from the user's point of view.&lt;/p&gt;

&lt;h2&gt;
  
  
  As a Slack user, I can...
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Schedule SERP notifications for specific search queries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Receive notifications daily, weekly or monthly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;List notification schedules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disable and re-enable scheduled notifications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How can you track the SERP position of a website?
&lt;/h2&gt;

&lt;p&gt;To track the position of a website in Google Search Engine Result Pages (SERP), you can use a few methods:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual Search&lt;/strong&gt; : You can manually search for a specific keyword related to your website and note the position of your website in the SERP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SERP Tracking Tool&lt;/strong&gt; : There are various online tools available that can track the position of your website in the SERP. Some popular tools include Ahrefs, SEMrush, Moz, and SERPstat.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Search Console&lt;/strong&gt; : This is a free tool provided by Google that allows you to track your website's performance in Google search. It provides information about the keywords for which your website is ranking and the position of your website for each keyword.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Regardless of the method you use, it's important to track your website's position regularly to monitor its performance and make necessary adjustments to improve its ranking in the SERP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example manual search&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.google.com/search?q=professional+network&amp;amp;num=100&amp;amp;domain=linkedin.com" rel="noopener noreferrer"&gt;https://www.google.com/search?q=professional+network&amp;amp;num=100&amp;amp;domain=linkedin.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbrignoni.dev%2Fimages%2Fgoogle-search-engine-result-pages.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbrignoni.dev%2Fimages%2Fgoogle-search-engine-result-pages.jpg" alt="Search Engine Result Pages" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Slack /serply command
&lt;/h2&gt;

&lt;p&gt;The command structure is simple: the &lt;code&gt;/serply&lt;/code&gt; slash command, followed by a sub-command and its parameters. There are two sub-commands available: serp and list.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;command&lt;/th&gt;
&lt;th&gt;sub-command&lt;/th&gt;
&lt;th&gt;website/domain&lt;/th&gt;
&lt;th&gt;query&lt;/th&gt;
&lt;th&gt;interval&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;/serply&lt;/td&gt;
&lt;td&gt;serp&lt;/td&gt;
&lt;td&gt;linkedin.com&lt;/td&gt;
&lt;td&gt;"professional+network"&lt;/td&gt;
&lt;td&gt;daily&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
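&lt;p&gt;The command text can be split into these fields with a few lines of Python. This is a hypothetical parser sketch, not the app's actual &lt;code&gt;SlackCommand&lt;/code&gt; class:&lt;/p&gt;

```python
import shlex

def parse_serply(text):
    """Split the text after /serply into its parts (hypothetical sketch)."""
    parts = shlex.split(text)  # shlex honors the quoted query argument
    if parts[0] == 'list':
        return {'sub_command': 'list'}
    return {
        'sub_command': parts[0],
        'domain_or_website': parts[1],
        'query': parts[2],
        'interval': parts[3],
    }
```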
&lt;h3&gt;
  
  
  &lt;strong&gt;/serply serp - schedule a SERP notification&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The serp sub-command creates a scheduled notification for the linkedin.com domain with the search query "professional+network". Shortly after entering the command, you will see a confirmation message with the schedule details. You will then receive the notification at the specified interval: daily, weekly, or monthly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jpgs3v6tabqagzkitxa.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jpgs3v6tabqagzkitxa.jpeg" alt="Slack /serply command" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;/serply list - list all scheduled notifications&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The list sub-command returns a list of all schedules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disable a scheduled notification&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can disable a scheduled notification by clicking the Disable button on a given Slack message. The history of notifications will remain on the database. You can also re-enable it by clicking the Enable button.&lt;/p&gt;
&lt;h2&gt;
  
  
  Application Design
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Let's start with the request patterns&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since this is an event-driven application, it is very helpful to map out all the request patterns before coding. You won't need to write this code yourself, because you can simply clone the Serply Notifications repository. However, I do want to go over the thought process behind designing this application. In simple terms, this is the sequence of requests.&lt;/p&gt;

&lt;p&gt;After a user enters the /serply command...&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Slack will perform a POST request to our application webhook with the message payload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The webhook will respond to Slack with a 200 status code, acknowledging the receipt of the request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The backend will parse the payload and parameters from the slash command string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The backend will transform that payload to a Schedule object and send it to the event bus.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A function will receive that event and perform 2 tasks: save the Schedule to the database and create the corresponding EventBridge Schedule.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another function will send a confirmation message to Slack letting the user know that the schedule was created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Schedule will call another function that performs 3 tasks: fetch the SERP data from the Serply API, save the notification to the database, and forward the data to the event bus.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another function will receive the SERP data from the event bus and send the notification back to Slack for users to see.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
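&lt;p&gt;The acknowledgment in step 2 is just a standard Lambda proxy response; a minimal sketch of the shape API Gateway expects:&lt;/p&gt;

```python
def ack():
    """Minimal 200 response for an API Gateway LAMBDA_PROXY integration."""
    return {
        'statusCode': 200,
        'body': '',
        'headers': {'Content-Type': 'text/plain'},
    }
```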
&lt;h2&gt;
  
  
  &lt;strong&gt;DynamoDB: Planning for access patterns&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We should also anticipate how we will model, index and query the data. Let's make a list of DynamoDB access patterns and the query conditions that we will use to accommodate them.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Access patterns&lt;/th&gt;
&lt;th&gt;Query conditions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Get a schedule by ID&lt;/td&gt;
&lt;td&gt;PK=schedule_[attributes], SK=schedule_[attributes]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Get a notification by ID&lt;/td&gt;
&lt;td&gt;PK=schedule_[attributes], SK=notification_[attributes]#[timestamp]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Query notifications by schedule&lt;/td&gt;
&lt;td&gt;Query table PK=schedule_[attributes], SK=BEGINS_WITH notification_&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Query schedules by account&lt;/td&gt;
&lt;td&gt;Query CollectionIndex GSI collection=[account]#schedule, SK=BEGINS_WITH schedule_&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;blockquote&gt;
&lt;p&gt;Adjacency list design pattern&lt;/p&gt;

&lt;p&gt;When different entities of an application have a many-to-many relationship between them, the relationship can be modeled as an adjacency list. In this pattern, all top-level entities (synonymous with nodes in the graph model) are represented using the partition key. Any relationships with other entities (edges in a graph) are represented as an item within the partition by setting the value of the sort key to the target entity ID (target node).&lt;/p&gt;

&lt;p&gt;The advantages of this pattern include minimal data duplication and simplified query patterns to find all entities (nodes) related to a target entity (having an edge to a target node).&lt;/p&gt;

&lt;p&gt;~ Amazon DynamoDB Developer Guide&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Primary Key, Sort Key Design
&lt;/h3&gt;

&lt;p&gt;In addition to the request and DynamoDB access patterns, we need to take into consideration other service constraints.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Map the Slack command to the Schedule database key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Schedule Primary Key is the concatenated attributes prefixed with schedule_.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Schedule Sort Key is intentionally the same as the Schedule Primary Key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The SERP Notification is similar to the Schedule Primary Key prefixed with notification_ and suffixed with an ISO 8601 datetime string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Map the Schedule database key to the EventBridge Schedule Name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The EventBridge Schedule Name has a maximum limit of 64 characters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Slack command string and database key attributes might exceed 64 characters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate a deterministic hash from the database key for the EventBridge Schedule Name that is always 64 characters in length.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Example Keys&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Slack command will always produce the same keys. The EventBridge Schedule Name hash is generated from the Schedule PK.&lt;/p&gt;

&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Command&lt;/td&gt;
&lt;td&gt;/serply serp linkedin.com "professional+network" daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schedule PK&lt;/td&gt;
&lt;td&gt;schedule_serp#linkedin.com#professional+network#daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schedule SK&lt;/td&gt;
&lt;td&gt;schedule_serp#linkedin.com#professional+network#daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schedule collection&lt;/td&gt;
&lt;td&gt;f492a0afbef84b5b8e4fedeb635a7737#schedules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SERP Notification PK&lt;/td&gt;
&lt;td&gt;schedule_serp#linkedin.com#professional+network#daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SERP Notification SK&lt;/td&gt;
&lt;td&gt;notification_serp#linkedin.com#professional+network#daily#2023-02-04T07:21:04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SERP collection&lt;/td&gt;
&lt;td&gt;f492a0afbef84b5b8e4fedeb635a7737#serp&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EventBridge Schedule Name&lt;/td&gt;
&lt;td&gt;d8f9c6fe6ed8a3049c7f8900298d981e58edacfb0c28b5eca4869f2f16ba92c0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
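&lt;p&gt;These keys can be derived mechanically. A sketch, assuming the PK simply joins the command attributes with "#" and the EventBridge Schedule Name is a SHA-256 hex digest of the PK (hex digests are deterministic and always 64 characters, which satisfies the name length constraint):&lt;/p&gt;

```python
import hashlib

def schedule_pk(type_, domain_or_website, query, interval):
    """Build the Schedule partition key from the command attributes."""
    return 'schedule_' + '#'.join([type_, domain_or_website, query, interval])

def eventbridge_schedule_name(pk):
    """A SHA-256 hex digest is deterministic and always 64 characters long."""
    return hashlib.sha256(pk.encode()).hexdigest()
```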

&lt;p&gt;&lt;strong&gt;EventBridge Schedule API Reference: Name Length Constraint&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8ybxjb1fae7bxb45qbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8ybxjb1fae7bxb45qbb.png" alt="EventBridge Schedule Name Length Constraint" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready for Multi-Tenancy: The Global Secondary Index&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this case, the main reason for creating a Global Secondary Index is that we need to query all the schedules for a given Slack account when we enter the /serply list command. Another benefit of this index is that we could develop multi-tenancy for this application if we need it later. We could scope notifications for each Slack account. For now, there is only one default account ID.&lt;/p&gt;
&lt;h2&gt;
  
  
  SerplyStack
&lt;/h2&gt;

&lt;p&gt;All the AWS resources are defined in the SerplyStack. When we run the &lt;strong&gt;cdk deploy&lt;/strong&gt; command, the CDK builds the application's infrastructure as a set of AWS CloudFormation templates and then uses these templates to create or update a CloudFormation stack in the specified AWS account and region. During deployment, the command sets up the required AWS resources as defined in the CDK application's code and also configures any necessary connections between the resources. The result is a fully functioning, deployed application in the target AWS environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# src/cdk/serply_stack.py

class SerplyStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, config: SerplyConfig, **kwargs) -&amp;gt; None:
        super().__init__(scope, construct_id, **kwargs)

        RUNTIME = _lambda.Runtime.PYTHON_3_9

        lambda_layer = _lambda.LayerVersion(
            self, f'{config.STACK_NAME}LambdaLayer{config.STAGE_SUFFIX}',
            code=_lambda.Code.from_asset(config.LAYER_DIR),
            compatible_runtimes=[RUNTIME],
            compatible_architectures=[
                _lambda.Architecture.X86_64,
                _lambda.Architecture.ARM_64,
            ]
        )

        event_bus = events.EventBus(
            self, config.EVENT_BUS_NAME,
            event_bus_name=config.EVENT_BUS_NAME,
        )

        event_bus.apply_removal_policy(RemovalPolicy.DESTROY)

        scheduler_managed_policy = iam.ManagedPolicy.from_aws_managed_policy_name(
            'AmazonEventBridgeSchedulerFullAccess'
        )

        scheduler_role = iam.Role(
            self, 'SerplySchedulerRole',
            assumed_by=iam.ServicePrincipal('scheduler.amazonaws.com'),
        )

        scheduler_role.add_to_policy(
            iam.PolicyStatement(
                effect=iam.Effect.ALLOW,
                actions=['lambda:InvokeFunction'],
                resources=['*'],
            )
        )

        slack_receive_lambda = _lambda.Function(
            self, 'SlackReceiveLambdaFunction',
            runtime=RUNTIME,
            code=_lambda.Code.from_asset(config.SLACK_DIR),
            handler='slack_receive_lambda.handler',
            timeout=Duration.seconds(5),
            layers=[lambda_layer],
            environment={
                'STACK_NAME': config.STACK_NAME,
                'STAGE': config.STAGE,
            },
        )

        event_bus.grant_put_events_to(slack_receive_lambda)

        slack_respond_lambda = _lambda.Function(
            self, 'SlackRespondLambdaFunction',
            runtime=RUNTIME,
            code=_lambda.Code.from_asset(config.SLACK_DIR),
            handler='slack_respond_lambda.handler',
            timeout=Duration.seconds(5),
            layers=[lambda_layer],
            environment={
                'SLACK_BOT_TOKEN': config.SLACK_BOT_TOKEN,
                'STACK_NAME': config.STACK_NAME,
                'STAGE': config.STAGE,
            },
        )

        slack_notify_lambda = _lambda.Function(
            self, 'SlackNotifyLambdaFunction',
            runtime=RUNTIME,
            code=_lambda.Code.from_asset(config.SLACK_DIR),
            handler='slack_notify_lambda.handler',
            timeout=Duration.seconds(5),
            layers=[lambda_layer],
            environment={
                'SLACK_BOT_TOKEN': config.SLACK_BOT_TOKEN,
                'STACK_NAME': config.STACK_NAME,
                'STAGE': config.STAGE,
            },
        )

        slack_notify_lambda.role.add_managed_policy(scheduler_managed_policy)

        # More resources...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;List of CloudFormation resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS::ApiGateway::RestApi&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::ApiGateway::Account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::ApiGateway::Deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::ApiGateway::Method&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::ApiGateway::Resource&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::ApiGateway::Stage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::DynamoDB::Table&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::Events::EventBus&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::Events::EventRule&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::IAM::Policy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::IAM::Role&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::Lambda::Function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::Lambda::LayerVersion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::Lambda::Permission&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS::Scheduler::ScheduleGroup&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Useful tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All the CloudFormation resources are well indexed on Google. Searching for any of them will quickly return the relevant documentation. In the documentation, you will be able to see all the available parameters and validation criteria.&lt;/p&gt;

&lt;h2&gt;
  
  
  API Gateway Rest API /slack/{proxy+}
&lt;/h2&gt;

&lt;p&gt;The CDK creates an AWS::ApiGateway::RestApi resource that will act as a catch-all webhook for all Slack events. The request is forwarded to the slack_receive_lambda function via the LAMBDA_PROXY Integration Request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb40q3u2jm7v2ga05equk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb40q3u2jm7v2ga05equk.jpeg" alt="API Gateway Rest API" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda functions
&lt;/h2&gt;

&lt;p&gt;The business logic lives entirely in Lambda functions written in Python. These functions are triggered by three sources: API Gateway, EventBridge Events, and EventBridge Schedules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;src/slack/&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;slack_notify_lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;slack_receive_lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;slack_respond_lambda&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;src/schedule/&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;schedule_disable_lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;schedule_enable_lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;schedule_save_lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;schedule_target_lambda&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  slack_receive_lambda
&lt;/h3&gt;

&lt;p&gt;This function has a few important tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Receives and processes the event forwarded by API Gateway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checks the Slack Challenge string if present in the request body.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verifies the &lt;code&gt;X-Slack-Signature&lt;/code&gt; header to make sure it is the Slack app calling our Rest API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parses the &lt;code&gt;/serply&lt;/code&gt; command string with the &lt;code&gt;SlackCommand&lt;/code&gt; class.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Forwards all the data to the &lt;code&gt;SerplyEventBus&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Returns an acknowledgment response to Slack within 3 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
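&lt;p&gt;The signature check in step 3 follows Slack's documented scheme: an HMAC-SHA256 over the string v0:timestamp:body, keyed with the app's signing secret and compared against the X-Slack-Signature header. A sketch, not the app's actual verifier:&lt;/p&gt;

```python
import hashlib
import hmac

def verify_slack_signature(signing_secret, timestamp, body, signature):
    """Recompute 'v0=' + HMAC-SHA256(secret, 'v0:timestamp:body') and compare in constant time."""
    basestring = f'v0:{timestamp}:{body}'.encode()
    digest = hmac.new(signing_secret.encode(), basestring, hashlib.sha256).hexdigest()
    return hmac.compare_digest('v0=' + digest, signature)
```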

&lt;blockquote&gt;
&lt;p&gt;This confirmation must be received by Slack within 3000 milliseconds of the original request being sent, otherwise, a "Timeout reached" will be displayed to the user. If you couldn't verify the request payload, your app should return an error instead and ignore the request.&lt;/p&gt;

&lt;p&gt;~ Confirming receipt | api.slack.com&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
from slack_receive_response import (
    command_response,
    default_response,
    event_response,
    interaction_response,
)

def get_challenge(body):
    if not body or not (body.startswith('{') and body.endswith('}')):
        return False
    return json.loads(body).get('challenge', '')

responses = {
    '/slack/commands': command_response,
    '/slack/events': event_response,
    '/slack/interactions': interaction_response,
}

def handler(event, context):

    challenge = get_challenge(event.get('body'))

    if challenge:
        return {
            'statusCode': 200,
            'body': challenge,
            'headers': {
                'Content-Type': 'text/plain',
            },
        }

    return responses.get(event.get('path'), default_response)(event=event)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Serply Event Bus
&lt;/h3&gt;

&lt;p&gt;An event bus is a channel that allows different services to communicate and exchange events. EventBridge enables you to create rules that automatically trigger reactions to events, such as sending an email in response to an event from a SaaS application. The event bus is capable of triggering multiple targets simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Receiving the Slack command&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4gx2ejip7g7esvsrgtm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4gx2ejip7g7esvsrgtm.jpeg" alt="Receiving the Slack command" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After receiving the initial request from Slack, the slack_receive_lambda function puts an event into the SerplyEventBus.&lt;/p&gt;
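&lt;p&gt;The put call presumably wraps boto3's events put_events API, which takes entries shaped like this. A sketch; the entry-building helper is an assumption, not the app's EventBus class:&lt;/p&gt;

```python
import json

def build_event_entry(bus_name, source, detail_type, detail):
    """Shape one entry for events_client.put_events(Entries=[...])."""
    return {
        'EventBusName': bus_name,
        'Source': source,
        'DetailType': detail_type,
        'Detail': json.dumps(detail),  # Detail must be a JSON string
    }
```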

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07yyzaa0blhjpf21nrwq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07yyzaa0blhjpf21nrwq.jpeg" alt="Event bus triggers 2 functions" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The event bus triggers 2 functions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;slack_respond_lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;schedule_save_lambda&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  slack_respond_lambda
&lt;/h3&gt;

&lt;p&gt;This function sends a SERP Notification Scheduled message to the Slack channel. I could have done this in the slack_receive_lambda function. However, I needed to keep that function as light as possible to acknowledge receipt within 3 seconds. The response message is delegated to the slack_respond_lambda function intentionally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from pydash import objects
from slack_api import SlackClient, SlackCommand
from slack_messages import ScheduleMessage, ScheduleListMessage
from serply_config import SERPLY_CONFIG
from serply_database import NotificationsDatabase, schedule_from_dict

notifications = NotificationsDatabase(boto3.resource('dynamodb'))
slack = SlackClient()

def handler(event, context):

    detail_type = event.get('detail-type')
    detail_input = event.get('detail').get('input')
    detail_schedule = event.get('detail').get('schedule')

    if detail_type == SERPLY_CONFIG.EVENT_SCHEDULE_SAVE:
        schedule = schedule_from_dict(detail_schedule)
        message = ScheduleMessage(
            channel=detail_input.get('channel_id'),
            user_id=detail_input.get('user_id'),
            command=schedule.command,
            interval=schedule.interval,
            type=schedule.type,
            domain=schedule.domain,
            domain_or_website=schedule.domain_or_website,
            query=schedule.query,
            website=schedule.website,
            enabled=True,
            replace_original=False,
        )
        slack.respond(
            response_url=detail_input.get('response_url'),
            message=message,
        )

    elif detail_type == SERPLY_CONFIG.EVENT_SCHEDULE_LIST:
        schedules = notifications.schedules()
        message = ScheduleListMessage(
            channel=detail_input.get('channel_id'),
            schedules=schedules,
        )
        slack.notify(message)

    elif detail_type in [
        SERPLY_CONFIG.EVENT_SCHEDULE_DISABLE_FROM_LIST,
        SERPLY_CONFIG.EVENT_SCHEDULE_ENABLE_FROM_LIST,
    ]:
        schedules = notifications.schedules()
        message = ScheduleListMessage(
            schedules=schedules,
            replace_original=True,
        )
        slack.respond(
            response_url=detail_input.get('response_url'),
            message=message,
        )

    elif detail_type == SERPLY_CONFIG.EVENT_SCHEDULE_DISABLE:
        schedule = SlackCommand(
            command=objects.get(detail_input, 'actions[0].value'),
        )
        message = ScheduleMessage(
            user_id=detail_input.get('user').get('id'),
            command=schedule.command,
            interval=schedule.interval,
            type=schedule.type,
            domain=schedule.domain,
            domain_or_website=schedule.domain_or_website,
            query=schedule.query,
            website=schedule.website,
            enabled=False,
            replace_original=True,
        )
        slack.respond(
            response_url=detail_input.get('response_url'),
            message=message,
        )

    elif detail_type == SERPLY_CONFIG.EVENT_SCHEDULE_ENABLE:
        schedule = SlackCommand(
            command=objects.get(detail_input, 'actions[0].value'),
        )
        message = ScheduleMessage(
            user_id=detail_input.get('user').get('id'),
            command=schedule.command,
            interval=schedule.interval,
            type=schedule.type,
            domain=schedule.domain,
            domain_or_website=schedule.domain_or_website,
            query=schedule.query,
            website=schedule.website,
            enabled=True,
            replace_original=True,
        )
        slack.respond(
            response_url=detail_input.get('response_url'),
            message=message,
        )

    return {'ok': True}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  schedule_save_lambda
&lt;/h3&gt;

&lt;p&gt;This function saves the schedule data to the DynamoDB Notifications table.&lt;/p&gt;

&lt;p&gt;EventBridge Schedules are not provisioned via the CDK. Instead, there is a specific schedule for each /serply serp command that is created by the schedule_save_lambda function via the &lt;strong&gt;boto3.client('scheduler')&lt;/strong&gt; client.&lt;/p&gt;
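&lt;p&gt;The NotificationScheduler presumably translates the interval into a schedule expression for the scheduler client's create_schedule call. A sketch with an assumed mapping; the real code may use different rates or cron expressions:&lt;/p&gt;

```python
def schedule_expression(interval):
    """Map a notification interval to an EventBridge Scheduler rate expression (assumed mapping)."""
    rates = {
        'daily': 'rate(1 day)',
        'weekly': 'rate(7 days)',
        'monthly': 'rate(30 days)',  # assumption; a cron expression would track calendar months
    }
    return rates[interval]
```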

&lt;p&gt;&lt;strong&gt;EventBridge Schedule&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxepqg5z5jjthpwr2xy9e.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxepqg5z5jjthpwr2xy9e.jpeg" alt="EventBridge Schedule" width="800" height="520"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
from serply_database import NotificationsDatabase, schedule_from_dict
from serply_scheduler import NotificationScheduler

notifications = NotificationsDatabase(boto3.resource('dynamodb'))
scheduler = NotificationScheduler(boto3.client('scheduler'))

def handler(event, context):

    schedule = schedule_from_dict(event.get('detail').get('schedule'))

    notifications.save(schedule)

    scheduler.save_schedule(
        schedule=schedule,
        event=event,
    )

    return {'ok': True}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  schedule_target_lambda
&lt;/h3&gt;

&lt;p&gt;This function is triggered by its corresponding schedule and it has 3 tasks.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Get the SERP data from the Serply API &lt;a href="https://api.serply.io/v1/serp" rel="noopener noreferrer"&gt;https://api.serply.io/v1/serp&lt;/a&gt; endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the event and response data to the database as a SerpNotification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Forward all the data to the SerplyEventBus.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
from dataclasses import asdict
from serply_api import SerplyClient
from serply_config import SERPLY_CONFIG
from serply_database import NotificationsDatabase, SerpNotification, schedule_from_dict
from serply_events import EventBus

event_bus = EventBus(boto3.client('events'))
notifications = NotificationsDatabase(boto3.resource('dynamodb'))
serply = SerplyClient(SERPLY_CONFIG.SERPLY_API_KEY)

def handler(event, context):

    detail_headers = event.get('detail').get('headers')
    detail_schedule = event.get('detail').get('schedule')
    detail_input = event.get('detail').get('input')

    schedule = schedule_from_dict(detail_schedule)

    if schedule.type not in [SERPLY_CONFIG.SCHEDULE_TYPE_SERP]:
        raise Exception(f'Invalid schedule type: {schedule.type}')

    if schedule.type == SERPLY_CONFIG.SCHEDULE_TYPE_SERP:

        response = serply.serp(
            domain=schedule.domain,
            website=schedule.website,
            query=schedule.query,
            mock=schedule.interval == 'mock',
        )

        notification = SerpNotification(
            command=schedule.command,
            domain=schedule.domain,
            domain_or_website=schedule.domain_or_website,
            query=schedule.query,
            interval=schedule.interval,
            serp_position=response.position,
            serp_searched_results=response.searched_results,
        )

    notifications.save(notification)

    notification_input = asdict(notification)

    event_bus.put(
        source=schedule.source,
        detail_type=SERPLY_CONFIG.EVENT_SCHEDULE_NOTIFY,
        schedule=schedule,
        input={
            **detail_input,
            **notification_input,
        },
        headers=detail_headers,
    )

    return {'ok': True}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Scheduled notification&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqnf8q5uvht3cqhxmxk3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqnf8q5uvht3cqhxmxk3.jpeg" alt="Scheduled notification" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  slack_notify_lambda
&lt;/h3&gt;

&lt;p&gt;This function builds the notification message and sends it to Slack. The &lt;code&gt;SerpNotificationMessage&lt;/code&gt; data class serves as a template for formatting the message using Slack Message Blocks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
from serply_config import SERPLY_CONFIG
from slack_api import SlackClient
from slack_messages import SerpNotificationMessage
from serply_database import schedule_from_dict
from serply_scheduler import NotificationScheduler

slack = SlackClient(SERPLY_CONFIG.SLACK_BOT_TOKEN)
scheduler = NotificationScheduler(boto3.client('scheduler'))

def handler(event, context):

    detail_schedule = event.get('detail').get('schedule')
    detail_input = event.get('detail').get('input')

    schedule = schedule_from_dict(detail_schedule)

    if schedule.type not in [SERPLY_CONFIG.SCHEDULE_TYPE_SERP]:
        raise Exception(f'Invalid schedule type: {schedule.type}')

    if schedule.type == SERPLY_CONFIG.SCHEDULE_TYPE_SERP:
        message = SerpNotificationMessage(
            channel=detail_input.get('channel_id'),
            serp_position=detail_input.get('serp_position'),
            serp_searched_results=detail_input.get('serp_searched_results'),
            command=schedule.command,
            domain=schedule.domain,
            domain_or_website=schedule.domain_or_website,
            interval=schedule.interval,
            query=schedule.query,
            website=schedule.website,
        )
        slack.notify(message)

    if schedule.interval in SERPLY_CONFIG.ONE_TIME_INTERVALS:
        scheduler.delete_schedule(schedule)

    return {'ok': True}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;SerpNotificationMessage Slack Message Blocks&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@dataclass
class SerpNotificationMessage:

    blocks: list[dict] = field(init=False)
    domain: str
    domain_or_website: str
    command: str
    interval: str
    query: str
    serp_position: int
    serp_searched_results: str
    website: str
    channel: str = None
    num: int = 100
    replace_original: bool = False

    def __post_init__(self):

        TEXT_ONE_TIME = 'This is a *one-time* notification.'
        TEXT_YOU_RECEIVE = f'You receive this notification *{self.interval}*. &amp;lt;!here&amp;gt;'

        website = self.domain if self.domain else self.website
        total = int(self.serp_searched_results or 0)
        google_search = f'https://www.google.com/search?q={self.query}&amp;amp;num={self.num}&amp;amp;{self.domain_or_website}={website}'
        results = f'&amp;lt;{google_search}|{total} results&amp;gt;' if total &amp;gt; 0 else f'0 results'

        self.blocks = [
            {
                'type': 'section',
                'text': {
                    'type': 'mrkdwn',
                    'text': f'&amp;gt; `{website}` in position `{self.serp_position or 0}` for `{self.query}` from {results}.'
                },
            },
            {
                'type': 'context',
                'elements': [
                    {
                        'type': 'mrkdwn',
                        'text': f':bell: *SERP Notification* | {TEXT_ONE_TIME if self.interval in SERPLY_CONFIG.ONE_TIME_INTERVALS else TEXT_YOU_RECEIVE}'
                    }
                ]
            },
        ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpar8zwa53m2xxta89kb4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpar8zwa53m2xxta89kb4.jpeg" alt="Slack notification" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Serply API
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;schedule_target_lambda&lt;/code&gt; function makes a GET request to the Serply API, equivalent to the following curl request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Request&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
  --url 'https://api.serply.io/v1/serp?q=professional+network&amp;amp;num=100&amp;amp;domain=linkedin.com' \
  --header 'Content-Type: application/json' \
  --header 'X-Api-Key: API_KEY'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Response&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "searched_results": 100,
    "result": {
        "title": "Why professional networking is so important - LinkedIn",
        "link": "https://www.linkedin.com/pulse/why-professional-networking-so-important-jordan-parikh",
        "description": "7 de nov. de 2016 Networking becomes a little clearer if we give it a different name: professional relationship building. It's all about getting out there ...",
        "additional_links": [
            {
                "text": "Why professional networking is so important - LinkedInhttps://www.linkedin.com pulse",
                "href": "https://www.linkedin.com/pulse/why-professional-networking-so-important-jordan-parikh"
            },
            {
                "text": "Traduzir esta página",
                "href": "https://translate.google.com/translate?hl=pt-BR&amp;amp;sl=en&amp;amp;u=https://www.linkedin.com/pulse/why-professional-networking-so-important-jordan-parikh&amp;amp;prev=search&amp;amp;pto=aue"
            }
        ],
        "cite": {
            "domain": "https://www.linkedin.com pulse",
            "span": " pulse"
        }
    },
    "position": 8,
    "domain": ".linkedin.com",
    "query": "q=professional+network&amp;amp;num=100"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
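&lt;p&gt;For readers who prefer Python over curl, the same request can be sketched with the standard library. This is a minimal sketch: the &lt;code&gt;?&lt;/code&gt; query-string separator and the API key value are assumptions, so check the Serply docs for exact usage.&lt;/p&gt;

```python
# A rough Python equivalent of the curl request above, using only the
# standard library to build the URL. The '?' separator and the API key
# are assumptions based on standard query-string formatting.
import urllib.parse

BASE_URL = 'https://api.serply.io/v1/serp'

def build_serp_url(query, domain, num=100):
    # Encode the same parameters shown in the curl example.
    params = urllib.parse.urlencode({'q': query, 'num': num, 'domain': domain})
    return f'{BASE_URL}?{params}'

url = build_serp_url('professional network', 'linkedin.com')
print(url)  # https://api.serply.io/v1/serp?q=professional+network&num=100&domain=linkedin.com

# Sending the request (requires the `requests` package and a real key):
# import requests
# response = requests.get(url, headers={'X-Api-Key': 'API_KEY'})
```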



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://api.slack.com/interactivity/slash-commands" rel="noopener noreferrer"&gt;Slack - Slash Commands&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://api.slack.com/events/url_verification" rel="noopener noreferrer"&gt;Slack - URL Verification&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://api.slack.com/docs/verifying-requests-from-slack" rel="noopener noreferrer"&gt;Slack - Verifying requests from Slack&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://api.slack.com/reference/block-kit/blocks" rel="noopener noreferrer"&gt;Slack - Block Kit Message Blocks&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-modeling-nosql.html" rel="noopener noreferrer"&gt;AWS - First steps for modeling relational data in DynamoDB&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-modeling-nosql-B.html" rel="noopener noreferrer"&gt;AWS - Example of modeling relational data in DynamoDB&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-time-series.html" rel="noopener noreferrer"&gt;AWS - Best practices for handling time series data in DynamoDB&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GitHub Repository
&lt;/h2&gt;

&lt;p&gt;You can find the repository &lt;a href="https://github.com/serply-inc/notifications" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It includes all the installation steps to set up your Slack app, your Serply account, and the AWS CDK deployment. You can deploy it to your own AWS account within the AWS Free Tier, or fork it and customize it to your needs.&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/serply-inc/notifications" rel="noopener noreferrer"&gt;serply-inc/notifications&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Possibilities
&lt;/h2&gt;

&lt;p&gt;Serply Notifications is structured so that scheduling is decoupled from the messaging logic via an event bus. This makes it straightforward to add more integrations, such as email notifications, other chat platforms, and additional notification types from the Serply API.&lt;/p&gt;

&lt;p&gt;Stay tuned for Part 2 of this Serply Notifications series!&lt;/p&gt;

</description>
      <category>seo</category>
      <category>slack</category>
      <category>aws</category>
      <category>python</category>
    </item>
    <item>
      <title>The Python Five Minute Journal</title>
      <dc:creator>Osvaldo Brignoni</dc:creator>
      <pubDate>Wed, 18 Jan 2023 21:27:52 +0000</pubDate>
      <link>https://forem.com/brignoni/the-python-five-minute-journal-j4k</link>
      <guid>https://forem.com/brignoni/the-python-five-minute-journal-j4k</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IAms7TLr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p74uo24lxiggx1lc5rcg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IAms7TLr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p74uo24lxiggx1lc5rcg.jpeg" alt="Python Five Minute Journal" width="880" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  "Not another day..."
&lt;/h2&gt;

&lt;p&gt;That was my first thought waking up.&lt;/p&gt;

&lt;p&gt;Every day in 2021 and the first quarter of 2022, I felt burned out. I could have blamed many things: the pandemic, a failed startup, or too much work. The root of the problem was... I was not taking care of my mental health.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Minute Journal changed my life
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=UFdR8w_R1HA&amp;amp;t=542s"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F9pCRt3v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1673922057363/07a2881f-bd73-49f0-9c1f-030b457f3bfa.png" alt="How I Journal and Take Notes YouTube video" width="880" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first time I saw &lt;a href="https://www.intelligentchange.com/products/the-five-minute-journal"&gt;The Five Minute Journal by Intelligent Change&lt;/a&gt; was in a Tim Ferriss YouTube video titled &lt;a href="https://www.youtube.com/watch?v=UFdR8w_R1HA&amp;amp;t=542s"&gt;How I Journal and Take Notes | Brainstorming + Focusing + Reducing Anxiety&lt;/a&gt;. I thought, "This is what I need. It will only take me five minutes. I can easily do this every morning." And I did. Every day since April 2022.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SEfyuXFq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1673932213353/223b1258-d728-46da-97f6-0e9f1b455cd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SEfyuXFq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1673932213353/223b1258-d728-46da-97f6-0e9f1b455cd1.png" alt="Amazon order details" width="880" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ordered it from Amazon and received it in three days. On the first day, it took me more than fifteen minutes to fill out. I wasn't sure what to put in each section. I had to think about it for a moment. The next day it took me around three minutes. I got used to it quickly.&lt;/p&gt;

&lt;p&gt;Day by day, I felt better and better. I woke up more and more with a positive frame of mind. Within three weeks, that poisonous thought had disappeared. "Not another day..." did not cross my mind ever again. I woke up thinking "Thank you for another day of life".&lt;/p&gt;

&lt;h2&gt;
  
  
  The journal is pricey. Why should I buy it?
&lt;/h2&gt;

&lt;p&gt;You could pick up a blank piece of paper and write every day, or get a cheaper notebook. What makes The Five Minute Journal worth it is its simple structure, daily quotes and weekly challenges. It has enough pages to last six months.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2VhOSxuF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674075600007/feee3f6a-bbb7-4e23-9fe6-a2d1fa24e4d0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2VhOSxuF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1674075600007/feee3f6a-bbb7-4e23-9fe6-a2d1fa24e4d0.jpeg" alt="Physical journal" width="880" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The words you tell yourself every day
&lt;/h2&gt;

&lt;p&gt;If you wake up every day and tell yourself that you hate your current situation, it's an indication that you need to change it. Repeating your actions will reinforce what you think. Your thoughts will influence your actions and what you say to yourself. It is a feedback loop.&lt;/p&gt;

&lt;p&gt;The Four Agreements by Don Miguel Ruiz is one of my favorite books on this subject. &lt;em&gt;Be Impeccable With Your Word&lt;/em&gt;, is the most relevant agreement of the four. It's about the impact that your words have on yourself and the people around you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Be Impeccable With Your Word
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Speak with integrity. Say only what you mean. Avoid using the word to speak against yourself or to gossip about others. Use the power of your word in the direction of truth and love.&lt;br&gt;&lt;br&gt;
~ The Four Agreements - Don Miguel Ruiz&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The actions you repeat every day
&lt;/h2&gt;

&lt;p&gt;I am a big proponent of daily practice in small increments. Atomic Habits by James Clear is the book that changed the way I think about habits, repetition and its compound effect on learning. In the same way, your repeated frame of mind has an impact on your mental health, every day and it compounds over months. That is why depression can be very insidious. It starts small and builds over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every action you take is a vote for the person you wish to become.&lt;br&gt;&lt;br&gt;
~ Atomic Habits - James Clear&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Filling out the journal every morning
&lt;/h2&gt;

&lt;p&gt;The Five Minute Journal allows you to start every day on a positive note. This is the structure for one day. Remember, it is so easy. It takes less than five minutes. But it has a huge impact on the rest of your day.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Morning (recommended)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Evening (optional)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  PyFiveMinuteJournal: A simple Python application inspired by the physical journal
&lt;/h2&gt;

&lt;p&gt;After 6 months, I ran out of pages in The Five Minute Journal. I decided to create a Python application inspired by it. These are some of the features.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. It gets the current date and time
&lt;/h3&gt;

&lt;p&gt;This part is simple but very important. The date is used as the generated file name.&lt;/p&gt;
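&lt;p&gt;A minimal sketch of how that filename could be derived from the current date; the &lt;code&gt;5MJ-&lt;/code&gt; prefix and per-year directory mirror the files shown later in the article.&lt;/p&gt;

```python
# Derive the journal file path from the current date. The 5MJ- prefix
# and year directory are illustrative, matching the naming scheme used
# elsewhere in the article.
from datetime import date

def journal_path(today):
    # Files are grouped in a directory by year, one file per day.
    return f'{today.year}/5MJ-{today.isoformat()}.md'

print(journal_path(date(2023, 1, 18)))  # 2023/5MJ-2023-01-18.md
```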

&lt;h3&gt;
  
  
  2. It generates a random quote
&lt;/h3&gt;

&lt;p&gt;It requests the Zen Quotes API (&lt;a href="https://zenquotes.io/api/today"&gt;https://zenquotes.io/api/today&lt;/a&gt;) to get a random quote. That quote is saved in your daily journal.&lt;/p&gt;
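&lt;p&gt;That request can be sketched with only the standard library. The response shape (a list whose entries have &lt;code&gt;q&lt;/code&gt; for the quote and &lt;code&gt;a&lt;/code&gt; for the author) follows the public Zen Quotes format, but treat it as an assumption.&lt;/p&gt;

```python
# Fetch the quote of the day from the Zen Quotes API and format it in the
# blockquote style used by the journal. The 'q'/'a' keys are based on the
# public API's documented response format.
import json
import urllib.request

def format_quote(entry):
    # Render the quote in the Markdown blockquote style shown later.
    return f"> {entry['q']}\n>\n> ~ {entry['a']}"

def quote_of_the_day():
    with urllib.request.urlopen('https://zenquotes.io/api/today') as response:
        data = json.loads(response.read())
    return format_quote(data[0])

# Offline example of the formatting:
sample = {'q': 'Dwell on the beauty of life.', 'a': 'Marcus Aurelius'}
print(format_quote(sample))
```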

&lt;h3&gt;
  
  
  3. It prompts questions for the morning
&lt;/h3&gt;

&lt;p&gt;It detects when you are filling out the journal for the first time that day, assumes it is the morning, and prompts you with the morning questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. It prompts questions for the evening
&lt;/h3&gt;

&lt;p&gt;It detects when you are filling out the journal for the second time that day, assumes it is the evening, and prompts you with the evening questions.&lt;/p&gt;
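&lt;p&gt;Both detections can be sketched with a single check: if today's journal file does not exist yet, it is the first (morning) session; otherwise it is the second (evening) session. The question lists here are illustrative.&lt;/p&gt;

```python
# Pick the prompt list based on whether today's journal file already exists.
# An assumption: one file per day, appended to on the second (evening) fill.
from pathlib import Path

MORNING_QUESTIONS = [
    'I am grateful for...',
    'What would make today great?',
    'Daily affirmations',
]
EVENING_QUESTIONS = [
    'Highlights of the day',
    'What did I learn today?',
]

def questions_for(journal_file):
    # First fill of the day gets the morning prompts, the second the evening prompts.
    if Path(journal_file).exists():
        return EVENING_QUESTIONS
    return MORNING_QUESTIONS

print(questions_for('2023/5MJ-2023-01-18.md'))
```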

&lt;h3&gt;
  
  
  5. It stores each day as a Markdown file
&lt;/h3&gt;

&lt;p&gt;The date, the quote of the day, and the questions and answers are all stored in a Markdown file each day grouped in a directory by year.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2022/5MJ-2022-12-31.md
2023/5MJ-2023-01-01.md
2023/5MJ-2023-01-02.md
2023/5MJ-2023-01-03.md
2023/5MJ-2023-01-04.md
2023/5MJ-2023-01-05.md

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
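&lt;p&gt;Writing an entry could then look like the sketch below: the year directory is created on demand and each session is appended to that day's file. The function name is illustrative.&lt;/p&gt;

```python
# Append a journal session to that day's Markdown file, creating the
# per-year directory on demand. append_entry is a hypothetical helper.
import tempfile
from pathlib import Path

def append_entry(root, day, text):
    # day is an ISO date string like '2023-01-18'; day[:4] is the year.
    path = Path(root) / day[:4] / f'5MJ-{day}.md'
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open('a') as f:
        f.write(text + '\n')
    return path

# Example usage against a temporary directory:
root = tempfile.mkdtemp()
entry = append_entry(root, '2023-01-18', '### I am grateful for...')
print(entry.name)  # 5MJ-2023-01-18.md
```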



&lt;p&gt;Here is an example of the generated Markdown file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Five Minute Journal | Wednesday, Jan 18 2023

&amp;gt; Dwell on the beauty of life. Watch the stars, and see yourself running with them.
&amp;gt; 
&amp;gt; ~ Marcus Aurelius, Meditations

---
07:55 AM

### I am grateful for...
1. My family
2. My incredible friends
3. The warm bed that I sleep in

### What would make today great?
1. Walk around the park
2. Go to the coffeeshop
3. Publish this blog article

### Daily affirmations
1. I traveled to Portugal
2. I am grateful for everything I have
3. I am a better Software Engineer than yesterday

---
09:05 PM

### Highlights of the day
1. Had breakfast with family
2. Met a friend at the coffeeshop
3. Practiced Portuguese with my language partner

### What did I learn today?
1. New Portuguese vocabulary
2. Completed an AWS hands-on lab
3. Studied for AWS Certified SysOps Admin Associate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  To Do: Journal Stats
&lt;/h2&gt;

&lt;p&gt;I might add this feature later. I would like to run a &lt;code&gt;stats.py&lt;/code&gt; command, parse the journals for a given year and get some statistics like those shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python3 stats.py 2023

-------------------------- 2023 JOURNAL STATS ------------------------

JOURNALS COMPLETED
246 of 365 days

TOP 5 WORDS MENTIONED
- Family: 178
- Friends: 109
- AWS: 83
- Portugal: 64
- Study: 52

PLACES MENTIONED
- Portugal: 64
- Spain: 33
- Puerto Rico: 11

AVERAGE SENTIMENT BY MONTH
January: 0.9 positive
February: 0.7 positive
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
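&lt;p&gt;The word counts in that mock output could come from a sketch like this one; the keyword list is illustrative.&lt;/p&gt;

```python
# Count how often each keyword is mentioned across a year's journal
# entries, in the spirit of the TOP 5 WORDS MENTIONED section above.
import re
from collections import Counter

def top_words(entries, keywords, n=5):
    counts = Counter({word: 0 for word in keywords})
    for text in entries:
        tokens = re.findall(r'[a-z]+', text.lower())
        for word in keywords:
            counts[word] += tokens.count(word.lower())
    return counts.most_common(n)

entries = ['Had breakfast with family', 'Met a friend at the coffeeshop', 'Family time']
print(top_words(entries, ['Family', 'Friend', 'AWS']))
# [('Family', 2), ('Friend', 1), ('AWS', 0)]
```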



&lt;h2&gt;
  
  
  I traveled to Portugal
&lt;/h2&gt;

&lt;p&gt;No. I haven't traveled to Portugal yet, as of the date I published this article. I write this affirmation almost every day. Most of the things I have accomplished, I have affirmed for months as if they had already happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Affirmations are timeless and powerful. You can affirm where you see yourself in the future. Or you can affirm where you are today compared to yesterday.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Be careful not to think "I have not achieved enough"
&lt;/h3&gt;

&lt;p&gt;Be careful not to turn affirmations into expectations. That can lead to feelings of disappointment. Things take time and hard work. Treat affirmations loosely as a state of mind. Let your daily actions take you a step closer to your goals.&lt;/p&gt;

&lt;h2&gt;
  
  
  I fill out the journal every day
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Consistency is key.&lt;/strong&gt; It felt great to complete the first journal by hand. I might buy it again. There is something very satisfying about opening the journal in your hands and looking back at six months of pages full of transformational gratitude, optimism and affirmations turned into reality. For now, I've been using the PyFiveMinuteJournal Python application.&lt;/p&gt;

&lt;h2&gt;
  
  
  It is NOT the be-all and end-all
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Some people pray.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some people meditate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some people go for a run.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some people fill out a journal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some people combine all these things.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Find the way that is best for you. The important thing is that you start every day on a positive note. Make your bed, no matter what happened the day before.&lt;/p&gt;

&lt;h2&gt;
  
  
  I highly recommend it
&lt;/h2&gt;

&lt;p&gt;If you are a developer, you can clone the PyFiveMinuteJournal from the GitHub repository and start journaling on your computer. If you are not into command-line applications or you prefer writing by hand, then order The Five Minute Journal from Intelligent Change. It might change your life.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Repository
&lt;/h2&gt;

&lt;p&gt;Find the repository here. Take it for a spin.&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/brignoni/py-five-minute-journal"&gt;brignoni/py-five-minute-journal&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Take care of yourself
&lt;/h2&gt;

&lt;p&gt;If this article helps just one person, then it will have been worth writing. Be your best friend. Step back from it all. Take a breath. Make time for yourself. And get back to work in a way that is in balance with your life.&lt;/p&gt;

</description>
      <category>python</category>
      <category>markdown</category>
      <category>devjournal</category>
      <category>mentalhealth</category>
    </item>
    <item>
      <title>Scrape, Convert &amp; Read on Kindle</title>
      <dc:creator>Osvaldo Brignoni</dc:creator>
      <pubDate>Sun, 15 Jan 2023 03:32:41 +0000</pubDate>
      <link>https://forem.com/brignoni/scrape-convert-read-on-kindle-11nj</link>
      <guid>https://forem.com/brignoni/scrape-convert-read-on-kindle-11nj</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A3TMVb5---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7u7v8q0gkxwz31498hg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A3TMVb5---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7u7v8q0gkxwz31498hg.jpg" alt="Scrape, Convert &amp;amp; Read on Kindle" width="880" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I created PyWebDoc2Ebook
&lt;/h2&gt;

&lt;p&gt;I enjoy reading tech documentation every day. I am also a big fan of the Kindle e-reader, simply because it helps me to read for hours without eyestrain.&lt;/p&gt;

&lt;p&gt;I've been reading a lot of the AWS documentation to study its wide range of services. Some of the documentation is available for Kindle, but the majority is not. That is why I created PyWebDoc2Ebook. It scrapes AWS documentation for a given service and converts it to an EPUB file. I can send that file to my Kindle device and read it offline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s----__rpUV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1673724176601/729f57a1-de19-42e9-9189-1138b9990767.png" alt="AWS Docs" width="880" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first version was hard-coded to work with AWS documentation. Then, I refactored it to support any documentation on the web via a plugin system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using PyWebDoc2Ebook
&lt;/h2&gt;

&lt;p&gt;Simply execute the script with the documentation URL as its argument.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python3 epub.py \ 
https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;At a high level, the Python script will take the following steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Gets the table of contents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Downloads the HTML content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Converts the HTML to Markdown.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improves the Markdown formatting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parses and downloads all the images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Converts the Markdown file to EPUB.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Each step in detail
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1. Gets the table of contents
&lt;/h3&gt;

&lt;p&gt;The script is capable of parsing the table of contents from the initial HTML response or loading a TOC JSON file. In the case of AWS, a &lt;code&gt;toc-contents.json&lt;/code&gt; file is available for the documentation of each service. You can see an example &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/toc-contents.json"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's take the Amazon Virtual Private Cloud documentation as an example. The following image shows three steps in the conversion of the table of contents. On the left is the original table of contents from the web documentation. The middle shows the console output when running the script. On the right is the resulting table of contents on the Kindle e-reader.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jEZS5iDn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1673723266125/7eb6025c-267e-401e-bed5-02d9b5a705a2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jEZS5iDn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1673723266125/7eb6025c-267e-401e-bed5-02d9b5a705a2.jpeg" alt="CLI output" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2. Downloads the HTML content
&lt;/h3&gt;

&lt;p&gt;You will see the following console output when you run the script. For each &lt;code&gt;TocItem&lt;/code&gt;, the URI is resolved to an absolute URL and the HTML is downloaded.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;AWSPlugin domain="docs.aws.amazon.com"/&amp;gt;
&amp;lt;TocItem uri="what-is-amazon-vpc.html"/&amp;gt;
&amp;lt;TocItem uri="how-it-works.html"/&amp;gt;
&amp;lt;TocItem uri="vpc-getting-started.html"/&amp;gt;
&amp;lt;TocItem uri="configure-your-vpc.html"/&amp;gt;
&amp;lt;TocItem uri="working-with-vpcs.html"/&amp;gt;
&amp;lt;TocItem uri="default-vpc.html"/&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3. Converts the HTML to Markdown
&lt;/h3&gt;

&lt;p&gt;The downloaded HTML is converted to Markdown and appended to a single file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;markdown = markdownify(html, heading_style=markdownify.ATX)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4. Improves the Markdown formatting
&lt;/h3&gt;

&lt;p&gt;The script cleans up the markdown content and removes excess whitespace. By defining a &lt;code&gt;markdown()&lt;/code&gt; function within a plugin, you can hook into the markdown pre-processing and perform additional modifications before it is converted to EPUB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def markdown(self, md) -&amp;gt; str:
    # Modify the markdown content before it is converted to EPUB.
    return md

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5. Parses and downloads all the images
&lt;/h3&gt;

&lt;p&gt;The documentation often contains diagrams, concept illustrations and code samples that should be included in the conversion. The script automatically parses all images from the content and downloads them to the &lt;code&gt;resources/&lt;/code&gt; directory. Those image files are included in the final packaging of the EPUB file and can be viewed on the e-reader, the same as in the original web documentation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Processing image: egress-only-igw.png
Processing image: connectivity-overview.png
Processing image: dhcp-custom-update-new.png
Processing image: inter-subnet-appliance-routing.png
Processing image: subnet-diagram.png
Processing image: vpc-migrate-ipv6_updated.png
Processing image: default-vpc.png
Processing image: dns-settings.png

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
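&lt;p&gt;The image parsing step can be sketched as a regular expression over the converted Markdown; actually downloading each matched URL into &lt;code&gt;resources/&lt;/code&gt; is left out here.&lt;/p&gt;

```python
# Pull image references out of the converted Markdown. A real run would
# then download each URL into the resources/ directory.
import re

# Matches Markdown image tags like ![alt text](path/to/image.png),
# capturing the URL portion.
IMAGE_PATTERN = re.compile(r'!\[[^\]]*\]\(([^)]+)\)')

def find_images(markdown):
    return IMAGE_PATTERN.findall(markdown)

md = 'Intro ![Subnet diagram](images/subnet-diagram.png) text ![](images/default-vpc.png)'
print(find_images(md))  # ['images/subnet-diagram.png', 'images/default-vpc.png']
```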



&lt;h3&gt;
  
  
  Step 6. Converts the Markdown file to EPUB
&lt;/h3&gt;

&lt;p&gt;The conversion from Markdown to EPUB is handled by the &lt;a href="https://pandoc.org/"&gt;Pandoc&lt;/a&gt; conversion tool. The following command runs automatically at the end of the conversion process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pandoc --resource-path ./resources --metadata \ 
title="What is Amazon VPC" \
author="docs.aws.amazon.com" \
language="en-us" \
-o amazon-vpc.epub amazon-vpc.md

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
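&lt;p&gt;A sketch of invoking that command from Python with &lt;code&gt;subprocess&lt;/code&gt;; passing an explicit argument list avoids shell quoting issues, and repeating &lt;code&gt;--metadata&lt;/code&gt; per key follows Pandoc's documented flag usage.&lt;/p&gt;

```python
# Build the pandoc argument list for the Markdown-to-EPUB conversion.
# pandoc_args is a hypothetical helper; the metadata values mirror the
# example command above.
import subprocess

def pandoc_args(title, author, source, output):
    return [
        'pandoc',
        '--resource-path', './resources',
        '--metadata', f'title={title}',
        '--metadata', f'author={author}',
        '--metadata', 'language=en-us',
        '-o', output,
        source,
    ]

args = pandoc_args('What is Amazon VPC', 'docs.aws.amazon.com',
                   'amazon-vpc.md', 'amazon-vpc.epub')
# subprocess.run(args, check=True)  # requires pandoc to be installed
print(args[0], args[-2], args[-1])
```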



&lt;h2&gt;
  
  
  Plugin System
&lt;/h2&gt;

&lt;p&gt;Within a plugin, you can define the patterns of the documentation that you want to scrape and convert. You can drop a custom Python plugin into the &lt;code&gt;plugins/&lt;/code&gt; directory and define a custom class that extends the base &lt;code&gt;Plugin&lt;/code&gt; class.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example: AWSPlugin
&lt;/h3&gt;

&lt;p&gt;In the code below, you can see some of the most essential properties.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;domain&lt;/strong&gt; - activates the plugin when a matching URL is requested.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;html_content_selector&lt;/strong&gt; - the DOM tag where the page content is found. This helps ignore the other page sections like navigation, sidebar and footer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;toc_filename (optional)&lt;/strong&gt; - By default, the table of contents is parsed from the original HTML response. This option allows specifying a filename that explicitly contains the TOC data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;toc_format&lt;/strong&gt; - &lt;code&gt;html&lt;/code&gt; or &lt;code&gt;json&lt;/code&gt;. Defaults to &lt;code&gt;html&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;toc(response)&lt;/strong&gt; - You can add one of two functions, &lt;code&gt;toc_html()&lt;/code&gt; or &lt;code&gt;toc_json()&lt;/code&gt;, to define how the table of contents should be parsed and registered.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the AWSPlugin the TOC is a JSON file with nested items. In this case, the &lt;code&gt;toc_json()&lt;/code&gt; function searches recursively for child items and registers them in a flattened array of &lt;code&gt;TocItem&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from Plugin import Plugin

class AWSPlugin(Plugin):

    # Activates the plugin for URLs on the AWS documentation domain
    domain = 'docs.aws.amazon.com'

    # DOM selector for the page content; everything outside it is ignored
    html_content_selector = '#main-col-body'

    # AWS publishes its table of contents as a separate JSON file
    toc_filename = 'toc-contents.json'

    toc_format = 'json'

    def toc_json(self, json):

        # Register the current node when it points to an actual page
        if 'title' in json and 'href' in json:
            self.add(json['title'], json['href'])

        # Leaf node: nothing left to descend into
        if 'contents' not in json:
            return self.items()

        # Recurse into children; the base class routes toc() to toc_json()
        for child in json['contents']:
            self.toc(child)

        return self.items()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
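&lt;p&gt;To make the &lt;code&gt;domain&lt;/code&gt; property concrete, here is a minimal sketch of how a plugin could be selected for a requested URL. The &lt;code&gt;select_plugin&lt;/code&gt; helper and the inline plugin class are hypothetical and only illustrate the matching idea, not the script's actual dispatch code:&lt;/p&gt;

```python
from urllib.parse import urlparse

class AWSPlugin:
    # Hypothetical stand-in for a plugin class exposing a domain property
    domain = 'docs.aws.amazon.com'

def select_plugin(url, plugins):
    # Activate the first plugin whose domain matches the requested URL's host
    host = urlparse(url).netloc
    for plugin in plugins:
        if host == plugin.domain:
            return plugin
    return None

plugin = select_plugin('https://docs.aws.amazon.com/ec2/latest/userguide/', [AWSPlugin])
```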



&lt;h2&gt;
  
  
  Included Plugins
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWSPlugin | docs.aws.amazon.com | stable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AzurePlugin | learn.microsoft.com | experimental&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GCPPlugin | cloud.google.com | experimental&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The challenges of web scraping
&lt;/h2&gt;

&lt;p&gt;There are many known challenges with web scraping. While developing this script, the main ones were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Some sites lazy-load their table of contents with JavaScript. I considered crawling the content with the &lt;a href="https://github.com/corywalker/selenium-crawler"&gt;Selenium Crawler&lt;/a&gt; and may implement this in the next version of the script. For now, it was simpler to work with the JSON table of contents that is already available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Different TOC HTML and JSON structures. It was challenging to develop a plugin system that would allow customized parsing in the simplest way possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extracting the relevant links and asset URLs with RegEx. The challenge was striking a balance between a standardized plugin interface and per-site precision. To remedy this, I added the ability to customize the link RegEx in each plugin.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
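&lt;p&gt;As an illustration of that last point, a customized link RegEx can narrow the crawl to just the pages a plugin cares about. The pattern below is an example, not the script's default:&lt;/p&gt;

```python
import re

# Example per-plugin override: only follow user-guide pages and skip everything else
links_regex = re.compile(r'/latest/userguide/[\w-]+\.html$')

paths = [
    '/ec2/latest/userguide/concepts.html',
    '/ec2/latest/releasenotes/notes.html',
]
# Keep only the paths that match the customized pattern
matches = [p for p in paths if links_regex.search(p)]
```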

&lt;h2&gt;
  
  
  The challenges of developing the script
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Malformed content resulting from the conversion from HTML to Markdown. These issues were corrected via RegEx replacements, and the hooks in the plugin system allow for further customized corrections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nailing the plugin system was the biggest challenge and required a few days of refactoring and testing out different documentation sites and structures. There is still a lot of room for improvement.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
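&lt;p&gt;As an example of the first point, a cleanup pass over the converted Markdown might look like this. The exact replacements in the script differ; this is a simplified illustration:&lt;/p&gt;

```python
import re

def clean_markdown(text):
    # Collapse runs of three or more newlines left behind by stripped HTML blocks
    text = re.sub(r'\n{3,}', '\n\n', text)
    # Drop trailing spaces, which some Markdown renderers treat as hard line breaks
    text = re.sub(r'[ \t]+$', '', text, flags=re.MULTILINE)
    return text
```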

&lt;h2&gt;
  
  
  Thanks to great open-source packages
&lt;/h2&gt;

&lt;p&gt;I developed this script by combining these awesome packages. It would not have been possible without them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;BeautifulSoup - &lt;a href="https://pypi.org/project/beautifulsoup4/"&gt;https://pypi.org/project/beautifulsoup4/&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Used to extract the table of contents links and to remove unnecessary content sections with DOM selectors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Markdownify - &lt;a href="https://github.com/matthewwithanm/python-markdownify"&gt;https://github.com/matthewwithanm/python-markdownify&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Used to convert all downloaded HTML into a single Markdown file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pandoc - &lt;a href="https://pandoc.org/"&gt;https://pandoc.org/&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Used to convert the Markdown file to an EPUB file and generate the EPUB table of contents based on the Markdown headings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ebook Convert by Calibre - &lt;a href="https://manual.calibre-ebook.com/generated/en/ebook-convert.html"&gt;https://manual.calibre-ebook.com/generated/en/ebook-convert.html&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Used to convert the EPUB file to other formats like MOBI.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Development Status
&lt;/h2&gt;

&lt;p&gt;The most stable plugin is the AWSPlugin. The rest of the plugins are experimental and the overall plugin system might evolve. Eventually, I plan to turn this into a proper Python package and publish it. For now, you can fork or clone it from GitHub and take it for a spin. You are welcome to send issues and pull requests there.&lt;/p&gt;

&lt;p&gt;You can find it here.&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/brignoni/py-webdoc-2-ebook"&gt;https://github.com/brignoni/py-webdoc-2-ebook&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I enjoy developing useful automation tools. For me, this was a great exercise for practicing Python object-oriented programming, designing plugin systems and using various parsing and conversion tools. These are patterns I can reuse in other projects. Overall, it was a very fun experience.&lt;/p&gt;

</description>
      <category>python</category>
      <category>scraping</category>
      <category>ebook</category>
      <category>markdown</category>
    </item>
  </channel>
</rss>
