<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Xing Wang</title>
    <description>The latest articles on Forem by Xing Wang (@xngwng).</description>
    <link>https://forem.com/xngwng</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F56811%2Fe0ad1f68-5690-409c-bb94-44ca855230f4.png</url>
      <title>Forem: Xing Wang</title>
      <link>https://forem.com/xngwng</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/xngwng"/>
    <language>en</language>
    <item>
      <title>Best Practices: Maximizing AI Revenue Growth with Customer Success</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Fri, 22 Dec 2023 22:32:48 +0000</pubDate>
      <link>https://forem.com/moesif/best-practices-maximizing-ai-revenue-growth-with-customer-success-14lg</link>
      <guid>https://forem.com/moesif/best-practices-maximizing-ai-revenue-growth-with-customer-success-14lg</guid>
      <description>&lt;p&gt;In the world of artificial intelligence, maximizing revenue growth is of great importance for businesses seeking to capitalize on their AI products. Like all SaaS tools, the defining linchpin in widespread product adoption is the end user experience. There is an intricate relationship between customer success and &lt;a href="https://www.moesif.com/solutions/metered-api-billing?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=sticky-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;how to make money&lt;/a&gt; from AI products, as the successful integration of AI products into a given market relies heavily on not only the technological capabilities of the solution but also on how effectively businesses cater to their customers.&lt;/p&gt;

&lt;p&gt;The intersection of customer success with how to make money from AI offerings is a strategic balance where technology meets user satisfaction and drives &lt;a href="https://www.moesif.com/blog/api-monetization/api-strategy/What-Is-PLG-For-AI/?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;business growth&lt;/a&gt;. As businesses introduce AI products to the market, an emphasis on customer success becomes a differentiator. Customer success practices including personalized onboarding, continuous support, and proactive engagement are not just ancillary components. They’re integral elements that ensure users derive optimal value from AI solutions, encouraging customer satisfaction and lowering churn.&lt;/p&gt;

&lt;p&gt;The commercial viability of an AI tool is not solely contingent on its technical capabilities; if that were the case, the market would be far more monopolized. Rather, success hinges on how effectively businesses can align their AI technology with the unique needs of their customers, fostering loyalty and driving sustained revenue growth.&lt;/p&gt;

&lt;h2&gt;
  
  
  Commercializing AI-Based Offerings: An Overview
&lt;/h2&gt;

&lt;p&gt;Generating revenue from AI-based offerings is a multifaceted process of turning innovative (or simply more user-friendly than the competition’s) AI technology into a marketable and profitable product. As mentioned before, a product doesn’t always have to be groundbreaking to be successful. Rather, a focus on a cohesive and enjoyable user experience can be the deciding factor in a company’s choice of machine learning solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Value Proposition
&lt;/h3&gt;

&lt;p&gt;The journey begins with identifying opportunities in a given market where AI solutions can address specific needs or challenges. This phase includes market research to understand customer demands, competition, and potential niches for AI software. Challenges arise in navigating the complexity of AI development, from the initial investment in cutting-edge technology and talent acquisition to research and development. The &lt;a href="https://www.moesif.com/features/api-governance-rules?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;commercialization process&lt;/a&gt; also entails creating a compelling value proposition, highlighting how the AI application addresses pain points and delivers unique benefits to potential users. &lt;/p&gt;

&lt;h3&gt;
  
  
  AI in Healthcare
&lt;/h3&gt;

&lt;p&gt;One example of an industry that could significantly benefit from the adoption of AI-based software is the healthcare sector. Generative AI tools and AI algorithms have the potential to revolutionize many aspects of healthcare, from diagnostic tools and personalized treatment plans to administrative processes assisted by AI model technology. In this example, a healthcare AI startup would need to build a HIPAA-compliant, AI-powered tool, likely trained on specialized, complex language models that are not easy for the AI product provider to set up. As such, AI companies would need to satisfy a need that can be monetized effectively enough to offset the costs of creating and maintaining a medical LLM, or of integrating with an existing provider like Med-PaLM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing Model
&lt;/h3&gt;

&lt;p&gt;Opportunities emerge as businesses position their generative AI solutions as efficient and capable of driving significant value for users. Once the product is developed, deciding on a pricing model becomes a business priority. Having effective marketing strategies is one thing, but pricing your AI-based product is a can of worms on its own. Because offering an AI system is expensive, the upfront and &lt;a href="https://www.moesif.com/blog/api-monetization/api-strategy/AI-Infrastracture-Costs/?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;maintenance costs&lt;/a&gt; associated with artificial intelligence and big data models can’t be overlooked and should be accounted for in the pricing structure of your AI API. By employing a usage-based pricing plan, businesses can directly offset the actual usage of their AI products. Usage-based pricing also enables organizations to tailor sales approaches to a diverse customer base, including self-service data scientist and data analyst users.&lt;/p&gt;
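&lt;p&gt;As a rough sketch of how such a usage-based plan can offset per-request AI costs, consider a toy billing calculation. The per-call and per-token rates below are purely illustrative assumptions, not drawn from any real pricing:&lt;/p&gt;

```python
# Toy usage-based bill for an AI API: charge per call plus per 1,000
# tokens processed. Rates here are illustrative assumptions only.
PRICE_PER_CALL = 0.002      # dollars per API call
PRICE_PER_1K_TOKENS = 0.06  # dollars per 1,000 tokens processed

def monthly_bill(calls, tokens):
    """Amount owed for one billing period, in dollars."""
    return calls * PRICE_PER_CALL + (tokens / 1000) * PRICE_PER_1K_TOKENS

# A customer making 50,000 calls that consume 2 million tokens:
print(monthly_bill(50_000, 2_000_000))  # 100 + 120 = 220.0
```

&lt;p&gt;Because revenue scales with the same metered quantities that drive infrastructure spend, margins stay predictable as usage grows.&lt;/p&gt;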

&lt;h3&gt;
  
  
  Customer Success
&lt;/h3&gt;

&lt;p&gt;Ensuring user-friendly interfaces, seamless integration into existing workflows, and comprehensive customer service further enhance the customer experience and, as a result, improve customer success. Throughout this journey, the iterative nature of AI development enables continuous improvements and adaptations based on user feedback and &lt;a href="https://www.moesif.com/features/api-analytics?H1=API-Analytics-for-AI-Apps&amp;amp;utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;analytics&lt;/a&gt;, presenting ongoing opportunities for refinement and feature expansion. Ultimately, successful commercialization in the AI era requires capabilities that not only meet user needs but create new opportunities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Considerations in AI Commercialization
&lt;/h2&gt;

&lt;p&gt;Building AI products comes with unique challenges and expenses that reflect the complex nature of artificial intelligence development. Substantial computational power is necessary to train and run AI models effectively, often requiring specialized hardware that can be expensive to acquire and maintain. Another challenge lies in procuring and managing large volumes of high-quality data for training AI models. Acquiring datasets for large language models can be time-consuming and costly. Additionally, ensuring data privacy and compliance with industry or governmental regulations adds another layer of complexity and expense.&lt;/p&gt;

&lt;p&gt;Beyond initial model costs, research and development can impose substantial financial strain, given the iterative nature of AI development and prompt engineering. Continuous testing, algorithm refinement, and research into the latest natural language processing advancements require ongoing investment, which is costly in both hours and money.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customer Success Best Practices for AI Revenue Growth
&lt;/h2&gt;

&lt;p&gt;The role of customer success holds pivotal importance as businesses strive to not only deliver cutting-edge technology but also ensure that customers derive maximum value from their AI investments. Customer service for generative AI tools extends beyond the initial purchase, as lasting partnerships are cultivated through personalized experiences and ongoing support. This approach can maximize customer satisfaction, retention, and, ultimately, foster long-term revenue growth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Strategies for Maximizing Customer Success
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Personalized Onboarding&lt;/strong&gt;: Tailor the onboarding process to individual customer use cases to ensure a smooth introduction to your generative AI solutions. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ongoing Support&lt;/strong&gt;: Continuous support is essential for addressing evolving customer requirements. Responsive, knowledgeable, dedicated customer success teams and intuitive, full-scope documentation reinforce customer confidence and satisfaction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proactive Engagement&lt;/strong&gt;: Anticipating customer needs and engaging proactively based on user insights can cultivate a stronger partnership. Regular check-ins, support around bottlenecks, and timely content updates or enhancements demonstrate a commitment to customer success.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Customer Success through Moesif
&lt;/h3&gt;

&lt;p&gt;Moesif can play a pivotal role in elevating customer success for any SaaS product, but particularly within the AI landscape. By providing powerful, real-time insights into how customers interact with machine learning products, Moesif empowers AI companies to understand user behavior, optimize product usage, and intervene proactively when necessary. &lt;/p&gt;

&lt;p&gt;For example, &lt;a href="https://www.moesif.com/features/api-monitoring?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;monitoring API usage&lt;/a&gt; against free trial milestones and notifying sales teams of MQLs at critical junctures allows for timely engagement and faster conversion of trial users into paying customers. By analyzing user behavior, customer support teams can offer personalized, targeted content and support to help users maximize their usage of your AI product, streamlining their workflows and reducing your churn rate. Additionally, Moesif's ability to detect changes in API usage patterns offers non-technical teams valuable signals for potential upsells, ensuring that AI startups can align their pricing and offerings with customer growth.&lt;/p&gt;
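&lt;p&gt;The trial-milestone alerting described above boils down to a simple threshold rule. A minimal sketch follows; the milestone value and field names are hypothetical, not part of Moesif's API:&lt;/p&gt;

```python
# Hypothetical MQL rule: flag a trial account for sales outreach once
# its metered API usage crosses a milestone while the trial is active.
from datetime import date

TRIAL_CALL_MILESTONE = 10_000  # illustrative usage threshold

def is_mql(trial_end, calls_during_trial, today):
    """True when the account hits the milestone before the trial ends."""
    return trial_end >= today and calls_during_trial >= TRIAL_CALL_MILESTONE

print(is_mql(date(2024, 1, 31), 12_500, date(2024, 1, 20)))  # True
```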

&lt;p&gt;Moesif's ability to manage customer success metrics extends beyond mere &lt;a href="https://www.moesif.com/features/api-analytics?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;analytics&lt;/a&gt;; it can be a strategic enabler for AI-based organizations to nurture strong, value-driven relationships with their customers. Enhance your overall customer experience and position your AI products for &lt;a href="https://www.moesif.com/solutions/api-product-management?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=Maximizing-AI-Revenue-Growth"&gt;sustained success&lt;/a&gt; and revenue growth in an ever-evolving landscape with Moesif.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Implementing Volume-Based Pricing</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Fri, 15 Dec 2023 23:55:38 +0000</pubDate>
      <link>https://forem.com/moesif/implementing-volume-based-pricing-75</link>
      <guid>https://forem.com/moesif/implementing-volume-based-pricing-75</guid>
      <description>&lt;p&gt;When monetizing APIs, a popular approach is volume-based pricing. Of all the monetization models you can apply to your APIs, volume-based pricing is one of the easiest to implement. This blog will cover the basics of applying volume-based pricing to your APIs so your customers can be billed accordingly. Let’s start by looking at the finer details of volume-based pricing regarding monetized APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is volume-based pricing?
&lt;/h2&gt;

&lt;p&gt;Volume-based pricing is a billing strategy commonly used to monetize APIs. This approach adjusts the cost based on the volume of usage. Typically, usage is measured by the number of API calls or data transferred. This pricing model is designed to cater to diverse user bases, from startups to large enterprises, by offering flexible pricing that aligns with their usage levels. With volume-based pricing, the unit price traditionally goes down as usage increases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Implement Volume-Based Pricing?
&lt;/h3&gt;

&lt;p&gt;As mentioned, in most volume-based pricing models for APIs, the price per API call typically decreases as the volume of usage increases. This pricing structure incentivizes higher usage while making the service more cost-effective for large-volume users. Here are a few facets and benefits of volume-based pricing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tiered Pricing Structure
&lt;/h4&gt;

&lt;p&gt;API providers often use a tiered pricing model. Each tier has a set range of API calls, and the cost per call decreases as you move to higher tiers. For example, the first 1,000 calls might cost 10 cents each, but calls 1,001 to 10,000 might cost only 8 cents each, and so on.&lt;/p&gt;
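&lt;p&gt;This example can be computed mechanically. The sketch below prices each call at the rate of the range it falls into, using the 10-cent and 8-cent tiers above plus a hypothetical 5-cent tier beyond 10,000 calls:&lt;/p&gt;

```python
# Tier boundaries and per-call prices from the example; the final
# 5-cent tier beyond 10,000 calls is a hypothetical addition.
TIERS = [
    (1_000, 0.10),         # calls 1-1,000 at 10 cents each
    (10_000, 0.08),        # calls 1,001-10,000 at 8 cents each
    (float("inf"), 0.05),  # calls beyond 10,000 at 5 cents each
]

def tiered_cost(calls):
    """Total cost when each range of calls is billed at its own rate."""
    total, prev_bound = 0.0, 0
    for upper, price in TIERS:
        in_tier = min(calls, upper) - prev_bound
        if in_tier > 0:
            total += in_tier * price
            prev_bound = upper
        else:
            break
    return total

# 5,000 calls: 1,000 x $0.10 + 4,000 x $0.08 = $420.00
print(tiered_cost(5_000))
```

&lt;p&gt;Note that billing providers often call this per-range scheme "graduated" pricing and reserve "volume" for billing every unit at the single rate of the tier the total lands in, so check which variant your provider means before wiring up tiers.&lt;/p&gt;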

&lt;h4&gt;
  
  
  Encouraging Higher Usage
&lt;/h4&gt;

&lt;p&gt;Volume-based pricing encourages users to increase API usage since the unit cost becomes more economical at higher volumes. This can be particularly appealing for growing businesses anticipating increased API usage.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cost Predictability
&lt;/h4&gt;

&lt;p&gt;While the per-call cost decreases, customers can also predict their expenses more accurately because they understand how the unit cost falls as they scale up. This makes cost a straightforward, piecewise-linear calculation that is easy to forecast.&lt;/p&gt;

&lt;h4&gt;
  
  
  Balancing Accessibility and Revenue
&lt;/h4&gt;

&lt;p&gt;For API providers, this model helps balance making their services accessible to smaller users (who might be sensitive to high costs at low volumes) while still generating significant revenue from larger customers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Custom Agreements for Very High Volumes
&lt;/h4&gt;

&lt;p&gt;Some API providers might negotiate custom pricing agreements for high-volume users, which could deviate from the standard tiered model to better suit large clients' specific needs. These agreements can still keep the essence of volume-based pricing but on a more customizable level.&lt;/p&gt;

&lt;p&gt;Substantial benefits can be derived from a relatively simple implementation of volume-based pricing. It’s easy for customers to understand and a great revenue driver at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Moesif?
&lt;/h2&gt;

&lt;p&gt;Regarding API monetization and implementing volume-based pricing, Moesif is a go-to solution for making implementation easy. At its core, Moesif is an advanced API analytics and monetization platform. Within the platform, users can uncover critical insights into how their APIs are being used and efficiently monetize their existing APIs. It's designed to help companies optimize, troubleshoot, and secure their API infrastructures, with a strong emphasis on aiding in effective API monetization strategies. Let’s look at some functionality in greater detail:&lt;/p&gt;

&lt;h3&gt;
  
  
  API Usage Analytics
&lt;/h3&gt;

&lt;p&gt;Moesif allows businesses to track and analyze how their APIs are used. This includes detailed data on API calls, error rates, endpoint performance, and user behavior patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  API Monetization Capabilities
&lt;/h3&gt;

&lt;p&gt;A standout feature of Moesif is its robust support for API monetization, allowing users to track and charge users for API usage. API monetization with Moesif enables businesses to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Track Billing Metrics&lt;/strong&gt;: Understand and monitor API usage in the context of billing and revenue generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identify Revenue Opportunities&lt;/strong&gt;: Discover which features or endpoints are most popular and generate the most revenue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement Volume-Based Pricing&lt;/strong&gt;: Easily track and manage usage-based billing, such as volume-based pricing models.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Customer Insights
&lt;/h3&gt;

&lt;p&gt;Moesif’s analytics capabilities provide deep insights into customer usage patterns, helping businesses understand which customers are the most active or potentially at risk of churning. This also helps companies determine which APIs they should charge for and how much they can charge for usage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-Time Monitoring and Alerts
&lt;/h3&gt;

&lt;p&gt;The platform offers real-time monitoring of API performance, alerting businesses to issues as they arise, such as an increase in latency, enabling quick response and resolution.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Moesif’s Product Catalog?
&lt;/h2&gt;

&lt;p&gt;Moesif's Product Catalog is a comprehensive tool designed to manage billing plans and prices for APIs. It serves as a centralized platform where business owners can create, manage, view, and archive various billing plans and prices, streamlining the product creation process within Moesif. This integration ensures API product compatibility with Moesif’s API metering and billing solution. The key benefits of using Moesif's Product Catalog include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Ease of Use&lt;/strong&gt;: Business owners can create plans and prices directly within the Moesif UI, eliminating the need to access multiple systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Billing Systems&lt;/strong&gt;: Moesif integrates with your billing system, which remains the system of record. Any plan or price created in Moesif will also appear in your billing provider, ensuring seamless accounting and financial management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support for Multiple Billing Systems&lt;/strong&gt;: The Product Catalog is designed to work with multiple billing systems simultaneously, such as Zuora for enterprise customers and Stripe for self-service options.&lt;/li&gt;
&lt;/ol&gt;
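&lt;p&gt;Conceptually, a catalog entry is just a plan that owns one or more prices, with each record carrying the ID assigned by the billing system of record. A minimal sketch of those shapes (field names are illustrative, not Moesif's schema):&lt;/p&gt;

```python
# Illustrative data shapes for a product catalog: a plan owns prices,
# and each record mirrors the ID assigned by the billing provider
# (the system of record) so the two stay in sync.
from dataclasses import dataclass, field

@dataclass
class Price:
    name: str
    pricing_model: str      # e.g. "volume"
    provider_price_id: str  # ID mirrored from Stripe, Zuora, etc.

@dataclass
class Plan:
    name: str
    billing_provider: str   # e.g. "stripe"
    prices: list = field(default_factory=list)

plan = Plan("My API Plan", "stripe")
plan.prices.append(Price("Volume Pricing", "volume", "price_123"))
print(plan.name, plan.prices[0].pricing_model)
```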

&lt;h2&gt;
  
  
  How to implement volume-based API pricing
&lt;/h2&gt;

&lt;p&gt;To implement volume-based pricing, we will use Stripe and Moesif. With Moesif’s Product Catalog and Billing Meter functionality, we can do all this right within the Moesif platform!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: You need to enable and configure the Stripe plugin in Moesif for the steps below to work. The simple steps to set that up are available &lt;a href="https://www.moesif.com/docs/metered-billing/integrate-with-stripe/"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Create the Plan
&lt;/h3&gt;

&lt;p&gt;After logging into Moesif, navigate to the &lt;strong&gt;Product Catalog&lt;/strong&gt; by clicking the corresponding menu item in the left-side navigation. On the &lt;strong&gt;Plans&lt;/strong&gt; page, click the &lt;strong&gt;Create New&lt;/strong&gt; button in the top-right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xAMnB8Jj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oerurz3udzz0kyzr8zl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xAMnB8Jj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oerurz3udzz0kyzr8zl5.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;Create Plan&lt;/strong&gt; screen, you’ll fill out the &lt;strong&gt;plan name&lt;/strong&gt; and select the billing provider. In this case, we will select &lt;strong&gt;Stripe&lt;/strong&gt;. Once done, click on the &lt;strong&gt;Create&lt;/strong&gt; button in the top-right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oCwK0FOH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tpkjmos8ykfjjuu0qzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oCwK0FOH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tpkjmos8ykfjjuu0qzk.png" alt="Image description" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Price
&lt;/h3&gt;

&lt;p&gt;Next, we will create the &lt;strong&gt;Price&lt;/strong&gt; in Moesif by clicking on the &lt;strong&gt;Price&lt;/strong&gt; menu item under the &lt;strong&gt;Product Catalog&lt;/strong&gt; on the left-side menu. On the &lt;strong&gt;Price&lt;/strong&gt; screen, click the &lt;strong&gt;Create New&lt;/strong&gt; button in the top-right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h6mTyWDr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90jhbdhzayrq1mtijxvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h6mTyWDr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90jhbdhzayrq1mtijxvf.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the next screen, you’ll add the &lt;strong&gt;price name&lt;/strong&gt; and select the &lt;strong&gt;Linked Plan&lt;/strong&gt; from the dropdown. In this case, we will select “My API Plan,” the plan we made in the previous step above.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Pricing&lt;/strong&gt;, we will select the &lt;strong&gt;Pricing Model&lt;/strong&gt; as “Volume” and leave the &lt;strong&gt;Meter Usage As&lt;/strong&gt; settings at their defaults (unless you want to change them for your specific use case). You can add each of your tiers under the &lt;strong&gt;Price Structure&lt;/strong&gt; section. If you need another tier, click the &lt;strong&gt;Add Tier&lt;/strong&gt; button at the bottom. You’ll see we’ve added four tiers in the example below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YXBWgiFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xs8769gm9i58e84t0lbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YXBWgiFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xs8769gm9i58e84t0lbn.png" alt="Image description" width="800" height="703"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Billing Meter
&lt;/h3&gt;

&lt;p&gt;Now that the plan and price have been created, we will create the Billing Meter to record the usage and send it to the correct plan in Stripe. To do that, we will click on the &lt;strong&gt;Billing Meters&lt;/strong&gt; menu item in the left-side menu. On the &lt;strong&gt;Billing Meters&lt;/strong&gt; screen, click the &lt;strong&gt;Add Billing Meter&lt;/strong&gt; button in the top-right of the screen.&lt;/p&gt;

&lt;p&gt;This will bring you to the &lt;strong&gt;Create Billing Meter&lt;/strong&gt; screen. Here, you will give the meter a &lt;strong&gt;name&lt;/strong&gt;, link it to our plan and price under the &lt;strong&gt;Link to&lt;/strong&gt; section, and create our &lt;strong&gt;filter&lt;/strong&gt; and &lt;strong&gt;metrics&lt;/strong&gt;. Below is an example of a meter for a specific API endpoint (“/test-service”) that will count each API call and attribute it to the “Volume Pricing” price that we created earlier. Once you’ve created your meter, click &lt;strong&gt;Create&lt;/strong&gt; in the top-right to create and activate the meter.&lt;/p&gt;
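&lt;p&gt;Under the hood, a meter like this does three things: filter incoming events, count the matches per customer, and attribute the count to the linked price. A rough sketch of that logic (the event shape is hypothetical, not Moesif's actual event format):&lt;/p&gt;

```python
# Sketch of a billing meter: filter raw API events by route, count
# matching calls per customer, and report totals against the linked
# price so the billing provider can invoice them.
from collections import Counter

def meter_usage(events, route="/test-service", price="Volume Pricing"):
    """Per-customer call counts for one route, attributed to a price."""
    counts = Counter(e["customer_id"] for e in events if e["route"] == route)
    return {cust: {"price": price, "quantity": n} for cust, n in counts.items()}

events = [
    {"customer_id": "acme", "route": "/test-service"},
    {"customer_id": "acme", "route": "/test-service"},
    {"customer_id": "acme", "route": "/health"},  # filtered out by the meter
    {"customer_id": "globex", "route": "/test-service"},
]
print(meter_usage(events))
```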

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r_3M_DYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ee5y4iqn6yedtz52raa0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r_3M_DYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ee5y4iqn6yedtz52raa0.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Profit!
&lt;/h3&gt;

&lt;p&gt;With this in place, API usage will be billed based on your volume-based pricing model. This example has shown you how easy it is to create a volume-based pricing model and track usage towards it with Moesif and Stripe. Of course, this could also be done with other billing providers, including Chargebee, Zuora, and many others.&lt;/p&gt;

</description>
      <category>api</category>
      <category>monetization</category>
    </item>
    <item>
      <title>Self-Serve and Sales-Led API Monetization — Unlocking Product Led Growth</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Fri, 08 Dec 2023 23:39:51 +0000</pubDate>
      <link>https://forem.com/moesif/self-serve-and-sales-led-api-monetization-unlocking-product-led-growth-339o</link>
      <guid>https://forem.com/moesif/self-serve-and-sales-led-api-monetization-unlocking-product-led-growth-339o</guid>
      <description>&lt;p&gt;There are as many monetization pitfalls as there are methods of monetizing APIs. As the market has shifted over the years, selling APIs and creating product userbases has become more important than ever. Today, we’re going to discuss a strategy that can lead to explosive growth – let’s talk self-service and product led growth.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shifting Product Landscape
&lt;/h2&gt;

&lt;p&gt;The reality is that selling APIs is more difficult than ever before. The modern landscape has shifted dramatically over the years, moving both the consumption model and the value proposition for most adopters of API platforms. A monolithic API provider with structured UIs and full-service integration plans used to be the gold standard, delivering the value proposition of a full-service partnership. As the industry and community have shifted towards automated microservices and product led growth, which deliver value through specific and targeted use cases, the appeal of a traditional integration approach has diminished substantially for the average API consumer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Decline of Sales-Led Growth
&lt;/h2&gt;

&lt;p&gt;What this has ultimately resulted in is a shift of power between the provider of an API product and the consumer. When integrating a service requires bringing a team on premises to install proprietary systems and launch custom server clusters, the power &lt;strong&gt;and cost&lt;/strong&gt; of this API management overwhelmingly lands in the camp of the software provider. As this has shifted, the calculus has changed substantially, with the adopter becoming the powerhouse authority.&lt;/p&gt;

&lt;p&gt;This has undermined the traditional pathway of sales-led API monetization and, subsequently, product growth. Sales-led is conceptually based on the idea that you have something someone else wants, and as such, you should set the price and API monetization strategy as close to the upper limit as possible to extract value and revenue. The problem with this growth strategy is that it assumes the product has a clear value proposition and is an outstanding choice. Today, the consumer has never had so much choice and flexibility, so the idea that a product can extract value through this pathway is not really true in most industries anymore.&lt;/p&gt;

&lt;p&gt;All of this adds up to a simple truth – adopters are more empowered than ever, and they need a solution that meets their specific needs with as little friction in adoption as possible. Each hiccup along the way reduces the likelihood of success and long-term retention of API users. When the authority was in the product, things could afford to be complex or high-friction – after all, the product was what the product was. In today's API marketplace, the product has to be fast to deliver and easy to integrate, with as few initial blockers as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Self-Serve Model
&lt;/h2&gt;

&lt;p&gt;Enter the self-serve model. The basic idea of the self-serve model is to lower the friction of onboarding as much as possible by allowing the adopter to decide on their API consumption and API usage: what they need, how they need it, and how they will pay for it. This can take several forms, but the core concept remains the same – low friction leads to high adoption. Self-serve delivers some major benefits – let’s take a look at a few of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced Onboarding Friction
&lt;/h3&gt;

&lt;p&gt;Self-service delivers reduced initial friction through a variety of mechanisms. Click-through terms of service can reduce initial legal paperwork and hurdles for an API business and the API developers using a given service alike. Automated integration with an API gateway through a simple step-by-step process can reduce the time to market for integration of the product. Simple payment options can reduce the need for client approvals and expenses. Even the choice between pre-pay and post-pay agreements can be offered far more simply and effectively through a self-serve business model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced User Experience
&lt;/h3&gt;

&lt;p&gt;The self-serve API monetization model is all about prioritizing the experience of the user, and as such, it delivers the best user experience for both adopters and developers. Self-serve delivers ease and autonomy – not only is the product easy to use, the adopter has the autonomy to choose how they use it. This autonomy can result in long-term increases in adoption and retention, but even more importantly, it results in a product that is easy to use and easy to advocate for in a competitive API ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability and Extensibility
&lt;/h3&gt;

&lt;p&gt;The self-serve model boosts scalability and extensibility by allowing adopters to choose exactly which services they need, when they need them, increasing revenue growth without losing direct monetization upsell opportunities. Standard systems of old might require a full rollout before any benefit can be gained, but with a self-serve approach, this barrier is lowered substantially. If a user only needs a low tier of functionality now but is likely to use a higher tier later, they will prioritize a solution that lets them increase their services at will over a system that requires manual deployment or support. With tiered options, a flexible usage-based pricing model is attractive to users who may not yet know their true product usage requirements: they are granted API access immediately, while the SaaS company still ensures API revenue generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Effectiveness
&lt;/h3&gt;

&lt;p&gt;Self-serve systems are cost effective for all parties involved. Because you can tie business logic to the services being rendered at the developer level, costs can be controlled by scaling only to the demand your clients have placed on your self-serve product. For the adoptee, costs can be controlled by using only what API call volume (or whatever monetizable metric you've chosen) is needed at the moment and scaling over time. When an organization decides to open APIs up to the public, they not only join the API economy, but diversify their revenue recognition. This increased cost effectiveness positions your API to be a powerhouse option, as flexibility is almost always going to be valued quite highly on a ranked list of attributes for services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accelerating the Flywheel - How Self-Serve Feeds Product Led Growth
&lt;/h2&gt;

&lt;p&gt;Self-serve models unlock a massive potential benefit in the form of product led growth acceleration. Product led growth is a simple concept – the product should sit at the center of your growth, driving user acquisition, conversion, and retention. A self-serve model is the first big step towards accelerating this process.&lt;/p&gt;

&lt;p&gt;Consider what a typical self-serve model looks like in practical terms. First, your user onboards and decides to check out the product. From that moment, everything rides on the product – all the user needs to experience is their “Aha” moment, the moment where they realize the product is a good fit for their use case. Flexible adoption through a self-service API business model increases the chances of this happening, and a good product should provide ample opportunity for multiple “Aha” moments.&lt;/p&gt;

&lt;p&gt;Once the user has had that moment, however, the self-service model kicks the product led growth model into high gear. As the adoptee accelerates their use, the sales team should engage their efforts, identifying new use cases and leading the charge on new product iteration. As the adoptee becomes an evangelist for the product, pushing for adoption and further iteration, the product itself becomes better and more attractive, bringing additional users into the fray while accelerating the use of the existing client base.&lt;/p&gt;

&lt;p&gt;This strategy has proven itself time and time again for exceptional growth – and all it takes is the creation of a strong product and a low-friction self-serve approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Evangelists
&lt;/h3&gt;

&lt;p&gt;One big benefit of this model is the creation of product evangelists. A product evangelist is a dream come true for many developers – an adoptee who falls in love with the product to such an extent that they evangelize its use and accelerate user awareness and acquisition. In order to effectively create product evangelists, you have to think like a product evangelist – so what specifically does this type of developer want?&lt;/p&gt;

&lt;p&gt;Product evangelists need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a product they trust,&lt;/li&gt;
&lt;li&gt;a product they can share widely, and&lt;/li&gt;
&lt;li&gt;a product others can validate their experience with.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Effective self-serve models and product led growth strategies should meet these demands. A product that is low-friction and transparent in its pricing and functionality is a product that can be trusted – this can be bolstered even further through adequate and effective documentation. A product that is shareable is one that has clarity as to the value proposition – a product should have clear benefits and should document these benefits through blog posts, analytics, customer case studies, white papers, etc. Finally, a product that can validate experience is one where a new user can be influenced into adoption and immediately see the value in the product – in other words, a validating product is a low-friction product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Ultimately, a self-service model is an incredibly powerful way to enable product led growth in API monetization, and should be considered within the full context of the form and functionality of the API. With proper attention, developers can create an incredibly powerful product led growth flywheel that is powered by a self-serve model which compounds evangelists into new evangelists day by day – and all it takes is a great product and the right intent.&lt;/p&gt;

</description>
      <category>api</category>
      <category>productivity</category>
      <category>monetization</category>
      <category>apimonetization</category>
    </item>
    <item>
      <title>Keeping AI Infrastructure Costs Down with API Governance</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Fri, 01 Dec 2023 23:08:53 +0000</pubDate>
      <link>https://forem.com/moesif/keeping-ai-infrastructure-costs-down-with-api-governance-d8g</link>
      <guid>https://forem.com/moesif/keeping-ai-infrastructure-costs-down-with-api-governance-d8g</guid>
      <description>&lt;p&gt;The growing importance of AI in business is undeniable, with more than &lt;a href="https://www.forbes.com/advisor/business/software/ai-in-business/"&gt;50% of businesses&lt;/a&gt; employing artificial intelligence for security and combating fraud. Additionally, beyond the practical applications for businesses externally, AI can be used internally to deliver better customer experiences through competitive tools and features. As the role of AI within an API business’ operations expands, so do the associated AI infrastructure costs. These expenses can quickly become a significant financial burden if left unchecked. Like all outward facing API-based tools, the key to success is API governance. &lt;/p&gt;

&lt;p&gt;That's where API governance steps in as a way of managing infrastructure costs and avoiding financial setbacks of monumental proportions. API governance allows an organization to regulate and optimize how AI resources and services are accessed and utilized, ensuring that businesses can offer an AI solution and features without breaking the bank. Governance serves as a strategic framework for controlling expenses in an effort to maintain the quality and reliability of offered AI implementation within services and solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding AI Infrastructure Cost
&lt;/h2&gt;

&lt;p&gt;AI infrastructure costs are a significant consideration for businesses considering offering artificial intelligence solutions. These costs typically stem from three primary factors: data storage and processing, model training and deployment, and infrastructure management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Storage and Processing
&lt;/h3&gt;

&lt;p&gt;Data management can be a substantial financial burden, as AI models often require vast datasets, which in turn demand efficient storage solutions and powerful processing capabilities. This can be a large cost investment for businesses, as non-AI based solutions pivoting towards AI likely will not inherently have the capability to manage the volume of data required for an AI project. &lt;/p&gt;

&lt;h3&gt;
  
  
  Model Training and Deployment
&lt;/h3&gt;

&lt;p&gt;Model training and deployment costs quickly add up, as they involve the computational power required to develop and deploy an AI model (or, more likely, multiple generative AI models). This process can strain a company's finances, especially if frequent model updates are needed due to data drift, bug fixes or optimizations, or even changes in regulations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Management
&lt;/h3&gt;

&lt;p&gt;The need for infrastructure management adds to overall expense, as businesses must ensure that their chosen AI system(s) can handle an increase in queries. This requires optimizing resource allocation and maintaining the infrastructure that supports AI products, both of which can be costly to manage.&lt;/p&gt;

&lt;p&gt;In the realm of AI technology, cost optimization is not just a financial concern; it's a strategic imperative. As a business increasingly relies on AI features and products to drive growth, cost efficiency becomes a major business goal. Without a proper AI cost optimization strategy, AI development can quickly become an unsustainable financial endeavor that can cripple an organization’s growth efforts.&lt;/p&gt;

&lt;p&gt;To maintain profitability, organizations must continually assess and streamline their AI infrastructure costs through budget allocation, efficient resource utilization, and a concerted focus on eliminating unnecessary expenditures. Cost optimization not only ensures that an AI application remains financially viable but also allows businesses to direct their resources towards new features/products, research, and other strategic initiatives, ultimately maximizing the value AI can bring to the organization. By understanding that computing power will add to your overall artificial intelligence cost, you can optimize your internal business processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is API Governance?
&lt;/h2&gt;

&lt;p&gt;API governance refers to a structured framework or set of practices that dictate how APIs are deployed and managed within an organization. Governance consists of policies, procedures, and standards that “govern” the use of APIs (internally and externally) to ensure consistency, security, and compliance around more than just AI initiatives. API governance allows businesses to regulate how their software components, data, and services interact with one another or with external integrations, providing a roadmap for API development and usage.&lt;/p&gt;

&lt;p&gt;API governance is of particular value in the realm of artificial intelligence. In an AI product, where data and models are often shared across applications and platforms, having a well-defined API governance strategy should be the cornerstone of any AI API product plan. Governance ensures that AI resources are used effectively and responsibly by employees internally and paying customers externally, enabling a financially-motivated approach to API development, deployment, and maintenance. By setting clear guidelines for AI APIs, businesses can foster interoperability, data security, and compliance with industry standards.&lt;/p&gt;

&lt;p&gt;Because the relationship between AI and APIs is symbiotic, robust API governance can help businesses strike a balance between offering an API product with AI capability, and controlling access to AI assets. It not only streamlines the integration of AI capabilities into external applications but also ensures that AI services are secure and align with the broader goals of your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Benefits of API Governance for Cost Control
&lt;/h2&gt;

&lt;p&gt;API governance plays a major role in controlling the costs associated with AI infrastructure by providing several essential benefits through analytics insight. &lt;/p&gt;

&lt;p&gt;Governance frameworks ensure efficient resource allocation. AI models often require significant computational resources. With proper governance, organizations can allocate these resources optimally, preventing over-provisioning or underutilization and eliminating wasteful spending on unnecessary infrastructure.&lt;/p&gt;

&lt;p&gt;API governance also enables monitoring and management of API usage by customers. By closely tracking how APIs are utilized, businesses can identify &lt;a href="https://www.moesif.com/blog/engineering/api-monitoring/The-Difference-Between-Synthetic-API-Monitoring-and-API-Real-User-Monitoring/?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=AI-infrastructure-costs"&gt;usage patterns&lt;/a&gt;, bottlenecks, and potential areas for optimization. This real-time insight into API usage helps organizations make data-driven decisions, ensuring that they are investing in the right places to achieve their data science objectives effectively.&lt;/p&gt;
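As an illustration, the kind of analysis such usage monitoring enables can be sketched in a few lines. The log records, consumer names, and endpoints below are hypothetical stand-ins for whatever your gateway or analytics tool actually emits:

```python
from collections import Counter

# Hypothetical API access log: (consumer, endpoint, latency in ms).
access_log = [
    ("acme", "/v1/completions", 820),
    ("acme", "/v1/completions", 990),
    ("globex", "/v1/embeddings", 45),
    ("acme", "/v1/embeddings", 50),
    ("globex", "/v1/completions", 1200),
]

# Who is driving load?
calls_per_consumer = Counter(consumer for consumer, _, _ in access_log)

# Which endpoint is the likely bottleneck?
latencies: dict[str, list[int]] = {}
for _, endpoint, ms in access_log:
    latencies.setdefault(endpoint, []).append(ms)
avg_latency = {ep: sum(ms) / len(ms) for ep, ms in latencies.items()}

print(calls_per_consumer.most_common(1))      # heaviest consumer
print(max(avg_latency, key=avg_latency.get))  # slowest endpoint on average
```

Even this toy aggregation surfaces the two questions governance teams ask first: who is consuming the expensive AI resources, and where the latency (and therefore compute spend) concentrates.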

&lt;p&gt;Furthermore, API governance serves as a guard against unauthorized access and misuse. It establishes access controls, authentication mechanisms, and security measures that protect sensitive AI assets from unauthorized users and potential data breaches. &lt;/p&gt;

&lt;p&gt;API governance can also enhance AI product management. Facilitated version control and documentation ensure that AI models remain up-to-date, reliable, and cost-efficient over time. Effective lifecycle management and user analytics prevent the accumulation of obsolete models that drain resources without delivering value, and reduce the risk of costly errors resulting from poorly managed model updates. &lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Implementing API Governance
&lt;/h2&gt;

&lt;p&gt;Implementing API governance is the &lt;a href="https://www.moesif.com/blog/api-monetization/api-strategy/How-to-Enforce-API-Usage-Policies/?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=AI-infrastructure-costs"&gt;key to maintaining control&lt;/a&gt;, security, and efficiency for any API product, but particularly for AI operations. Governance policies should define who has access to AI resources, what they can do with them, and under what circumstances. Some guiding principles of API governance: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Establish Clear API Usage Policies&lt;/strong&gt;: Create transparent policies that define who can access AI resources, what they can do with them, and under what conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Utilize Rate Limiting and Throttling&lt;/strong&gt;: Set limits on API usage to prevent resource overconsumption and employ throttling to maintain consistent service quality internally and externally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication and Access Controls&lt;/strong&gt;: Implement strong authentication mechanisms and access controls to protect AI data and resources from unauthorized access and misuse with API keys, OAuth2 tokens, etc. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Reporting Regularly&lt;/strong&gt;: Continuously monitor API usage, performance, and security, and generate reports to detect anomalies, spot trends, and address issues promptly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Versioning and Deprecation Strategies&lt;/strong&gt;: Develop strategies to manage API versions effectively, ensuring the &lt;a href="https://www.moesif.com/blog/api-product-management/deprecation/How-to-Properly-Deprecate-an-API-Using-Moesif?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=AI-infrastructure-costs"&gt;orderly transition&lt;/a&gt; from older, potentially inefficient APIs to newer, optimized ones while maintaining compatibility.&lt;/li&gt;
&lt;/ul&gt;
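Of these practices, rate limiting is the most straightforward to prototype. Below is a minimal token-bucket sketch with one bucket per API key; the capacity, refill rate, and key name are illustrative assumptions, not defaults of any particular gateway:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` calls, refilled at `rate` tokens per second."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key, so a heavy consumer is throttled independently.
buckets = {"key-abc": TokenBucket(capacity=5, rate=1.0)}

results = [buckets["key-abc"].allow() for _ in range(7)]
print(results)
```

A rapid burst of seven calls passes the first five and rejects the rest; throttled callers would typically receive an HTTP 429 and retry after the refill interval.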

&lt;h2&gt;
  
  
  Challenges and Pitfalls
&lt;/h2&gt;

&lt;p&gt;API governance, while vital for the efficient management of AI infrastructure, is not without its challenges and potential pitfalls. Two prominent issues organizations face are data security and compliance. Beyond the delicate equilibrium of cost control and performance optimization, data security and compliance concerns are hugely important: proper API governance must prioritize data protection, privacy, and compliance with regulatory standards like GDPR or HIPAA. Striking the right balance between enabling access for AI-driven tasks and safeguarding data integrity is a challenge that demands extensive planning.&lt;/p&gt;

&lt;p&gt;Additionally, a balance between cost control and performance can be challenging to achieve. Organizations need to allocate resources optimally and strategically to ensure AI systems operate without overspending or underdelivering. However, an overly cost-centric approach can compromise performance and severely impact user experience. Treading carefully is essential to maximize the value of an AI infrastructure while maintaining financial sustainability for your organization at large. To navigate these challenges, companies must adopt an approach to API governance that integrates security, compliance, cost-efficiency, and performance optimization.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>The Secret to Building an Effective Customer Success Dashboard</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Fri, 10 Nov 2023 22:16:07 +0000</pubDate>
      <link>https://forem.com/moesif/the-secret-to-building-an-effective-customer-success-dashboard-3fii</link>
      <guid>https://forem.com/moesif/the-secret-to-building-an-effective-customer-success-dashboard-3fii</guid>
      <description>&lt;p&gt;The secret to customer success is making your customers happy and supporting them to succeed. A simple concept – until you try to implement it. You’ll need a great product and a great customer service team, for starters. You’ll also need an in-depth understanding of the customer experience, including their pain points and how you can help solve them.&lt;/p&gt;

&lt;p&gt;This is where things get trickier, but the answer could already be tucked away in your data. All you need is an effective customer success dashboard to automate the identification of issues and then support you to solve them – as the dashboard example below demonstrates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customer Success KPI Dashboard Example
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;“A couple of years ago, when we encountered errors from sub-optimal client API practices, we might share a permalink with a team member so they could look at logs. That worked reasonably well, but you’re still sifting through logs. Now, events that belong together can be grouped and easily shared in a dashboard. Issues can then be proactively addressed, resulting in better customer experiences.” – Conrad Caplin, Co-Founder, Pronto CX.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What is a Customer Success Dashboard?
&lt;/h3&gt;

&lt;p&gt;When it comes to understanding and enabling customer success, implementing an effective customer success dashboard that enables swift key performance indicator (KPI) analysis should be at the heart of your approach.&lt;/p&gt;

&lt;p&gt;A customer success dashboard is a visual tool that helps you to track everything from how your product is being used to your retention rate, churn rate, monthly revenue and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why are Customer Success Dashboards Important?
&lt;/h3&gt;

&lt;p&gt;A good customer success KPI dashboard can make a real difference to customer satisfaction. It enables an organization to implement both reactive and proactive measures to drive up its customer success levels.&lt;/p&gt;

&lt;p&gt;Retaining users starts with understanding the steps at which they get stuck when using your product and why. Historically, organizations have employed a reactive strategy to API problems, giving their customers time to grow impatient and putting contracts at risk of cancellation. With a customer-success-driven strategy, you can automate the monitoring and display of API usage, gain customer insights, and identify situations where your company can proactively intervene and help your customers be successful.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do You Measure Customer Success KPI?
&lt;/h2&gt;

&lt;p&gt;Customer success dashboards can support the journey from new customer to loyal, retained customer. To monitor that journey, you need to measure various customer success KPIs. Some of the most important include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customer churn rate&lt;/li&gt;
&lt;li&gt;Customer lifetime value&lt;/li&gt;
&lt;li&gt;Monthly recurring revenue&lt;/li&gt;
&lt;li&gt;Customer support tickets&lt;/li&gt;
&lt;li&gt;Net promoter score&lt;/li&gt;
&lt;li&gt;Customer satisfaction&lt;/li&gt;
&lt;/ul&gt;
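Several of these KPIs reduce to simple arithmetic over data you already hold. A quick sketch using made-up subscription figures (the numbers and the single flat fee are illustrative assumptions; real CLV models are usually more involved):

```python
# Hypothetical monthly figures for illustration.
customers_at_start = 200
customers_lost = 8
new_customers = 15
monthly_fee_per_customer = 99.0

# Customer churn rate: share of the starting base lost during the month.
churn_rate = customers_lost / customers_at_start

# Monthly recurring revenue at month end.
mrr = (customers_at_start - customers_lost + new_customers) * monthly_fee_per_customer

# A simple customer lifetime value approximation: average revenue / churn rate.
clv = monthly_fee_per_customer / churn_rate

print(f"churn {churn_rate:.1%}, MRR ${mrr:,.2f}, CLV ${clv:,.2f}")
```

A dashboard recomputes figures like these continuously and charts them over time, which is where the trends (rising churn, flattening MRR) become visible.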

&lt;p&gt;You can also add in any KPI that will support understanding your own customers’ success and monitoring it, such as API calls as in the customer success API dashboard example above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customer Success Overview Dashboard Template
&lt;/h2&gt;

&lt;p&gt;Your customer success overview dashboard template will be shaped around your specific product. That said, it’s possible to outline some broad examples of what such a dashboard should include.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Should Be on a Customer Success Dashboard?
&lt;/h3&gt;

&lt;p&gt;Start by understanding what success means to your customers and what your goals are. After all, customer success stories vary hugely from business to business.&lt;/p&gt;

&lt;p&gt;What you monitor is up to you. Say you have a sales rep who has implemented behavioral emails to try and accelerate API integration. You can monitor the impact on your dashboard.&lt;/p&gt;

&lt;p&gt;Your dashboard will be a mix of customer health indicators and KPIs, monitoring both positive and negative customer success metrics. Positive metric examples include number of new customers and monthly recurring revenue. Churn rate, meanwhile, is an example of a negative metric.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Do You Measure the Effectiveness of a Dashboard?
&lt;/h3&gt;

&lt;p&gt;You can measure the effectiveness of your customer success dashboard by monitoring your overall customer satisfaction and success levels. Another measure of effectiveness is the return on investment that your dashboard delivers.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are 3 Benefits of a Dashboard?
&lt;/h3&gt;

&lt;p&gt;When it comes to customer success, a dashboard delivers multiple benefits. Three of the most useful are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reducing customer churn&lt;/li&gt;
&lt;li&gt;Creating more stable revenue&lt;/li&gt;
&lt;li&gt;Increasing customer satisfaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By supporting your customers’ success, you’re supporting your own.&lt;/p&gt;

&lt;h2&gt;
  
  
  View Customer Success Metrics and Gain Insights
&lt;/h2&gt;

&lt;p&gt;We touched on customer success metrics when talking about KPIs above. Monitoring rates such as customer churn provides an overview of customer success that can drive unique insights into your customer experience and your own business.&lt;/p&gt;

&lt;p&gt;Key to identifying the customer success metrics that will provide the most useful and comprehensive oversight is understanding what your customer needs. You can then track your success in meeting those needs through relevant metrics.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Do You Track Customer Success?
&lt;/h3&gt;

&lt;p&gt;Being able to view customer success metrics and gain insights is hugely valuable. But how do you do it?&lt;/p&gt;

&lt;p&gt;To track customer success, you need to implement a range of metrics that can provide an overview of your progress. Let’s take a look at a few of these.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the Important API Customer Success Metrics?
&lt;/h3&gt;

&lt;p&gt;The most important API customer success metrics for your business will be those that dive into the detail of what you’re achieving. Examples of such metrics include:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Customer retention rate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s all very well winning new business, but if your customers don’t stick with your product, you have a problem. Tracking what proportion of your customers you retain over time is key not just to understanding your customer success performance but to supporting the long-term viability of your enterprise.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Customer health score&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can use a customer health score to determine what proportion of your customers are ‘healthy’ and what proportion are ‘at risk’. On an individual customer basis, this helps you identify which customers are likely to grow and which you are at risk of losing. At an organizational level, it can serve as an early warning system if you identify a growing proportion of customers at risk.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Net promoter score&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A long-established tool in market research, as well as in monitoring customer success and business growth, your net promoter score relates to the proportion of people who would recommend your business (or product) to a friend or colleague. Tracking your net promoter score can reveal a great deal about how positively (or otherwise!) your customers view your business.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Customer lifetime value&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Monitoring customer lifetime value tells you what a customer is worth over the entire period of their relationship with your business. Calculating your customer lifetime value and then monitoring it to ensure that it increases is an effective way to track your growing customer success.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Customer churn rate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The lower your customer churn rate, the better. After all, it costs your business more to go out and find new customers than it does to retain existing ones. As such, working to reduce your churn rate can deliver long-term rewards.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Monthly recurring revenue&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Your monthly recurring revenue is the amount of income that you get from your customers each month. Tracking it is essential to understanding your customer success performance and to monitoring the financial viability of your business.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Customer retention costs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We mentioned above that it’s cheaper to retain customers than to go out and find new ones, but retaining customers is far from cost-free. By measuring the cost of your customer success work and comparing it to your number of customers, you can monitor how much it costs to retain each customer.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Customer satisfaction score&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A customer satisfaction score measures, quite simply, how satisfied your customers are with your business. It is an excellent indicator of customer success, particularly when combined with the metrics detailed above. Together, they provide a comprehensive overview of your customer success performance.&lt;/p&gt;
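Among these metrics, the net promoter score has a well-known standard formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A short sketch with invented survey responses:

```python
def net_promoter_score(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical 0-10 survey responses.
responses = [10, 9, 9, 8, 7, 7, 6, 4, 10, 9]
print(net_promoter_score(responses))
```

Note that passives (scores 7-8) count toward the denominator but neither group, which is why adding lukewarm responses drags the score toward zero.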

&lt;h2&gt;
  
  
  Making a Success of Customer Success Dashboards for API Products
&lt;/h2&gt;

&lt;p&gt;Customer success dashboards for API products are a hugely important tool for understanding the impact that those products can deliver and the implications of that impact for the overall success of your business.&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to Get Customer and Application Context When Logging API Calls for 3scale API Gateway</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Thu, 26 Oct 2023 21:39:36 +0000</pubDate>
      <link>https://forem.com/moesif/how-to-get-customer-and-application-context-when-logging-api-calls-for-3scale-api-gateway-9kh</link>
      <guid>https://forem.com/moesif/how-to-get-customer-and-application-context-when-logging-api-calls-for-3scale-api-gateway-9kh</guid>
<description>&lt;p&gt;As APIs handle enormous amounts of data of widely varying types, the critical question for any data provider is how to secure this data. An authentication method gives developers the power to build applications for all of their needs while determining who can access the APIs, protecting sensitive data, and ensuring requests aren’t tampered with. Authentication is when an entity proves an identity. Simply put, authentication is the act of verifying that you are who you claim to be. Without authentication, there would be no easy way to associate requests with a specific user’s data, and no way to protect requests from malicious users that might delete another user’s data. Authentication shouldn’t be an afterthought but rather built into the very fabric of your API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authentication Patterns
&lt;/h3&gt;

&lt;p&gt;Depending on our API we may need to use different authentication patterns to issue credentials for access to our API. These can range from API keys to custom configurations.&lt;/p&gt;

&lt;p&gt;3Scale supports the following authentication patterns:&lt;/p&gt;

&lt;h2&gt;
  
  
  Standard API Keys
&lt;/h2&gt;

&lt;p&gt;An authentication model where a single randomized string or hash acts as both an identifier and a secret token. Each application with permissions on the API has a single unique string. By default, the name of the key parameter is user_key. We can keep this label or choose another before making the authorization calls to 3scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Identifier and Key pairs
&lt;/h2&gt;

&lt;p&gt;An authentication model where an immutable identifier, the Application ID (App_Id), and mutable secret key strings, the Application Keys (App_Keys), are separated into two tokens. The App_Id is constant and may or may not be secret. Each application may have one or more Application Keys; each key is associated directly with the App_Id and should be treated as secret.&lt;/p&gt;

&lt;p&gt;In 3Scale, each service can use a different authentication pattern, but only one pattern can be used per service. In the authentication section, we choose the required authentication mode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v1svybr1h3gps34lydt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v1svybr1h3gps34lydt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Context
&lt;/h2&gt;

&lt;p&gt;3Scale provides an admin API endpoint to fetch the application context associated with each user. The application context contains details about an individual user’s interaction with 3Scale APIs, such as first_traffic_at and first_daily_traffic_at, along with personally identifiable information such as user_id, user_account_id, service_id, and plan information. With access to these details, it is easy to associate requests with a specific user.&lt;/p&gt;

&lt;p&gt;Depending on the authentication method in use, we call the admin endpoint differently to fetch the application context. With the standard API keys authentication method, we fetch the application context by calling this endpoint:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -v -X GET "https://#{domain}/admin/api/applications.xml?access_token=#{ADMIN_ACCESS_TOKEN}&amp;amp;user_key=#{user_key}"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With the application identifier and key pairs authentication method, we fetch the application context by calling this endpoint:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -v -X GET "https://#{domain}/admin/api/applications.xml?access_token=#{ADMIN_ACCESS_TOKEN}&amp;amp;app_id=#{app_id}&amp;amp;app_key=#{app_key}"&lt;/code&gt;&lt;/p&gt;
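In a script, the same lookup amounts to building that URL and parsing the returned XML. A sketch using only Python's standard library; the domain, token values, and the sample response body below are placeholders for illustration, and the real entity returned by your 3scale instance may carry more fields:

```python
from urllib.parse import urlencode
from xml.etree import ElementTree

# Placeholder admin-portal values; substitute your own.
domain = "example-admin.3scale.net"
params = {"access_token": "ADMIN_ACCESS_TOKEN", "user_key": "USER_KEY"}
url = f"https://{domain}/admin/api/applications.xml?{urlencode(params)}"
# An HTTP GET against `url` would return the application context XML;
# here we parse a trimmed, hypothetical response instead of calling out.
sample_response = """
<application>
  <id>42</id>
  <user_account_id>7</user_account_id>
  <service_id>1234</service_id>
</application>
"""

app = ElementTree.fromstring(sample_response)
user_id = app.findtext("id")
account_id = app.findtext("user_account_id")
print(url)
print(user_id, account_id)
```

The `id` and `user_account_id` fields extracted here are exactly the defaults Moesif uses to identify the user and the company, as described in the next section.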

&lt;h3&gt;
  
  
  Setting Up Moesif API Analytics with 3Scale
&lt;/h3&gt;

&lt;p&gt;Moesif has a plugin available on LuaRocks to capture API requests and responses via 3Scale and log them to Moesif for easy inspection and real-time debugging of your API traffic. The plugin captures metrics locally and queues them, which enables it to send metrics data to the Moesif collection network out of band without impacting your app.&lt;/p&gt;

&lt;p&gt;The recommended way to install Moesif is via Luarocks:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;luarocks install --server=http://luarocks.org/manifests/moesif lua-resty-moesif&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Authentication Credentials Location
&lt;/h3&gt;

&lt;p&gt;3Scale gives end users the flexibility to pass authentication credentials either via HTTP headers or as query parameters when calling an API. Moesif looks for the credentials in both headers and query parameters and fetches the application context for that particular user. Moesif provides a configuration option to set the field name, which should be the same name used when configuring the API authentication settings. By default, 3Scale uses user_key for the standard API key method, and app_id and app_key for the app-id-and-key-pair authentication method.&lt;/p&gt;
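
&lt;p&gt;The lookup amounts to checking both locations for the configured field name. A minimal Python sketch of that resolution order (the header and URL values here are made up for illustration):&lt;/p&gt;

```python
from urllib.parse import parse_qs, urlsplit

def find_credential(field_name, headers, url):
    """Resolve an auth credential, checking HTTP headers first and
    falling back to the query string, as the plugin does."""
    # HTTP header names are case-insensitive.
    for name, value in headers.items():
        if name.lower() == field_name.lower():
            return value
    # Fall back to the query string.
    query = parse_qs(urlsplit(url).query)
    values = query.get(field_name)
    return values[0] if values else None
```

For example, `find_credential("user_key", {}, "https://api.example.test/foo?user_key=k1")` resolves the key from the query string when no matching header is present.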

&lt;h2&gt;
  
  
  Identify User and Company (Account)
&lt;/h2&gt;

&lt;p&gt;The admin endpoint returns the application context as an XML entity. Moesif provides configuration options to set the field names from 3Scale’s application XML entity that will be used to identify the user and the company (account). The field names default to id and user_account_id for the user and company respectively, but other valid examples include user_key and service_id.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;-- Function to parse 3Scale XML entity
-- @param user_id_name    The 3scale field name from 3scale's application XML entity used to identify the user. Default: id.
-- @param company_id_name The 3scale field name from 3scale's application XML entity used to identify the company (account). Default: user_account_id.
-- @param debug           A flag to print logs
function parseXML(user_id_name, company_id_name, debug)
    -- config_response is the response from the API call that fetches the
    -- application context, which is an XML entity
    local response_body = config_response:match("(%&amp;lt;.*&amp;gt;)")
    if response_body ~= nil then
        local xobject = xml.eval(response_body)
        local xapplication = xobject:find("application")
        if xapplication ~= nil then
            -- Map each child element's tag name to its index
            local xtable = {}
            for k, v in pairs(xapplication) do
                if v ~= nil and type(v) == "table" then
                    xtable[v:tag()] = k
                end
            end
            local key = xapplication[xtable[user_id_name]]
            if key ~= nil then
                if debug then
                    ngx.log(ngx.DEBUG, "Successfully fetched the userId ")
                end
                -- Set the UserId
                local user_id = key[1]
            else
                if debug then
                    ngx.log(ngx.DEBUG, "The user_id_name provided by the user does not exist ")
                end
            end
            local companyKey = xapplication[xtable[company_id_name]]
            if companyKey ~= nil then
                if debug then
                    ngx.log(ngx.DEBUG, "[moesif] Successfully fetched the companyId (accountId) ")
                end
                -- Set the CompanyId (AccountId)
                local company_id = companyKey[1]
            else
                if debug then
                    ngx.log(ngx.DEBUG, "[moesif] The company_id_name provided by the user does not exist ")
                end
            end
        else
            if debug then
                ngx.log(ngx.DEBUG, "Application tag does not exist ")
            end
        end
    else
        if debug then
            ngx.log(ngx.DEBUG, "Xml response body does not exist ")
        end
    end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;With this in place, the plugin links every event or action to an individual customer, and behavioral trends can be discovered by looking at multiple events together to identify product issues, such as why users stop using your API or which features or endpoints they engage with the most.&lt;/p&gt;

</description>
      <category>api</category>
      <category>apigateway</category>
      <category>howto</category>
    </item>
    <item>
      <title>What every web developer should know about CPU Arch: Threads vs Processes</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Tue, 17 Oct 2023 00:27:26 +0000</pubDate>
      <link>https://forem.com/moesif/what-every-web-developer-should-know-about-cpu-arch-threads-vs-processes-5d57</link>
      <guid>https://forem.com/moesif/what-every-web-developer-should-know-about-cpu-arch-threads-vs-processes-5d57</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;There are a ton of blog posts discussing software development paradigms and best practices, and a few discussing hardware. At the same time, I can probably count on one hand the number of articles discussing what every developer should know from a computer architecture perspective. The goal of this article is not to deep dive into CPU architecture, operating systems, etc. I’ll give an overview of some principles that we can lose touch with during our busy days as developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Threads vs Processes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Security and reliability
&lt;/h3&gt;

&lt;p&gt;Before Docker containers and before VMs, CPU hardware already had one of the oldest tricks in sandboxing: processes and virtual memory. As a developer, you should understand that a process runs within its own isolated memory space and can only talk to other processes through specific mechanisms such as Inter-Process Communication (IPC). Thus, if process A and process B both have a logical address 0x0, those addresses actually point to two separate regions in physical memory. Modern CPU hardware has safeguards through paging that ensure a process can only read and write from its own memory space. The all-too-common Segmentation Fault is originally triggered by CPU hardware via a #PF (Page Fault) due to an illegal access. (Note: I’ll leave out x86 segmentation, as segmentation registers are not really “used” anymore in 64-bit mode.)&lt;/p&gt;

&lt;p&gt;On the other hand, threads within the same process share the same address space. Access to the same address space can be a blessing and a curse. If a thread crashes, the state of the application may be unknown. There are no security protections in hardware preventing rogue threads from accessing other threads’ data in unintended ways. In fact, Chrome moved to running every browser tab in its own process instead of just using multiple threads. If you ever see the “Aw, Snap!” message in a Chrome tab, that is a process that executed something unintentional. The failed tab crashed and is no longer running, but the remaining tabs can continue with business as usual.&lt;/p&gt;
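
&lt;p&gt;The shared address space is easy to observe in any threaded language. In this Python sketch, a write made by a worker thread is immediately visible to the main thread with no IPC involved, because both threads reference the same memory:&lt;/p&gt;

```python
import threading

# All threads in a process share one address space, so they all see
# this same dictionary object.
shared = {"status": "initial"}

def worker():
    # This write lands in memory the main thread also reads.
    shared["status"] = "updated by worker"

t = threading.Thread(target=worker)
t.start()
t.join()
print(shared["status"])  # updated by worker
```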

&lt;h3&gt;
  
  
  Doesn’t this mean processes have to replicate code and data in memory?
&lt;/h3&gt;

&lt;p&gt;Yes and no. Modern operating systems and hardware can do some clever things with a technique called copy-on-write (COW). When you call fork() in Linux, not everything is copied to a new section of memory initially. The same paging hardware that ensures processes don’t write to out-of-process memory can also throw a #PF if a write happens to a page that is marked read-only. This allows the operating system to handle the fault, which means copying the old page over to a new page in memory, if needed.&lt;/p&gt;
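
&lt;p&gt;The observable half of this contract is that a child’s writes never leak back into the parent, even though no upfront copy was made. A small sketch assuming POSIX fork() semantics (Linux or macOS):&lt;/p&gt;

```python
import os

# fork() gives the child a logically separate copy of the parent's
# memory, but the kernel defers the physical copy (copy-on-write):
# pages are shared until one side writes to them.
data = ["parent-value"]

pid = os.fork()
if pid == 0:
    # Child process: this write faults on the shared page, and the
    # kernel copies it privately before the write lands.
    data[0] = "child-value"
    os._exit(0)  # exit the child without running parent-side code

os.waitpid(pid, 0)
# The parent's page was never touched by the child's write.
print(data[0])  # parent-value
```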

&lt;h3&gt;
  
  
  Doesn’t using processes require slow context switches?
&lt;/h3&gt;

&lt;p&gt;Yes, processes require context switches, but so do threads. A thread also has execution context attached to it, which consists of its various CPU register values (such as EAX, EBX… in x86), among other things, which need to be stored before the next thread can start executing. In fact, modern SIMD code, such as many of the video encoding and compression algorithms that let you watch your favorite Netflix shows in HD, uses some pretty large registers which would need to be written to memory; the latest incarnation of AVX consists of 32 registers, each 64 bytes wide! When people say processes are slower than threads, the reference is usually not just the context switch itself but flushing entries in the Translation Lookaside Buffers (TLBs). TLBs hold cached translations of the paging mechanism we were referring to earlier. Because a new process will execute in its own memory space, it cannot use the old process’s translations and must start fresh. This means the TLB will be cold for the new process. If a translation is not in the TLB, before the memory access can complete, a Page Miss Handler (PMH) needs to walk the page tables level by level. Page table walks are heavy pointer-chasing algorithms and can increase load latency. There are shortcuts to minimize the number of levels required to walk, but the end result is that a cold TLB can produce load latencies far greater than a warm TLB. This occurs even if the accessed variable is already in a CPU cache somewhere (which we talk about later).&lt;/p&gt;

&lt;p&gt;So the conclusion is that the context switch for a process may not take a whole lot longer, but there can be lingering effects that slow execution down even after the switch completes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Too many threads
&lt;/h3&gt;

&lt;p&gt;Processes are not the only things that can undergo this cold-TLB effect. Threads, if not scheduled on the same logical CPU (CPU affinity), can also experience it. Which brings up the next point: there are only a fixed number of logical CPUs that your application can run on. While a modern operating system has many processes and threads that can appear to run simultaneously in various blocked and wait states, in reality a CPU can only run a fixed number of threads at a time. This is true regardless of whether the threads are in the same process or not. As you launch more threads than you can actively run, the operating system has to preemptively context switch. If you just spawn thread after thread for each task thinking it will all run in parallel, you may be surprised to find you are hurting performance more than running a small number of threads in a thread pool.&lt;/p&gt;

&lt;p&gt;In general, a large number of threads can cause something called thrashing. Thrashing is a generic term for when the CPU starts swapping or moving resources around more than performing actual execution. A CPU has limited resources: the TLBs, the caches, even the page tables allocated in memory are all limited. As you start switching between more threads, you put pressure on these subsystems, causing evictions of still-hot data, which can be detrimental to performance. Many asynchronous web frameworks are by default configured to match their worker thread pool to the real number of logical cores, or they may add a few extra threads for blocking work, but still within an order of magnitude. Think of Uber drivers. Sometimes it’s easier and quicker to pick up passengers for longer trips and just drive for a while than to be constantly picking up new passengers and dropping old ones off. The number of passengers is your fixed set of resources, such as logical processors, cache size, and TLBs. If you pick up new passengers, you have to drop off the old ones first to make room. This in-and-out behavior is thrashing if you find yourself waiting for unloading and loading rather than just driving (doing work, or execution).&lt;/p&gt;
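
&lt;p&gt;This is why the standard advice is to size a worker pool to the hardware rather than spawning a thread per task. A minimal Python sketch of that pattern:&lt;/p&gt;

```python
import os
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Stand-in for a real task.
    return n * n

# Bound the pool to the number of logical CPUs instead of creating one
# thread per task; the 100 tasks below are multiplexed onto the pool.
max_workers = os.cpu_count() or 4
with ThreadPoolExecutor(max_workers=max_workers) as pool:
    results = list(pool.map(work, range(100)))

print(results[:3])  # [0, 1, 4]
```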

</description>
      <category>cpu</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Building a RESTful Minimal API with .NET Core 7</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Mon, 09 Oct 2023 21:27:19 +0000</pubDate>
      <link>https://forem.com/moesif/building-a-restful-minimal-api-with-net-core-7-g6j</link>
      <guid>https://forem.com/moesif/building-a-restful-minimal-api-with-net-core-7-g6j</guid>
      <description>&lt;p&gt;.NET Core and ASP.NET Core are popular frameworks for creating powerful RESTful APIs. In this tutorial, we will use it to develop a simple Minimal API that simulates a credit score rating. Minimal APIs provide a streamlined approach to creating high-performing HTTP APIs using ASP.NET Core. They allow you to construct complete REST endpoints with minimal setup and code easily. Instead of relying on conventional scaffolding and controllers, you can fluently define API routes and actions to simplify the development process.&lt;/p&gt;

&lt;p&gt;We will create an endpoint allowing a user to retrieve a credit score rating by sending a request to the API. We can also save and retrieve credit scores using POST and GET methods. However, it is essential to note that we will not be linking up to any existing backend systems to pull a credit score; instead, we will use a random number generator to generate the score and return it to the user. Although this API is relatively simple, it will demonstrate the basics of REST API development using .NET Core and ASP.NET. This tutorial will provide a hands-on introduction to building RESTful APIs with .NET Core 7 and the Minimal API approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we start, we must ensure that we have completed several prerequisites. To follow along and run this tutorial, you will need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A working .NET Core installation&lt;/li&gt;
&lt;li&gt;An IDE or text editor of your choice&lt;/li&gt;
&lt;li&gt;Postman to test our endpoint&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating the Initial Project
&lt;/h2&gt;

&lt;p&gt;We'll be using the &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/tools/" rel="noopener noreferrer"&gt;.NET cli&lt;/a&gt; tool to create our initial project. The .NET command line interface provides the ability to develop, build, run, and publish .NET applications.&lt;/p&gt;

&lt;p&gt;The .NET cli &lt;code&gt;new&lt;/code&gt; command provides many templates to create your project. You can also add the &lt;code&gt;search&lt;/code&gt; command to find community-developed templates from &lt;a href="https://nuget.org" rel="noopener noreferrer"&gt;NuGet&lt;/a&gt; or use &lt;code&gt;dotnet new list&lt;/code&gt; to see available templates provided by Microsoft.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsisws8reog7t4nrhi079.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsisws8reog7t4nrhi079.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll be creating a Minimal API and starting from as clean a slate as possible. We'll be using the empty ASP.NET Core template. In the directory of your choosing; enter the following in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet new web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll notice that the directory structure will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrlvb43hwyuti058wh8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrlvb43hwyuti058wh8d.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll be doing all of our work in the &lt;code&gt;Program.cs&lt;/code&gt; file. Its starting code should look similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;WebApplication&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateBuilder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MapGet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s"&gt;"Hello World!"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see how concise and readable our starter code is. Let's break down the code provided by the template line by line:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;WebApplication.CreateBuilder(args)&lt;/code&gt; method creates a new instance of the &lt;code&gt;WebApplicationBuilder&lt;/code&gt; class, which is used to configure and build the &lt;code&gt;WebApplication&lt;/code&gt; instance. The args parameter is an optional array of command-line arguments that can be passed to the application at runtime.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;builder.Build()&lt;/code&gt; method is called to create a new instance of the &lt;code&gt;WebApplication&lt;/code&gt; class, which represents the running application. This instance configures the application, defines routes, and handles requests.&lt;/li&gt;
&lt;li&gt;The third line defines a route for the root path ("/") of the application using the &lt;code&gt;app.MapGet()&lt;/code&gt; method. This means that when the root path is requested, the application will respond with the string "Hello World!".&lt;/li&gt;
&lt;li&gt;We start the application by calling the &lt;code&gt;app.Run()&lt;/code&gt; method.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using the builder pattern, we can configure and customize the &lt;code&gt;WebApplication&lt;/code&gt; instance. This allows us to define the application's behavior, including middleware, routes, and other settings, in a structured and extensible way. For example, the &lt;code&gt;WebApplication&lt;/code&gt; instance created by the builder can be thought of as the "entry point" of the application, which handles requests and generates responses.&lt;/p&gt;

&lt;p&gt;Overall, this code block creates a simple Minimal API in .NET 7 that responds with a "Hello World!" message when the application's root path is requested.&lt;/p&gt;

&lt;p&gt;Next, we'll customize our API to mimic retrieving a credit score rating.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding in the Code
&lt;/h2&gt;

&lt;p&gt;In &lt;code&gt;Program.cs&lt;/code&gt;, we will house our endpoints and business logic. We'll define our &lt;code&gt;creditscore&lt;/code&gt; endpoint to provide GET and POST operations. We'll implement a list to store any credit score we would like. We'll also define an endpoint to retrieve the list of saved credit scores. We'll be utilizing a CreditScore &lt;code&gt;record&lt;/code&gt;, a reference type introduced in C# 9 that is geared toward modeling data. A &lt;code&gt;record&lt;/code&gt; is a lightweight data object with built-in value-based equality and comparison, typically used immutably.&lt;/p&gt;

&lt;p&gt;Populate &lt;code&gt;Program.cs&lt;/code&gt; with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;WebApplication&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateBuilder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;userAddedCreditScores&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;CreditScore&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MapGet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/creditscore"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;CreditScore&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;Random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Shared&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;850&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MapPost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/creditscore"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CreditScore&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;userAddedCreditScores&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MapGet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/userAddedCreditScores"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;userAddedCreditScores&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;record&lt;/span&gt; &lt;span class="nc"&gt;CreditScore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;Score&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="n"&gt;CreditRating&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;get&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Score&lt;/span&gt; &lt;span class="k"&gt;switch&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="m"&gt;800&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s"&gt;"Excellent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="m"&gt;700&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s"&gt;"Good"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s"&gt;"Fair"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s"&gt;"Poor"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s"&gt;"Bad"&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As mentioned, our code first creates a builder object for the web application and then uses it to build an application instance. It also defines a &lt;code&gt;record&lt;/code&gt; type called &lt;code&gt;CreditScore&lt;/code&gt; with a single property called &lt;code&gt;Score&lt;/code&gt; and a read-only property called &lt;code&gt;CreditRating&lt;/code&gt;. This may look a little strange, as we define our record after using it. However, this is a requirement of C# top-level statements: in a file that uses them, type declarations must come after all of the top-level statements.&lt;/p&gt;

&lt;p&gt;The application exposes multiple endpoints using the &lt;code&gt;app.MapGet()&lt;/code&gt; and &lt;code&gt;app.MapPost()&lt;/code&gt; methods. The first endpoint, &lt;code&gt;/creditscore&lt;/code&gt;, is a &lt;code&gt;GET&lt;/code&gt; method that generates a new random &lt;code&gt;CreditScore&lt;/code&gt; object with a score between 300 and 849 (the upper bound of &lt;code&gt;Random.Next&lt;/code&gt; is exclusive). We also define a &lt;code&gt;POST&lt;/code&gt; method for the same endpoint that accepts a &lt;code&gt;CreditScore&lt;/code&gt; object in the request body, adds it to a list called &lt;code&gt;userAddedCreditScores&lt;/code&gt;, and returns the same &lt;code&gt;CreditScore&lt;/code&gt; object to the caller. The other endpoint, &lt;code&gt;/userAddedCreditScores&lt;/code&gt;, is a GET method that returns a list of all the &lt;code&gt;CreditScore&lt;/code&gt; objects that have been added to &lt;code&gt;userAddedCreditScores&lt;/code&gt;.&lt;/p&gt;
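
&lt;p&gt;For reference, the rating thresholds in the &lt;code&gt;CreditRating&lt;/code&gt; switch expression can be mirrored in a few lines. This is purely illustrative (Python here, while the API itself stays in C#):&lt;/p&gt;

```python
def credit_rating(score):
    # Same cutoffs as the C# switch expression on CreditScore.
    if score >= 800:
        return "Excellent"
    if score >= 700:
        return "Good"
    if score >= 600:
        return "Fair"
    if score >= 500:
        return "Poor"
    return "Bad"

print(credit_rating(805))  # Excellent
```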

&lt;p&gt;Finally, the application starts running using &lt;code&gt;app.Run()&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running and Testing the API
&lt;/h2&gt;

&lt;p&gt;With our code written, run the following command to compile and run our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API is now operational and ready for testing. After running the previous command, you will see in the console which port has been used to host your API. You can define which port you would like to use by editing the &lt;code&gt;Properties &amp;gt; launchSettings.json&lt;/code&gt; file or by editing the &lt;code&gt;app.Run()&lt;/code&gt; command in &lt;code&gt;Program.cs&lt;/code&gt; like so, replacing 3000 with your desired port number:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"http://localhost:3000"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use a tool like Postman to send an HTTP request to the API. For me, the endpoint to get a credit score is &lt;code&gt;localhost:5242/creditscore&lt;/code&gt;. When you send a request to this endpoint, you should receive a &lt;code&gt;200 OK&lt;/code&gt; status code, a credit score generated by the random number generator, and a credit rating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxss1jb5oac67axl20ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxss1jb5oac67axl20ti.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can save a credit score by sending a POST request to the &lt;code&gt;creditscore&lt;/code&gt; endpoint. We form the request's body with a &lt;code&gt;CreditScore&lt;/code&gt; object.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkpifkaerpika2l66jh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkpifkaerpika2l66jh6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we can retrieve all added scores by sending a GET request to the &lt;code&gt;/userAddedCreditScores&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpic7rw6mw14afz8bj92w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpic7rw6mw14afz8bj92w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;In summary, we have developed a basic RESTful Minimal API using .NET Core 7 and ASP.NET. This code can be a foundation for creating more complex APIs for your application. As you continue to develop the API, you may want to consider implementing security measures such as an API key, integrating with an API gateway, &lt;a href="https://www.moesif.com/features/api-analytics?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=netcore7" rel="noopener noreferrer"&gt;monitoring the usage of the API&lt;/a&gt;, or generating revenue through &lt;a href="https://www.moesif.com/solutions/metered-api-billing?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=netcore7" rel="noopener noreferrer"&gt;API monetization&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>restapi</category>
      <category>tutorial</category>
      <category>coding</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How to Debug an Unresponsive Elasticsearch Cluster</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Fri, 29 Sep 2023 22:28:21 +0000</pubDate>
      <link>https://forem.com/moesif/how-to-debug-an-unresponsive-elasticsearch-cluster-2ge</link>
      <guid>https://forem.com/moesif/how-to-debug-an-unresponsive-elasticsearch-cluster-2ge</guid>
      <description>&lt;p&gt;Elasticsearch is an open-source search engine and analytics store used by a variety of applications from search in e-commerce stores, to internal log management tools using the ELK stack (short for "Elasticsearch, Logstash, Kibana"). As a distributed database, your data is partitioned into "shards" which are then allocated to one or more servers.&lt;/p&gt;

&lt;p&gt;Because of this sharding, a read or write request to an Elasticsearch cluster requires coordinating between multiple nodes as there is no "global view" of your data on a single server. While this makes Elasticsearch highly scalable, it also makes it much more complex to setup and tune than other popular databases like MongoDB or PostgresSQL, which &lt;em&gt;can&lt;/em&gt; run on a single server.&lt;/p&gt;

&lt;p&gt;When reliability issues come up, firefighting can be stressful if your Elasticsearch setup is buggy or unstable. An incident could be impacting customers, which in turn hurts revenue and your business's reputation. Fast remediation is important, yet spending a large amount of time researching solutions online during an incident or outage is not a luxury most engineers have. This guide is intended as a cheat sheet of common issues engineers run into with Elasticsearch and what to look for in each case.&lt;/p&gt;

&lt;p&gt;As a general-purpose tool, Elasticsearch has thousands of different configurations, which enables it to fit a variety of workloads. Even if published online, a data model or configuration that worked for one company may not be appropriate for yours. There is no magic bullet for getting Elasticsearch to scale; it requires diligent performance testing and trial and error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unresponsive Elasticsearch cluster issues
&lt;/h2&gt;

&lt;p&gt;Cluster stability issues are some of the hardest to debug, especially if nothing changes with your data volume or code base. &lt;/p&gt;

&lt;h3&gt;
  
  
  Check size of cluster state
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-state.html"&gt;Elasticsearch cluster state&lt;/a&gt; tracks the global state of our cluster, and is the heart of controlling traffic and the cluster. Cluster state includes metadata on nodes in your cluster, status of shards and how they are mapped to nodes, index mappings (i.e. the schema), and more. &lt;/li&gt;
&lt;li&gt;Cluster state usually doesn't change often. However, certain operations such as adding a new field to an index mapping can trigger an update.&lt;/li&gt;
&lt;li&gt;Because cluster state updates are broadcast to all nodes in the cluster, it should be kept small (&amp;lt;100MB).&lt;/li&gt;
&lt;li&gt;A large cluster state can quickly make the cluster unstable. A common way this happens is through a &lt;a href="https://www.elastic.co/blog/found-crash-elasticsearch#mapping-explosion"&gt;mapping explosion&lt;/a&gt; (too many keys in an index) or too many indices.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Download the cluster state using the below command and look at the size of the JSON returned.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_cluster/state'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In particular, look at which indices have the most fields in the cluster state; an index with a huge mapping is often the offender causing stability issues when the cluster state is large and growing. You can also look at an individual index or match against an index pattern like so:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_cluster/state/_all/my_index-*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;You can also see the offending index's mapping using the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/my_index/_mapping'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Look at how data is being indexed. A common way a mapping explosion occurs is when high-cardinality identifiers are used as JSON keys. Each time a new key such as "4" or "5" is seen, the cluster state is updated. For example, the JSON below will quickly cause stability issues with Elasticsearch, as each key is added to the global state.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ACTIVE"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ACTIVE"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"3"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DISABLED"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To fix, flatten your data into something that is Elasticsearch friendly:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ACTIVE"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ACTIVE"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DISABLED"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Check Elasticsearch Tasks Queue
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;When a request is made against Elasticsearch (an index operation, a query operation, etc.), it’s first inserted into the task queue until a worker thread can pick it up.&lt;/li&gt;
&lt;li&gt;Once a worker pool has a free thread, it picks up a task from the task queue and processes it.&lt;/li&gt;
&lt;li&gt;These operations are usually made by you via HTTP requests on the &lt;code&gt;:9200&lt;/code&gt; and &lt;code&gt;:9300&lt;/code&gt; ports, but they can also be internal, such as maintenance tasks on an index.&lt;/li&gt;
&lt;li&gt;At any given time there may be hundreds or thousands of in-flight operations, but each should complete very quickly (on the order of microseconds or milliseconds).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Run the below command and look for &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html"&gt;tasks&lt;/a&gt; that are stuck running for a long time like minutes or hours.&lt;/li&gt;
&lt;li&gt;This means something is starving the cluster and preventing it from making forward progress.&lt;/li&gt;
&lt;li&gt;It's ok for certain long-running tasks, like moving an index, to take a long time. However, normal query and index operations should be quick.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_cat/tasks?detailed'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;With the &lt;code&gt;?detailed&lt;/code&gt; param, you can get more info on the target index and query. &lt;/li&gt;
&lt;li&gt;Look for patterns in which tasks are consistently at the top of the list. Is it the same index? Is it the same node? &lt;/li&gt;
&lt;li&gt;If so, maybe something is wrong with that index's data or the node is overloaded.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If the volume of requests is higher than normal, look for ways to optimize the requests (such as using bulk APIs or more efficient queries/writes).&lt;/li&gt;
&lt;li&gt;If the volume hasn't changed and the slowdown looks random, something else is slowing down the cluster; the backlog of tasks is just a symptom of a larger issue.&lt;/li&gt;
&lt;li&gt;If you don't know where the requests come from, add the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html#_identifying_running_tasks"&gt;&lt;code&gt;X-Opaque-Id&lt;/code&gt;&lt;/a&gt; header to your Elasticsearch clients to identify which clients are triggering the queries. &lt;/li&gt;
&lt;/ul&gt;
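
&lt;p&gt;For example, you can tag requests with the header in curl (the header value is an arbitrary label of your choosing, and &lt;code&gt;my_index&lt;/code&gt; is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl -XGET -H "X-Opaque-Id: billing-service" 'http://localhost:9200/my_index/_search'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;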

&lt;h3&gt;
  
  
  Check Elasticsearch Pending Tasks
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-pending.html"&gt;Pending tasks&lt;/a&gt; are pending updates to the cluster state such as creating a new index or updating its mapping.&lt;/li&gt;
&lt;li&gt;Unlike the previous tasks queue, pending updates require a multi step handshake to broadcast the update to all nodes in the cluster, which can take some time.&lt;/li&gt;
&lt;li&gt;There should be almost zero in-flight tasks at any given time. Keep in mind that expensive operations, like a snapshot restore, can cause this to spike temporarily.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Run the command below and ensure there are no, or only a few, tasks in flight.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl curl curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_cat/pending_tasks'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If it looks to be a constant stream of cluster updates that finish quickly, look at what might be triggering them. Is it a mapping explosion or creating too many indices? &lt;/li&gt;
&lt;li&gt;If it's just a few, but they seem stuck, look at the logs and metrics of the master node to see if there are any issues. For example, is the master node running into memory or network issues such that it can't process cluster updates?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Hot Threads
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The hot threads API is a valuable built-in profiler that tells you where Elasticsearch is spending the most time.&lt;/li&gt;
&lt;li&gt;This can provide insights such as whether Elasticsearch is spending too much time on index refresh or performing expensive queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Make a call to the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html"&gt;hot threads API&lt;/a&gt;. To improve accuracy, it's recommended to capture multiple snapshots using the &lt;code&gt;?snapshots&lt;/code&gt; param:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_nodes/hot_threads?snapshots=1000'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This returns the stack traces observed when each snapshot was taken.&lt;/li&gt;
&lt;li&gt;Look for the same stack appearing across many snapshots. For example, you might see the text &lt;code&gt;5/10 snapshots sharing following 20 elements&lt;/code&gt;. This means a thread was spending time in that area of the code during 5 of the 10 snapshots.&lt;/li&gt;
&lt;li&gt;You should also look at the CPU %. If an area of code has both high snapshot sharing and a high CPU %, it is a hot code path.&lt;/li&gt;
&lt;li&gt;By looking at the code module, you can work out what Elasticsearch is doing.&lt;/li&gt;
&lt;li&gt;If you see threads in a wait or park state, this is usually ok.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If a large amount of CPU time is spent on index refresh, try increasing the refresh interval beyond the default of 1 second.&lt;/li&gt;
&lt;li&gt;If a large amount of time is spent in caching, your default cache settings may be suboptimal and causing a high miss rate.&lt;/li&gt;
&lt;/ul&gt;
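
&lt;p&gt;As a sketch, the refresh interval can be raised with the index settings API (&lt;code&gt;my_index&lt;/code&gt; and the &lt;code&gt;30s&lt;/code&gt; value are illustrative; tune for your workload):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl -XPUT -H "Content-Type: application/json" localhost:9200/my_index/_settings -d '
  {
    "index.refresh_interval": "30s"
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;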

&lt;h2&gt;
  
  
  Memory Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Check Elasticsearch Heap / Garbage Collection
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;As a JVM process, the heap is the area of memory where a lot of Elasticsearch data structures are stored and requires garbage collection cycles to prune old objects.&lt;/li&gt;
&lt;li&gt;For typical production setups, Elasticsearch locks all memory using &lt;code&gt;mlockall&lt;/code&gt; on boot and disables swapping. If you're not doing this, do it now. &lt;/li&gt;
&lt;li&gt;If heap usage is consistently above 85% or 90% for a node, it is coming dangerously close to running out of memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Search for &lt;code&gt;collecting in the last&lt;/code&gt; in Elasticsearch logs. If these are present, this means Elasticsearch is spending higher overhead on garbage collection (which takes time away from other productive tasks).&lt;/li&gt;
&lt;li&gt;A few of these every now and then is ok, as long as Elasticsearch is not spending the majority of its CPU cycles on garbage collection (calculate the percentage of time spent collecting relative to the overall time reported).&lt;/li&gt;
&lt;li&gt;A node that is spending 100% time on garbage collection is stalled and cannot make forward progress.&lt;/li&gt;
&lt;li&gt;Nodes that appear to have network issues like timeouts may actually be due to memory issues. This is because a node can't respond to incoming requests during a garbage collection cycle. &lt;/li&gt;
&lt;/ul&gt;
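
&lt;p&gt;A quick way to scan for these messages (assuming the default log location; adjust the path for your installation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  grep "collecting in the last" /var/log/elasticsearch/*.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;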

&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The easiest fix is to add more nodes to increase the heap available to the cluster. However, it takes time for Elasticsearch to rebalance shards onto the empty nodes.&lt;/li&gt;
&lt;li&gt;If only a small set of nodes have high heap usage, you may need to better balance your cluster. For example, if your shards vary drastically in size or have different query/index bandwidths, you may have allocated too many hot shards to the same set of nodes. To move a shard, use the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html"&gt;reroute API&lt;/a&gt;. Just adjust the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#shard-allocation-awareness"&gt;shard allocation awareness&lt;/a&gt; settings to ensure it doesn't get moved back.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XPOST&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; localhost:9200/_cluster/reroute &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'
  {
  "commands": [
      {
        "move": {
          "index": "test", "shard": 0,
          "from_node": "node1", "to_node": "node2"
        }
      }
    ]
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If you are sending large bulk requests to Elasticsearch, try reducing the batch size so that each batch is under 100MB. While larger batches help reduce network overhead, they require allocating more memory to buffer the request, which cannot be freed until both the request is complete and the next GC cycle has run.&lt;/li&gt;
&lt;/ul&gt;
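
&lt;p&gt;If you build bulk requests by hand, a smaller batch might look like the following (the index name and documents are illustrative; the trailing newline is required by the bulk API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl -XPOST -H "Content-Type: application/x-ndjson" localhost:9200/_bulk -d '
{"index": {"_index": "my_index"}}
{"id": "1", "status": "ACTIVE"}
{"index": {"_index": "my_index"}}
{"id": "2", "status": "ACTIVE"}
'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;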

&lt;h3&gt;
  
  
  Check Elasticsearch Old Memory Pressure
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The old memory pool contains objects that have survived multiple garbage collection cycles and are long living objects.&lt;/li&gt;
&lt;li&gt;If &lt;a href="https://www.elastic.co/blog/found-understanding-memory-pressure-indicator"&gt;old memory&lt;/a&gt; usage is over 75%, you might want to pay attention to it. As it fills beyond 85%, more GC cycles will run but the objects can't be cleaned up.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Look at the ratio of old pool used to old pool max. If it is over 85%, that is concerning.&lt;/li&gt;
&lt;/ul&gt;
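
&lt;p&gt;You can read these numbers from the nodes stats API; compare &lt;code&gt;jvm.mem.pools.old.used_in_bytes&lt;/code&gt; against &lt;code&gt;jvm.mem.pools.old.max_in_bytes&lt;/code&gt; for each node:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;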

&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Are you eagerly loading a lot of fielddata? Fielddata resides in memory for a long time.&lt;/li&gt;
&lt;li&gt;Are you performing many long running analytics tasks? Certain tasks should be offloaded to a distributed computing framework designed for map/reduce operations like Apache Spark.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Check Elasticsearch FieldData Size
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;FieldData is used for computing aggregations on a field, such as a &lt;code&gt;terms&lt;/code&gt; aggregation.&lt;/li&gt;
&lt;li&gt;Usually fielddata for a field is not loaded in memory until the first time an aggregation is performed on it. &lt;/li&gt;
&lt;li&gt;However, this can also be precomputed on index refresh if &lt;code&gt;eager_load_ordinals&lt;/code&gt; is set.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Look at the fielddata size for one index or all indices like so:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/index_1/_stats/fielddata'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;An index can have very large fielddata structures if fielddata is used on the wrong type of data. Are you performing aggregations on very high-cardinality fields like a UUID or trace id? Fielddata is not suited to very high-cardinality fields, as they create massive fielddata structures.&lt;/li&gt;
&lt;li&gt;Do you have a lot of fields with &lt;code&gt;eager_load_ordinals&lt;/code&gt; set, or do you allocate a large amount of memory to the fielddata cache? This causes the fielddata to be generated at refresh time instead of query time. While it can speed up aggregations, it's not optimal if you compute fielddata for many fields at index refresh and never consume it in your queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Make adjustments to your queries or mapping to not aggregate on very-high cardinality keys.&lt;/li&gt;
&lt;li&gt;Audit your mapping to reduce the number that have &lt;code&gt;eager_load_ordinals&lt;/code&gt; set to true.&lt;/li&gt;
&lt;/ul&gt;
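
&lt;p&gt;One way to audit is to pull all mappings and search for eager ordinal settings. (Note that in recent Elasticsearch versions the mapping parameter is spelled &lt;code&gt;eager_global_ordinals&lt;/code&gt;; check which form your version uses.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl -XGET 'http://localhost:9200/_all/_mapping?pretty' | grep -B 5 eager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;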

&lt;h2&gt;
  
  
  Elasticsearch Networking issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Node left or node disconnected
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A node will eventually be removed from the cluster if it does not respond to requests.&lt;/li&gt;
&lt;li&gt;This allows shards to be replicated to other nodes to meet the replication factor and ensure high availability even if a node was removed. &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Look at the logs of the master node. Even though there are multiple master-eligible nodes, you should look at the one that is currently elected. You can use the nodes API or a tool like Cerebro to find it.&lt;/li&gt;
&lt;li&gt;Look if there is a consistent node that times out or has issues. For example, you can see which nodes are still pending for a cluster update by looking for the phrase &lt;code&gt;pending nodes&lt;/code&gt; in the master node’s logs.&lt;/li&gt;
&lt;li&gt;If you see the same node keep getting added but then removed, it may imply the node is overloaded or unresponsive. &lt;/li&gt;
&lt;li&gt;If you can't reach the node from your master node, it could imply a networking issue. You could also be running into NIC or CPU bandwidth limitations.&lt;/li&gt;
&lt;/ul&gt;
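
&lt;p&gt;To find the currently elected master, the cat API is a quick option:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl -XGET 'http://localhost:9200/_cat/master?v'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;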

&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Test with the setting &lt;code&gt;transport.compress&lt;/code&gt; set to true. This compresses traffic between nodes (such as from ingest nodes to data nodes), reducing network bandwidth at the expense of CPU.&lt;/li&gt;
&lt;li&gt;Note: earlier versions called this setting &lt;code&gt;transport.tcp.compress&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;If you also have memory issues, try increasing memory. A node may become unresponsive due to large time spent on garbage collection. &lt;/li&gt;
&lt;/ul&gt;
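
&lt;p&gt;Transport compression is a node-level setting configured in &lt;code&gt;elasticsearch.yml&lt;/code&gt; (shown here with the modern setting name; older versions use &lt;code&gt;transport.tcp.compress&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # elasticsearch.yml
  transport.compress: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;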

&lt;h2&gt;
  
  
  Not enough master nodes
&lt;/h2&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The master and other nodes need to discover each other to formulate a cluster. &lt;/li&gt;
&lt;li&gt;On first boot, you must provide a static set of &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.0/discovery-settings.html#initial_master_nodes"&gt;master nodes&lt;/a&gt; so you don't have a &lt;a href="https://en.wikipedia.org/wiki/Split-brain_(computing)"&gt;split brain problem&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Other nodes will then discover the cluster automatically as long as the master nodes are present. &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/blog/elasticsearch-logging-secrets"&gt;Enable Trace logging&lt;/a&gt; to review discovery related activities.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XPUT&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; localhost:9200/_cluster/_settings &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'
  {
    "transient": {"logger.discovery.zen":"TRACE"}
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Review configuration such as &lt;code&gt;minimum_master_nodes&lt;/code&gt; (if older than 6.x).&lt;/li&gt;
&lt;li&gt;Look at whether all master nodes in your initial master nodes list can ping each other. &lt;/li&gt;
&lt;li&gt;Review whether you have &lt;em&gt;quorum&lt;/em&gt;, which is &lt;code&gt;number of master nodes / 2 + 1&lt;/code&gt;. If you have fewer than quorum, no updates to the cluster state will occur, to protect data integrity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sometimes network or DNS issues can cause the original master nodes to not be reachable. &lt;/li&gt;
&lt;li&gt;Review that you have at least &lt;code&gt;number of master nodes / 2 + 1&lt;/code&gt; master nodes currently running.&lt;/li&gt;
&lt;/ul&gt;
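
&lt;p&gt;On Elasticsearch 7.x and later, the initial set of master-eligible nodes is declared in &lt;code&gt;elasticsearch.yml&lt;/code&gt; (the node names below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # elasticsearch.yml
  cluster.initial_master_nodes:
    - master-node-1
    - master-node-2
    - master-node-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;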

&lt;h2&gt;
  
  
  Shard allocation errors
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Elasticsearch in Yellow or Red State (Unassigned shards)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;When a node reboots or a cluster restore is started, the shards are not immediately available.&lt;/li&gt;
&lt;li&gt;Recovery is throttled to ensure the cluster does not get overwhelmed.&lt;/li&gt;
&lt;li&gt;Yellow state means primary indices are allocated, but secondary (replica) shards have not been allocated yet. While yellow indices are both readable and writable, availability is decreased. Yellow state is usually self-healable as the cluster replicates shards.&lt;/li&gt;
&lt;li&gt;A red index means its primary shards are not allocated. This could be transient, such as during a snapshot restore operation, but it can also imply major problems such as missing data.&lt;/li&gt;
&lt;/ul&gt;
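
&lt;p&gt;The cluster health API gives a quick summary of the current state, including counts of unassigned shards:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl -XGET 'http://localhost:9200/_cluster/health?pretty'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;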

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;See the reason why allocation has stopped:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_cluster/allocation/explain'&lt;/span&gt;

  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get a list of red indices to understand which ones are contributing to the red state. The cluster will remain red as long as at least one index is red.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http:localhost:9200/_cat/indices'&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;red
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For more detail on a single index, you can view the recovery status of the offending index:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http:localhost:9200/index_1/_recovery'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If you see a timeout from max_retries (perhaps the cluster was busy during allocation), you can temporarily increase the retry limit (the default is 5). Once the limit is higher than the number of attempted retries, Elasticsearch will start to initialize the unassigned shards.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XPUT&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; localhost:9200/index1,index2/_settings &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'
  {
    "index.allocation.max_retries": 7
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Elasticsearch Disk issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Index is read-only
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What does it do:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Elasticsearch has three &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#disk-based-shard-allocation"&gt;disk-based watermarks&lt;/a&gt; that influence shard allocation. The &lt;code&gt;cluster.routing.allocation.disk.watermark.low&lt;/code&gt; watermark prevents new shards from being allocated to a node whose disk is filling up. By default, this is 85% of disk used.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;cluster.routing.allocation.disk.watermark.high&lt;/code&gt; watermark forces the cluster to start moving shards off that node to other nodes. By default, this is 90%. The cluster will keep moving data until usage is below the high watermark. If disk usage exceeds the flood-stage watermark &lt;code&gt;cluster.routing.allocation.disk.watermark.flood_stage&lt;/code&gt;, the disk is getting so full that moving shards may not be fast enough before the disk runs out of space. When this watermark is reached, indices are placed in a read-only state to avoid data corruption.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What to look for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Look at the disk space on each node.&lt;/li&gt;
&lt;li&gt;Review node logs for a message like the one below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  high disk watermark &lt;span class="o"&gt;[&lt;/span&gt;90%] exceeded on XXXXXXXX free: 5.9gb[9.5%], shards will be relocated away from this node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once the flood stage is reached, you'll see logs like so:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  flood stage disk watermark &lt;span class="o"&gt;[&lt;/span&gt;95%] exceeded on XXXXXXXX free: 1.6gb[2.6%], all indices on this node will be marked read-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once this happens, the indices on that node are read-only.&lt;/li&gt;
&lt;li&gt;To confirm, check which indices have &lt;code&gt;read_only_allow_delete&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XGET&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:9200/_all/_settings?pretty'&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;read_only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  How to fix:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;First, free up disk space, for example by deleting local logs or tmp files. &lt;/li&gt;
&lt;li&gt;Then, to remove the read-only block, run the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-XPUT&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; localhost:9200/_all/_settings &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'
  {
    "index.blocks.read_only_allow_delete": null
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Troubleshooting stability and performance issues can be challenging. The best way to find the root cause is the scientific method: form a hypothesis, then prove it correct or incorrect. Using these tools and the Elasticsearch management API, you can gain a lot of insight into how Elasticsearch is performing and where issues may lie. &lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>debug</category>
      <category>api</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>API-First Product Managers’ Popular API Tools and API Metrics</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Wed, 27 Sep 2023 23:14:30 +0000</pubDate>
      <link>https://forem.com/moesif/api-first-product-managers-popular-api-tools-and-api-metrics-3kn5</link>
      <guid>https://forem.com/moesif/api-first-product-managers-popular-api-tools-and-api-metrics-3kn5</guid>
      <description>&lt;p&gt;We interviewed the product managers at a number of the larger API-first companies that are based in San Francisco. The companies are all publicly traded, have TTM revenue of more than $100M and are in the fields of billing, security, communications and workflow automation.&lt;/p&gt;

&lt;p&gt;The PMs were asked what their &lt;strong&gt;favorite tools&lt;/strong&gt; were and which &lt;strong&gt;API metrics&lt;/strong&gt; they cared most about. Where possible we identified tools and metrics that were common across all market segments, excluding the (many) edge cases you’d expect when a customer base numbers in the thousands.&lt;/p&gt;

&lt;p&gt;Not surprisingly, the answers segmented neatly into three classic areas: adoption, engagement and retention. Before we dive into those areas, we need to get the data into our analytics ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started is Tough — Huge Amounts of Data
&lt;/h2&gt;

&lt;p&gt;One of the most consistent takeaways is that the storage and data processing required to analyze billions of API calls is huge. Data lakes often grow so large that retroactive analyses have to be limited to just a few days, or even the last few hours.&lt;/p&gt;

&lt;p&gt;In many cases, the first step companies take is to dump the unstructured API data, or the entire raw dump of their syslog, into a data lake in &lt;a href="https://aws.amazon.com/redshift/" rel="noopener noreferrer"&gt;Amazon Redshift&lt;/a&gt; or &lt;a href="https://www.splunk.com/" rel="noopener noreferrer"&gt;Splunk&lt;/a&gt;. From there, the data infrastructure team pulls out the syslog events the PM is interested in and passes them to the data warehouse, often in &lt;a href="https://www.snowflake.com/en/" rel="noopener noreferrer"&gt;Snowflake&lt;/a&gt;, where it’s more easily queryable. Here, the actual processing and aggregating of metrics takes place, often under the auspices of the Business Intelligence team, the PMs and perhaps even engineers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adoption: Tools and Metrics
&lt;/h2&gt;

&lt;p&gt;For the majority of API-first companies we talked to, one of the first, and arguably most important, metrics that PMs track is &lt;strong&gt;developer activation&lt;/strong&gt;. In general, the steps in product adoption are straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign up for an account&lt;/li&gt;
&lt;li&gt;First API Call&lt;/li&gt;
&lt;li&gt;Deploy a Working API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8uo1dtskgy8xprnqxjnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8uo1dtskgy8xprnqxjnr.png" alt="Adoption Funnel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our cohort of established API-first companies use a &lt;a href="https://www.tableau.com/" rel="noopener noreferrer"&gt;Tableau&lt;/a&gt; or &lt;a href="https://cloud.google.com/looker/" rel="noopener noreferrer"&gt;Looker&lt;/a&gt; dashboard that displays how many people are signing up, of those sign ups how many are logging in, of those logins how many are creating an app, and of those apps how many mint API tokens.&lt;/p&gt;
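&lt;p&gt;That dashboard boils down to a step-by-step funnel conversion. A minimal sketch, with invented counts:&lt;/p&gt;

```python
# Hypothetical funnel counts from sign-up through minting an API token.
funnel = [
    ("signed up", 1000),
    ("logged in", 640),
    ("created an app", 320),
    ("minted an API token", 208),
]

# Conversion rate of each step relative to the previous one.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    rate = 100 * n / prev_n
    print(f"{prev_step} -> {step}: {rate:.0f}%")
```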

&lt;p&gt;Predominantly, PMs’ OKRs are devoted to increasing the &lt;strong&gt;developer activation rate&lt;/strong&gt; and decreasing the &lt;strong&gt;time to activation&lt;/strong&gt;. Since devs can stay in a single funnel stage for days or longer, it’s important to track both the conversion rate for each step and the time it takes to reach the next step.&lt;/p&gt;

&lt;p&gt;If the normal sales cycle is 90 days, PMs like to look at the percentiles: what the fiftieth percentile is doing, what the seventy-fifth percentile is doing, and then use that as a proxy for how useful their SDKs and documentation are.&lt;/p&gt;
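&lt;p&gt;Assuming each developer’s time to activation can be exported in days, those fiftieth and seventy-fifth cut points can be computed with the standard library (the sample data here is invented):&lt;/p&gt;

```python
from statistics import quantiles

# Hypothetical days from sign-up to first API call for one cohort.
days_to_activation = [1, 2, 2, 3, 5, 8, 13, 21, 34, 60]

# quantiles(n=4) returns the 25th, 50th and 75th percentile cut points.
q1, median, q3 = quantiles(days_to_activation, n=4)
print(f"50th percentile: {median} days, 75th percentile: {q3} days")
```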

&lt;p&gt;Once the API is adopted, PMs want to see usage increase toward a paid plan, see which endpoints are popular, and be able to identify missing features. At this stage the buying motion of the customer bifurcates depending on company size: large enterprise or SMB/startup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engagement: Enterprise Customer Tools and Metrics
&lt;/h2&gt;

&lt;p&gt;For the most part, developers are asked by their leadership to evaluate the API product as a possibility. Sometimes they create a developer org and try out all of the features. Then, when their company decides to sign on the dotted line, they end up provisioning a separate account. Mapping the developer org to the paid account, and tying it to the revenue accounts in Salesforce, is not always clear. So instead of trying to solve that mapping problem, PMs sometimes just focus on driving more adoption, since adoption is a really good proxy for whether or not customers are going to use the product.&lt;/p&gt;

&lt;h2&gt;
  
  
  API Tokens By Acquisition Channel
&lt;/h2&gt;

&lt;p&gt;Most companies see utility in tracking activity in the user-facing console to help grow usage and engagement. When customers sign up, configure their account, manage which APIs are available, or switch features on and off, they go through the management web interface. If your API monitoring tool is not user-centric (i.e. it doesn’t have the ability to drill down into an API call and identify which user and company it’s attributed to), then PMs have to deploy analytics tools like &lt;a href="https://www.heap.io/" rel="noopener noreferrer"&gt;Heap&lt;/a&gt; or &lt;a href="https://marketingplatform.google.com/about/analytics-360/" rel="noopener noreferrer"&gt;Google Analytics 360&lt;/a&gt;. These tools are then configured to associate users on the web interface with the API calls others in their organizations might be making.&lt;/p&gt;

&lt;p&gt;PMs can then track &lt;strong&gt;marketing channel attribution&lt;/strong&gt; to the respective Google or Facebook ad. They can track from account creation, through when the customer converted into a paid plan, and onto when they first started making API calls.&lt;/p&gt;

&lt;p&gt;In a user-centric tool, UTM parameters are monitored in effectively the same way as HTTP status response codes. This enables grouping of API tokens by UTM source or UTM campaign, to better understand which marketing channels contribute to engagement.&lt;/p&gt;
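&lt;p&gt;A sketch of that grouping, assuming the UTM source captured at sign-up can be joined to the API tokens an account later mints (all names here are hypothetical):&lt;/p&gt;

```python
from collections import Counter

# Hypothetical mapping from minted API token to the UTM source recorded
# at sign-up, plus the set of tokens active this period.
token_utm_source = {
    "tok-1": "google-ads",
    "tok-2": "google-ads",
    "tok-3": "newsletter",
}
active_tokens = ["tok-1", "tok-2", "tok-3", "tok-9"]

# Tokens with no recorded UTM source fall into an "unattributed" bucket.
by_channel = Counter(token_utm_source.get(t, "unattributed") for t in active_tokens)
print(by_channel.most_common())
```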

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajthr38l853k7u9q1nxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajthr38l853k7u9q1nxc.png" alt="API Activation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Weekly Active API Tokens
&lt;/h2&gt;

&lt;p&gt;The number of distinct tokens accessing the API in a given week, the &lt;strong&gt;Weekly Active Tokens&lt;/strong&gt; (WAT), is one of the best north star metrics that PMs use to track their products. Unlike infrastructure metrics like Uptime, SLOs or Requests per Minute that are aligned with engineering goals, WAT is directly aligned with the business goal of driving adoption and increasing engagement. To calculate WAT the data infrastructure team needs to pull out the relevant syslog event from Redshift and pass it into Snowflake. Once there, the BI team writes the SQL query and visualizes it in Tableau.&lt;/p&gt;

&lt;p&gt;Since a single developer account can create multiple API tokens, such as for sandbox and production environments, a more accurate measurement would be &lt;strong&gt;Weekly Active Users&lt;/strong&gt; or &lt;strong&gt;Weekly Active Companies&lt;/strong&gt;. However, this requires analytics infrastructure that can link API tokens to a respective user or company account.&lt;/p&gt;
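&lt;p&gt;A minimal sketch of the WAT calculation, assuming the raw log has already been reduced to (date, token, company) events (the field names are stand-ins for whatever your warehouse schema uses). Counting distinct companies per week follows the same pattern:&lt;/p&gt;

```python
from collections import defaultdict
from datetime import date

# Hypothetical API-call events: (call date, api_token, company).
events = [
    (date(2023, 9, 18), "tok-sandbox", "acme"),
    (date(2023, 9, 19), "tok-prod", "acme"),
    (date(2023, 9, 20), "tok-xyz", "globex"),
    (date(2023, 9, 26), "tok-prod", "acme"),
]

weekly_tokens = defaultdict(set)     # (ISO year, week) -> distinct tokens
weekly_companies = defaultdict(set)  # (ISO year, week) -> distinct companies
for day, token, company in events:
    iso_week = day.isocalendar()[:2]
    weekly_tokens[iso_week].add(token)
    weekly_companies[iso_week].add(company)

for week in sorted(weekly_tokens):
    print(week, "WAT:", len(weekly_tokens[week]),
          "active companies:", len(weekly_companies[week]))
```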

&lt;h2&gt;
  
  
  Number of Users
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;“Make it easy to invite others”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Some PMs found that there’s a &lt;strong&gt;direct correlation&lt;/strong&gt; between accounts converting and the number of users. More users often meant that the customer was more serious about the project. So PMs made a push to invite others during the sign-up flow, by saying things like “invite someone else to this project to help you do the work.” An added bonus is that the invitation is another chance to get a corporate email from a user, since the inviter might not know the invitee’s Gmail address, but would know their work email.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engagement: SMB/Startup Self-Service Customers
&lt;/h2&gt;

&lt;p&gt;In a self-service buying motion the customer is an independent developer, at a five- or ten-person startup or at an SMB, who can just put in their CTO’s credit card and start using the paid service right away.&lt;/p&gt;

&lt;p&gt;It’s difficult to get additional insights from this group over and above what PMs do for enterprise accounts, since most devs vastly prefer the self-service route.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It’s not an absolute statement but for the most part developers don’t want to talk to you, they’re averse to talk to sales and they don’t want to respond to emails. In fact, they often sign up with personal emails to try and hide who they’re working for,” PM in San Francisco.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, a proxy for developer sentiment can be gleaned by looking at what devs are using in the product, what they click on, what API calls they make and what the usage stats are for the API’s SDKs on GitHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  Retention
&lt;/h2&gt;

&lt;p&gt;Once the PMs had a good understanding of adoption and engagement, they looked to &lt;strong&gt;API product retention&lt;/strong&gt; to find areas that required improvement. Product retention is a concept born out of revenue retention and requires segmenting the user base into cohorts, such as by sign-up date. The PM then tracks the percentage of each cohort that returns to engage with the platform. In the example below, API retention is grouped by the user’s SDK. You can see that PHP has a far lower retention percentage than the other SDKs, implying that the PHP SDK is buggy or has a performance issue that requires fixing.&lt;/p&gt;
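&lt;p&gt;The cohort calculation itself is straightforward once each account is tagged with its SDK and whether it returned in the period. A sketch, with invented data:&lt;/p&gt;

```python
from collections import defaultdict

# Hypothetical accounts: (account id, sdk, returned this period?).
accounts = [
    ("a1", "python", True), ("a2", "python", True), ("a3", "python", False),
    ("b1", "php", True), ("b2", "php", False), ("b3", "php", False),
]

totals = defaultdict(int)
retained = defaultdict(int)
for _, sdk, returned in accounts:
    totals[sdk] += 1
    if returned:
        retained[sdk] += 1

# Retention percentage per SDK cohort.
for sdk in sorted(totals):
    print(f"{sdk}: {100 * retained[sdk] / totals[sdk]:.0f}% retained")
```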

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx42zasprunhf9p5w3izq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx42zasprunhf9p5w3izq.jpg" alt="API Retention"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another way to determine which product features to add or deprecate is to watch &lt;strong&gt;billing SKUs&lt;/strong&gt;. Many APIs are divided into a set of SKUs, with each activity type assigned its own SKU. By looking at who is paying for what, it’s possible to identify which features are being used and which are not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Metric Tracking is a Bear
&lt;/h2&gt;

&lt;p&gt;The slow velocity of business intelligence reporting is definitely a problem from the PMs’ point of view.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It just takes way too long after putting in a request for a new metric before you get the statistics out the other end,” said a disgruntled PM.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Setting up tracking for a metric is a five-step process. It involves making a request to the separate BI team, who then has to triage the request and pull it in, often with negotiation and politics along the way. Representative steps include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Does the data in question have an event?&lt;/li&gt;
&lt;li&gt;If yes, is it in the data warehouse? If not, someone on the data infrastructure team needs to create a new syslog event and pull it in.&lt;/li&gt;
&lt;li&gt;Write the requirements for how the metric is to be visualized in Tableau, or change the existing reports.&lt;/li&gt;
&lt;li&gt;The BI data team executes the request.&lt;/li&gt;
&lt;li&gt;If BI can’t visualize it because they’re too busy or it’s outside their capabilities, the PM has to ask engineering to run a custom SQL query against the database itself.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  User-Centric Agile Reporting
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;“PMs are never going to turn down a more actionable, or sort of more agile, reporting toolset,” said a PM at a leader in security.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Many of the companies we interviewed formed their DevEx group from scratch, simply adopting whatever tools the BI team was using. Custom queries into data warehouses were built because there weren’t any off-the-shelf options. But since then, tooling has come a long way.&lt;/p&gt;

&lt;p&gt;Today, API analytics tools help everyone at API-driven organizations learn from their API data and make smarter decisions that drive growth.&lt;/p&gt;

&lt;p&gt;We’re at the stage where product managers at API-first companies recognize that good tools can give them unique insights into developer success, and can be just as important to corporate success as reliable SDKs or complete documentation.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>api</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to Maximize Product Led Growth with Customer Success Best Practices</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Tue, 26 Sep 2023 22:27:26 +0000</pubDate>
      <link>https://forem.com/moesif/how-to-maximize-product-led-growth-with-customer-success-best-practices-1fb6</link>
      <guid>https://forem.com/moesif/how-to-maximize-product-led-growth-with-customer-success-best-practices-1fb6</guid>
      <description>&lt;p&gt;If you want to turbocharge your Product Led Growth (PLG), it’s time to take a long, hard look at your Customer Success Management practices. The way you approach your customer success activities can make a significant difference in how fast and how seamlessly you scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Product Led Growth?
&lt;/h2&gt;

&lt;p&gt;A product led growth strategy puts software, guides, demos and documentation at the center of a business’ growth strategy. The company’s success relies on the product to do much of the selling itself, through its ease of adoption, usability, features and performance. As your adoption funnel is developed and optimized, customer experience will become more consistent, leading to less friction in your PLG company’s growth.&lt;/p&gt;

&lt;p&gt;An experienced product manager often runs businesses so that users are able to discover, experiment with, and adopt the product on their own terms and on their own timescale. That’s not to say that sales teams and customer support services aren’t required, they just tend to be &lt;a href="https://www.moesif.com/blog/developer-platforms/self-service/What-Is-Product-Led-Growth-and-Why-Is-It-Critical-for-API-First-Companies/?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=plg-csm-best-practices"&gt;layered in&lt;/a&gt; at a later stage of the customer journey.&lt;/p&gt;

&lt;p&gt;A product led approach can enable businesses to scale rapidly, particularly when they implement customer success management best practices alongside a product led growth mindset.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Customer Success Management and How Does it Underpin Product Led Growth?
&lt;/h2&gt;

&lt;p&gt;Customer Success Management (CSM) is a bridge between a business’ product and its customers. When used &lt;a href="https://www.moesif.com/blog/customer-success/monitoring/Why-Data-Driven-Customer-Success-is-Essential-in-Today-s-COVID-19-World/?utm_campaign=Int-site&amp;amp;utm_source=blog&amp;amp;utm_medium=body-cta&amp;amp;utm_content=maximize-plg-with-csm-best-practices%22&amp;amp;utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=plg-csm-best-practices"&gt;proactively&lt;/a&gt;, CSM best practices can ensure you understand your customers’ businesses, help them stand up your product with minimal effort, and guarantee that your product provides real value at every stage of the customer journey.&lt;/p&gt;

&lt;p&gt;For product led growth companies, the customer success team acts as a guide to help customers achieve their goals. Engagement and improving the customer experience are at the heart of this approach, as engagement is the best indicator that a customer journey will end successfully — with happy customers, positive customer feedback and, ideally, a subscription plan.&lt;/p&gt;

&lt;p&gt;Regular, proactive engagement also means that customers can realize greater value from the product — if they engage with your customer success team to define the scope of their project, your team can help them with tools that achieve their goals faster and more efficiently. Sharing those insights with your product team results in a more cohesive PLG company and one where product led content will invariably improve customer experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top Tips on How to Maximize Product Led Growth with Customer Success Best Practices
&lt;/h2&gt;

&lt;p&gt;A customer success manager uses a range of tools and techniques that empower customers to be more efficient and realize additional value from the product. The first of these is to focus on data points, a focus that often reveals issues customers can be helped with. By reviewing customers’ goals, SDKs, UTMs, and other hyper-personalized data points, the customer success team can quickly build up a deep understanding of each customer. The team then analyzes every step in customer onboarding, from discovery and signing up, through to growing usage and feature expansion. These instances serve as touch points to reach out to customers, cement relationships and ensure customer retention.&lt;/p&gt;

&lt;p&gt;Another important tip is to use a tool that supports detailed customer insights. With an in-depth analytics dashboard in place, it’s easy to assess account health and discover product usage trends. That data then forms the basis of conversations with customers, as &lt;a href="https://www.moesif.com/blog/ebooks/pverify-quickly-scales-their-healthcare-platform-with-hipaa-compliant-api-analytics-during-covid/?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=plg-csm-best-practices"&gt;pVerify&lt;/a&gt; observed when it deployed Moesif’s HIPAA-compliant API analytics for Covid testing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It’s completely pointless if you cannot pass your analytics data onto your clients. Embedded dashboards solved that for us. As a scientist, I’m all about data analytics. Finding, displaying and sharing API metrics like 400/500 errors, when you have thousands of customers and millions of API calls, is very difficult. Moesif solved that for us.” Rob Dejournett, CTO, COO, pVerify.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With a proactive approach to CSM, analytics and monitoring, it’s possible to &lt;a href="https://www.moesif.com/blog/customer-success/monitoring/Data-Driven-Customer-Success-and-How-API-Data-Provide-the-Leading-Indicator-of-Account-Health/?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=plg-csm-best-practices"&gt;identify&lt;/a&gt; many customer problems at an early stage. This is another key tip for maximizing product led growth, as it can open up productive conversations with customers at just the right time in their growth journey, helping them overcome any unexpected bumps in the road. That’s not to say that upset customers won’t pop up, but when they do, a robust CSM approach means that you can immediately access key account metrics and take a laser-focused approach to the problem:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It’s handy to have immediate access to key account metrics. You can see what’s happening and say, ‘Yes, you did this wrong. See all these failed calls? Let’s look into the body of those calls and see if we can reproduce that — make an autopsy of those fails.’” Oliver Burt, Customer Success Manager, Moesif&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A dashboard that aggregates your accounts’ health is the final CSM best practice for maximizing product led growth. An average CSM professional will manage perhaps 15 accounts: some doing well, some doing badly and some throwing errors. A dashboard provides both oversight and peace of mind. Being able to produce dashboards by account or by cohorts such as company or vertical, and with a global perspective, enables CSM teams to share customer acquisition and customer retention statistics with their line managers, product management teams and across the company.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Automation as Part of Your Product Led CSM Strategy
&lt;/h2&gt;

&lt;p&gt;Automating certain CSM functions is advantageous to both the user and your CSM team. When certain conditions, such as onboarding difficulties, are found, it may make sense to show an in-app notification or email pointing users towards helpful content on behalf of the CSM team. This benefits the internal CSM team by taking that load off of their plate and potentially helping the user in a “hands-off” fashion. For the user, this means that help is dispatched immediately and can be used at their leisure, as needed. This approach lends itself well to the “product bumper” methodology that many PLG advocates point to.&lt;/p&gt;

&lt;p&gt;By using automation, users can be helped more immediately instead of waiting for a call with the CSM team. By strategically pointing users to docs, tutorials, and new features based on events within the application or service, the user experience is improved. Of course, there is always the fallback of users sidestepping the automation and engaging with the CSM team anyway. Having a highly available CSM team plus automation is a great way to augment your PLG strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Role of CSM Best Practices in Achieving Product Led Growth
&lt;/h2&gt;

&lt;p&gt;CSM best practices can underpin product led growth in multiple ways. They can help customers stand up your product faster, reduce customer churn and accelerate product upsell. They can deliver greater recurring revenue and happier customers. And they can ensure that any growth trends are identified at an early stage, so that your business can support its customers and ensure they are ready and on the right path when usage explodes.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>productivity</category>
      <category>discuss</category>
      <category>management</category>
    </item>
    <item>
      <title>Building a RESTful API with Java Spring Boot</title>
      <dc:creator>Xing Wang</dc:creator>
      <pubDate>Fri, 07 Apr 2023 20:10:36 +0000</pubDate>
      <link>https://forem.com/moesif/building-a-restful-api-with-java-spring-boot-38j4</link>
      <guid>https://forem.com/moesif/building-a-restful-api-with-java-spring-boot-38j4</guid>
      <description>&lt;p&gt;Spring Boot is a popular framework for creating powerful RESTful APIs, and in this tutorial, we will use it to develop a simple API that simulates a credit score rating. The API endpoint we will create will allow a user to retrieve a credit score rating by sending a request to the API. However, it is important to note that we will not be linking up to any actual backend systems to pull a real credit score, instead we will use a random number generator to generate the score and return it to the user. Although this API is relatively simple, it will demonstrate the basics of REST API development using Java and Spring Boot. This tutorial will provide a hands-on introduction to building RESTful APIs with Java Spring Boot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we start, we must ensure that a couple of prerequisites are completed. To follow along and run this tutorial, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A working Java installation&lt;/li&gt;
&lt;li&gt;Gradle or Maven build tools installed (we'll be using Gradle)&lt;/li&gt;
&lt;li&gt;An IDE or text editor of your choice&lt;/li&gt;
&lt;li&gt;Postman to test our endpoint&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating the Initial Project
&lt;/h2&gt;

&lt;p&gt;We'll be using the &lt;a href="https://start.spring.io"&gt;Spring Initializr&lt;/a&gt; tool to create our initial project. The Spring Initializr generator provides many options allowing for quick customizations to our starter project.&lt;/p&gt;

&lt;p&gt;The options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Project&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The type of build tool to be used with the project&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The language the project will use&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring Boot&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The Spring Boot version to use&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Project Metadata&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Group name&lt;/strong&gt;: Uniquely identifies your project across all projects&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Artifact name&lt;/strong&gt;: The name of the jar, without the version&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Name&lt;/strong&gt;: The name of the project&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Description&lt;/strong&gt;: A description of the project&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Package name&lt;/strong&gt;: The name of the package, typically a combination of the group and artifact names&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Packaging type&lt;/strong&gt;: The type of package created when building your project&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Java Version&lt;/strong&gt;: The version of Java to be used in the project&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select and include many dependencies from developer tools and databases to web and security frameworks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll be using the following settings for our project:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nvL_sB1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1vug6rs4p26pxwp5873x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nvL_sB1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1vug6rs4p26pxwp5873x.png" alt="Image description" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Selecting &lt;em&gt;Generate&lt;/em&gt; will initiate a download for the sample project. Unzip and open the folder generated by Spring Initializr, in our case called RESTDemo, in your favorite editor. This is where we will add our API code. Run the following command to ensure our project builds correctly out of the gate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./gradlew build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding in The Code
&lt;/h2&gt;

&lt;p&gt;Under the path &lt;code&gt;/src/main/java/com/example/RESTDemo&lt;/code&gt; we'll create a file called &lt;code&gt;APIController.java&lt;/code&gt;. This is where we will house our &lt;code&gt;creditScore&lt;/code&gt; endpoint as well as the code to randomly generate the return value.&lt;/p&gt;

&lt;p&gt;Populate the newly created file with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;com.example.RESTDemo&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;java.util.Random&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.springframework.web.bind.annotation.ResponseBody&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.springframework.web.bind.annotation.RequestMapping&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.springframework.web.bind.annotation.RestController&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;@RestController&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;APIController&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="nd"&gt;@RequestMapping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/creditscore"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nd"&gt;@ResponseBody&lt;/span&gt;
  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;creditscore&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;creditScoreMin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;creditScoreMax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;900&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nc"&gt;Random&lt;/span&gt; &lt;span class="n"&gt;rand&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Random&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;randomCreditScore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rand&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;nextInt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;creditScoreMin&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;creditScoreMax&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"{ \"credit_score\": \"%d\" }"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;randomCreditScore&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;@RestController&lt;/code&gt; annotation is a specialized version of Spring MVC's &lt;code&gt;@Controller&lt;/code&gt; annotation, used to build RESTful web services with Spring. It tells Spring that this class is a controller whose methods should handle incoming web requests.&lt;/p&gt;

&lt;p&gt;Inside this class there is a single method, &lt;code&gt;creditscore()&lt;/code&gt;, annotated with &lt;code&gt;@RequestMapping("/creditscore")&lt;/code&gt;. The &lt;code&gt;@RequestMapping&lt;/code&gt; annotation maps the method to a specific URI; when the app is running locally, the endpoint is reachable at &lt;code&gt;localhost:8080/creditscore&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This method is also annotated with &lt;code&gt;@ResponseBody&lt;/code&gt;, which tells Spring to bind the method's return value to the body of the HTTP response. Strictly speaking this annotation is redundant here, since &lt;code&gt;@RestController&lt;/code&gt; already applies &lt;code&gt;@ResponseBody&lt;/code&gt; to every handler method, but it does no harm to be explicit.&lt;/p&gt;
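&lt;p&gt;Note that &lt;code&gt;@RequestMapping&lt;/code&gt; with no &lt;code&gt;method&lt;/code&gt; attribute matches every HTTP verb. If you want the endpoint to answer only GET requests, Spring's &lt;code&gt;@GetMapping&lt;/code&gt; shortcut is a drop-in replacement (a minimal sketch; the fixed return value is purely illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import org.springframework.web.bind.annotation.GetMapping;

// Equivalent to @RequestMapping(value = "/creditscore", method = RequestMethod.GET)
@GetMapping("/creditscore")
public String creditscore() {
  return String.format("{ \"credit_score\": \"%d\" }", 750); // illustrative fixed value
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;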

&lt;p&gt;The method body of &lt;code&gt;creditscore()&lt;/code&gt; declares two integers, &lt;code&gt;creditScoreMin&lt;/code&gt; and &lt;code&gt;creditScoreMax&lt;/code&gt;. Using the &lt;code&gt;Random&lt;/code&gt; instance we created, &lt;code&gt;nextInt(creditScoreMin, creditScoreMax)&lt;/code&gt; generates a number from &lt;code&gt;creditScoreMin&lt;/code&gt; (inclusive) up to &lt;code&gt;creditScoreMax&lt;/code&gt; (exclusive). Finally, the method returns a JSON string representation of the generated credit score. Be aware that the two-argument form of &lt;code&gt;nextInt&lt;/code&gt; requires Java 17 or later.&lt;/p&gt;

&lt;p&gt;It's also worth noting that this code does not handle any request parameters, as it takes no input from the user; this makes it an example of the simplest possible endpoint.&lt;/p&gt;
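&lt;p&gt;If you later want the caller to influence the result, Spring can bind query parameters for you with &lt;code&gt;@RequestParam&lt;/code&gt;. As a sketch (the &lt;code&gt;min&lt;/code&gt; and &lt;code&gt;max&lt;/code&gt; parameters are hypothetical additions, not part of the tutorial's endpoint), the handler could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.Random;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;

// GET /creditscore?min=600&amp;max=800 — both parameters are optional
@RequestMapping("/creditscore")
public String creditscore(
    @RequestParam(value = "min", defaultValue = "500") int min,
    @RequestParam(value = "max", defaultValue = "900") int max) {
  int randomCreditScore = new Random().nextInt(min, max);
  return String.format("{ \"credit_score\": \"%d\" }", randomCreditScore);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the defaults in place, a bare request to &lt;code&gt;/creditscore&lt;/code&gt; behaves exactly like the original endpoint.&lt;/p&gt;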

&lt;h2&gt;
  
  
  Running and Testing the API
&lt;/h2&gt;

&lt;p&gt;With our code written, run the following command to compile the project into an executable &lt;code&gt;.jar&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./gradlew build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let’s run our simple web API by using the following command in the root directory of the app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;java &lt;span class="nt"&gt;-jar&lt;/span&gt; build/libs/RESTDemo-0.0.1-SNAPSHOT.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API is now operational and ready for testing. You can use a tool like Postman to send an HTTP request to the API. The endpoint to get a credit score is &lt;code&gt;localhost:8080/creditscore&lt;/code&gt;. When you send a request to this endpoint, you should receive a &lt;code&gt;200 OK&lt;/code&gt; status code as well as a credit score generated by the random number generator.&lt;/p&gt;
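&lt;p&gt;If you prefer the command line, the same check can be done with curl (assuming the app is still running on the default port 8080):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# -i prints the response status line and headers along with the body&lt;/span&gt;
curl &lt;span class="nt"&gt;-i&lt;/span&gt; http://localhost:8080/creditscore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The body should be a small JSON object such as &lt;code&gt;{ "credit_score": "735" }&lt;/code&gt;, with a different random value on each request.&lt;/p&gt;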

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ivdLUTjv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b107tr4dy6vs4nmqr18z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ivdLUTjv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b107tr4dy6vs4nmqr18z.png" alt="Image description" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;In summary, we have developed a basic RESTful API using Java Spring Boot. This code can serve as a foundation for creating more complex APIs for your application. As you continue to develop the API, you may want to consider implementing security measures such as an API key, integrating with an API gateway, &lt;a href="https://www.moesif.com/features/api-analytics?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=jaca-boot-rest-api"&gt;monitoring the usage of the API&lt;/a&gt;, or generating revenue through &lt;a href="https://www.moesif.com/solutions/metered-api-billing?utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=jaca-boot-rest-api"&gt;API monetization&lt;/a&gt;. If you are interested in exploring options for API analytics and monetization, check out &lt;a href="https://www.moesif.com/wrap?onboard=true&amp;amp;utm_campaign=DevTo&amp;amp;utm_source=placed-article&amp;amp;utm_medium=body-cta&amp;amp;utm_content=jaca-boot-rest-api"&gt;Moesif&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>java</category>
      <category>rest</category>
      <category>api</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
