<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Intuit Developers</title>
    <description>The latest articles on Forem by Intuit Developers (@intuitdev).</description>
    <link>https://forem.com/intuitdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F6309%2Fb46972b5-dbe3-471d-9233-3b7b45025dee.png</url>
      <title>Forem: Intuit Developers</title>
      <link>https://forem.com/intuitdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/intuitdev"/>
    <language>en</language>
    <item>
      <title>Making Accessible Links in SwiftUI</title>
      <dc:creator>Harris Borawski</dc:creator>
      <pubDate>Wed, 18 Dec 2024 21:41:02 +0000</pubDate>
      <link>https://forem.com/intuitdev/making-accessible-links-in-swiftui-3l5g</link>
      <guid>https://forem.com/intuitdev/making-accessible-links-in-swiftui-3l5g</guid>
      <description>&lt;p&gt;SwiftUI has streamlined Apple platform UI development, but it's not without its flaws, especially regarding accessibility. A notable concern is the inaccessibility of links within the &lt;code&gt;Text&lt;/code&gt; view—screen readers fail to recognize these inline links as actionable elements. In this post, we'll dissect this issue, exploring its impact on user experience and providing guidelines on how developers can enhance accessibility (&lt;a href="https://www.a11yproject.com/" rel="noopener noreferrer"&gt;a11y&lt;/a&gt;) in their SwiftUI applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;Let's start with some code to illustrate the problem. Here is a simple link within text, rendered using Markdown.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;string&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"https://developer.apple.com/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;
&lt;span class="kt"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Please review our [Terms and Conditions](&lt;/span&gt;&lt;span class="se"&gt;\(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;absoluteString&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="s"&gt;)"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hfr4g3k4rc6ijtuhptu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hfr4g3k4rc6ijtuhptu.png" alt="A screenshot from a iPhone simulator that shows a line of text saying 'Please review our Terms and Conditions'. 'Terms and Conditions' is styled as a link in a different color." width="656" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can imagine, inlining links with text is a common use case across apps. They're used for user agreements, deep-linking to other pages, and simply linking to external articles.&lt;/p&gt;

&lt;p&gt;However, when links are added like this, it is not clear to screen-readers that they are tappable. Using the Accessibility Inspector, we can see this issue in action for the code above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbr3067ooeqxzugjv5vt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbr3067ooeqxzugjv5vt.png" alt="The inspected element is 'Please review our Terms and Conditions, text'. It is of type 'text'." width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;"Please review our Terms and Conditions" is one label (good) with type 'text' (not good). The screen-reader will not announce this as interactive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Solution 1: Add a Button Trait
&lt;/h3&gt;

&lt;p&gt;Adding a button trait to the &lt;code&gt;Text&lt;/code&gt; element will ensure the screen-reader knows the element is interactive. Here’s how you could implement it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;string&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"https://developer.apple.com/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;
&lt;span class="kt"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Please review our [Terms and Conditions](&lt;/span&gt;&lt;span class="se"&gt;\(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;absoluteString&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="s"&gt;)"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accessibilityAddTraits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;isButton&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, when a text segment contains multiple links, differentiating between them becomes a challenge. The entire text block is treated as a single UI element, which isn't ideal when you want users to choose between multiple links. This solution can make the text look like a button to screen-readers, but it doesn't provide the granularity needed for multiple links within the same text.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution 2: Redesign the User Interface
&lt;/h3&gt;

&lt;p&gt;A more robust but labor-intensive solution is to redesign the user interface so that links are not embedded within surrounding text. By isolating links from other text, you can define each link as a SwiftUI &lt;code&gt;Link&lt;/code&gt;, which is inherently more accessible. (A &lt;code&gt;Link&lt;/code&gt; will be presented to screen-readers as a &lt;code&gt;button&lt;/code&gt;.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kt"&gt;Link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Terms and Conditions"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;destination&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;string&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"https://developer.apple.com/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach maintains the visual and functional distinction of each link, improving accessibility by making each link individually selectable and actionable.&lt;/p&gt;
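&lt;p&gt;If the design allows it, one middle-ground sketch (not from the original solutions above, and only suitable for short, single-line copy) is to place a &lt;code&gt;Text&lt;/code&gt; and a &lt;code&gt;Link&lt;/code&gt; side by side in an &lt;code&gt;HStack&lt;/code&gt;, keeping the sentence visually continuous while the link remains an individually focusable element:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;// Sketch: the link stays its own accessibility element.
// Caveat: unlike inline Markdown, this will not wrap across lines.
HStack(spacing: 4) {
    Text("Please review our")
    Link("Terms and Conditions",
         destination: URL(string: "https://developer.apple.com/")!)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;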

&lt;p&gt;However, redesigning the user interface may not always be feasible. It can involve significant changes to the app's design and layout, which may affect the project timeline, resources, or even the overall user experience.&lt;/p&gt;

&lt;p&gt;(For products that expect public usage, ensuring accessibility should be a priority from the outset of the design process, as retrofitting solutions often leads to compromises in user experience and functionality.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both solutions offer ways to make inline links more accessible in SwiftUI, though each comes with its own set of trade-offs. The choice between adding a button trait to the entire text block and redesigning the UI to isolate links should be made based on the specific needs of the project, the frequency and nature of the links in question, and the available resources. Of course, there are probably many other solutions as well, and I hope this article helps you on the path to exploring them!&lt;/p&gt;




&lt;h2&gt;
  
  
  About the Author
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/KVSRoyal" rel="noopener noreferrer"&gt;Koriann&lt;/a&gt; (like the "corian" in "coriander") South is an accomplished iOS developer with a passion for crafting seamless and accessible user interfaces. Since joining Intuit in 2022, she has played a pivotal role in the development of the company's design system, ensuring that it meets the highest standards of accessibility and usability. With a keen eye for detail and a commitment to inclusivity, Koriann's work helps ensure that Intuit's products are accessible to millions of users worldwide. In her spare time, Koriann enjoys exploring the latest advancements in technology and sharing her knowledge with the developer community.&lt;/p&gt;




&lt;h4&gt;
  
  
  AI Acknowledgement
&lt;/h4&gt;

&lt;p&gt;This was written with the help of ChatGPT. I wrote the draft, then I used ChatGPT to rephrase my sentences, and then I rewrote the rephrasings I was unsatisfied with and added a few more sentences for flavour.&lt;/p&gt;

</description>
      <category>swiftui</category>
      <category>ios</category>
      <category>a11y</category>
    </item>
    <item>
      <title>Join KubeCrash 2024: Your Crash Course on Platform Engineering the Cloud Native Way</title>
      <dc:creator>Lisa-Marie Namphy</dc:creator>
      <pubDate>Thu, 03 Oct 2024 16:59:20 +0000</pubDate>
      <link>https://forem.com/intuitdev/join-kubecrash-2024-your-crash-course-on-platform-engineering-the-cloud-native-way-43jl</link>
      <guid>https://forem.com/intuitdev/join-kubecrash-2024-your-crash-course-on-platform-engineering-the-cloud-native-way-43jl</guid>
      <description>&lt;p&gt;Platform Engineers let’s unite! Calling all Kubernetes and cloud native enthusiasts to join us at &lt;a href="http://KubeCrash.io" rel="noopener noreferrer"&gt;KubeCrash.io&lt;/a&gt;! &lt;/p&gt;

&lt;p&gt;Mark your calendars for Wednesday, October 9th, and prepare for a day packed with actionable insights and practical takeaways. We are so excited to participate in the 6th KubeCrash event with an amazing group of platform engineering experts and open source community members.&lt;/p&gt;

&lt;p&gt;Register here → &lt;a href="http://KubeCrash.io" rel="noopener noreferrer"&gt;http://KubeCrash.io&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is KubeCrash?
&lt;/h2&gt;

&lt;p&gt;Now in its sixth round, KubeCrash is a female-created and -led open source conference, hosted virtually a few weeks before KubeCon by six cloud native companies that teamed up to bring you top-notch crash courses in cloud native tech. Check out the talks from &lt;a href="https://www.kubecrash.io/past-events-spring-2024" rel="noopener noreferrer"&gt;KubeCrash Spring 2024&lt;/a&gt; to get a sense of what to expect. &lt;/p&gt;

&lt;h2&gt;
  
  
  What's New in Platform Engineering circa 2024
&lt;/h2&gt;

&lt;p&gt;At KubeCrash Fall 2024 you'll learn how to navigate the hype around platform engineering solutions and explore best practices for building and integrating new tools, with deep dives into optimizing and simplifying Kubernetes for developers using AI. &lt;/p&gt;

&lt;h2&gt;
  
  
  Giving Back with Open Source and Deaf Kids Code
&lt;/h2&gt;

&lt;p&gt;KubeCrash continues its commitment to the open source community and giving back. All KubeCrash sessions focus on open source projects, empowering you to leverage freely available, secure, and customizable technologies.&lt;/p&gt;

&lt;p&gt;And the KubeCrash (and sponsors) commitment goes beyond tech. As with previous KubeCrash events, for every registration KubeCrash will donate $1 to &lt;a href="https://www.deafkidscode.org/" rel="noopener noreferrer"&gt;Deaf Kids Code&lt;/a&gt;, supporting their mission of empowering deaf and hard-of-hearing children through coding education.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why open source
&lt;/h2&gt;

&lt;p&gt;Intuit believes very strongly in open source technology, and supports and contributes significantly to projects within the CNCF landscape and beyond! When it comes to enterprise-grade cloud native tooling, DevOps teams often drive the technology choices to build Kubernetes production environments. In short, open source has become critical to any modern technology stack. &lt;/p&gt;

&lt;p&gt;KubeCrash aims to help engineering teams develop the needed skill sets to leverage these technologies in their production environments effectively. During these knowledge-sharing, virtual learning sessions, developers, reliability engineers, cloud security specialists, and platform engineers will learn directly from the maintainers of some of the most popular open source projects. &lt;/p&gt;

&lt;h2&gt;
  
  
  The KubeCrash Program
&lt;/h2&gt;

&lt;p&gt;Get ready for a jam-packed schedule featuring expert talks from leading companies like The New York Times and Intuit, as well as members of the CNCF Deaf &amp;amp; Hard of Hearing and Blind and Visually Impaired Working Groups. &lt;/p&gt;

&lt;p&gt;You'll also hear directly from industry leaders and gain practical knowledge you can implement in your own work. Here's a sneak peek at some of the exciting sessions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Opening keynote: Platform Engineering at The New York Times&lt;/li&gt;
&lt;li&gt;Lightning talk: Introducing the CNCF's Blind and Visually Impaired Working Group&lt;/li&gt;
&lt;li&gt;A Lightboard Talk About Platform Engineering&lt;/li&gt;
&lt;li&gt;A Lightning Talk Introducing the CNCF Cloud Native AI Working Group&lt;/li&gt;
&lt;li&gt;An End User Panel: Avoiding Big Mistakes - Navigating Platform Engineering Challenges&lt;/li&gt;
&lt;li&gt;Speed Up Security: Unleash DevOps Productivity with Zero Trust Network Access&lt;/li&gt;
&lt;li&gt;Mastering Kubernetes: Strategies for Optimal Performance and Scalability&lt;/li&gt;
&lt;li&gt;Fail Smarter, Fix Faster: Gen AI's Role in Streamlining Build Debugging &lt;/li&gt;
&lt;li&gt;Closing keynote: Accelerating Developer Velocity at Intuit: Our Platform Engineering Journey with GenAI and Argo&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Block your calendars for these Intuit sessions
&lt;/h2&gt;

&lt;p&gt;The team at Intuit was thrilled to be asked to deliver the closing keynote, and to share the many lessons we’ve learned while using GenAI to unlock developer potential. Here are some other sessions you can expect from the Intuit team: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Simplifying Kubernetes for Developers - a talk that will give our end-user perspective on using Kubernetes. Learn why abstracting away networking and compute infrastructure, and utilizing AI-powered application autoscaling, enables developers to focus more on what they excel at: writing code. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An end user panel covering the do’s and don'ts we’ve learned while building our AI-driven platform. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hear how Argo and other open source technologies can be integrated to build your AI-native application platform, increasing developer velocity by 25%. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Don't miss out!
&lt;/h2&gt;

&lt;p&gt;It’s virtual. It’s free! And it’s for a great cause! Register today and join us on October 9th for a day of learning, networking, and staying ahead of the curve in platform engineering.&lt;/p&gt;

&lt;p&gt;See you there!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>Examining Approaches and Patterns for Debuggability: Ephemeral Containers and Argo Workflows</title>
      <dc:creator>Anusha Ragunathan</dc:creator>
      <pubDate>Tue, 01 Oct 2024 21:25:00 +0000</pubDate>
      <link>https://forem.com/intuitdev/examining-approaches-and-patterns-for-debuggability-ephemeral-containers-and-argo-workflows-3224</link>
      <guid>https://forem.com/intuitdev/examining-approaches-and-patterns-for-debuggability-ephemeral-containers-and-argo-workflows-3224</guid>
      <description>&lt;p&gt;&lt;em&gt;This blog is co-authored by &lt;/em&gt;&lt;a href="https://medium.com/@anusharagunathan" rel="noopener noreferrer"&gt;&lt;em&gt;Anusha Ragunathan &lt;/em&gt;&lt;/a&gt;&lt;em&gt;, &lt;/em&gt;&lt;a href="https://medium.com/@kevdowney" rel="noopener noreferrer"&gt;&lt;em&gt;Kevin Downey &lt;/em&gt;&lt;/a&gt;&lt;em&gt;and &lt;/em&gt;&lt;a href="https://medium.com/@jasonmjohl" rel="noopener noreferrer"&gt;&lt;em&gt;Jason Johl &lt;/em&gt;&lt;/a&gt;&lt;em&gt;and is part of a series by the Intuit platform engineering team examining approaches and patterns for debuggability:&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/numaproj/reduce-alert-fatigue-with-cluster-golden-signals-2e88afc3651b" rel="noopener noreferrer"&gt;&lt;em&gt;Reduce alert fatigue with cluster golden signals&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/intuit-engineering/genai-experiments-monitoring-and-debugging-kubernetes-cluster-health-e8597454a85c" rel="noopener noreferrer"&gt;&lt;em&gt;GenAI Experiments: Monitoring and Debugging Kubernetes Cluster Health&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Intuit operates at vast scale: 325+ Kubernetes clusters encompassing 28,000 namespaces that serve around 2,500 production services and a multitude of pre-production services. This colossal infrastructure supports a development community of 8,000 engineers across 1,000 development teams. Consequently, debugging distributed systems at such a scale is a non-trivial task.&lt;/p&gt;
&lt;br&gt;
&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2APsi3DqOhohWNJimV"&gt;&lt;p&gt;In this blog, we will explore the journey behind building a “Debuggability Paved Road” at Intuit. &lt;a href="https://learning.oreilly.com/videos/oscon-2017/9781491976227/9781491976227-video306724/" rel="noopener noreferrer"&gt;Paved Roads&lt;/a&gt; are a concept in the platform engineering space that define the common set of standardized tools, practices, and processes to streamline development workflows. Our endeavor was to create a standardized debugging experience across our development ecosystem to improve developer efficiency and reduce mean time to resolution (MTTR).&lt;/p&gt;
&lt;p&gt;We’ll delve into the background of our infrastructure, the challenges we faced, and the two solutions we implemented:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Interactive debugging using ephemeral containers&lt;/li&gt;
&lt;li&gt;One-click debugging with Argo Workflows&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Finally, we’ll share our key learnings and takeaways for others looking to tackle similar challenges.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The problem: bugs&lt;/strong&gt;&lt;/p&gt;
&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F916%2F0%2ANX9eVoeI_cmMwTVh"&gt;&lt;em&gt;Everyone, look under your seat&lt;/em&gt;&lt;p&gt;With a development community as extensive as ours, one thing remains constant — bugs. These can range from esoteric 5xx errors to transient timeouts, and debugging distributed systems is notoriously challenging. Traditional methods like metrics, logs, and distributed traces often fall short when deeper debugging and observability are required. Whether it’s running a memory profiler, capturing network packets, or deploying custom debugging tools, friction in the debugging process increases MTTR (mean time to resolution) and can negatively impact business-critical applications and end customers.&lt;/p&gt;
&lt;p&gt;We identified three primary challenges impeding efficient debugging at Intuit:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Debug Tooling Setup: &lt;/strong&gt;Most developers have a set of favorite debugging tools, but security best practices harden app container images, preventing direct access to shells, package managers, or any sort of debug tooling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Abstraction:&lt;/strong&gt; Platform abstractions, although simplifying deployment, abstract away Kubernetes complexities, posing a challenge when developers need to debug their applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fragmented Expertise&lt;/strong&gt;: Debugging expertise was siloed, with experts in Java, networking, or Linux not cross-pollinating their knowledge across teams, causing inconsistent and prolonged issue resolution timelines.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To address these challenges, we aimed to create a Debuggability Paved Road, extending the concept of a developer paved road — which usually focuses on best practices and guidelines for development and deployment. First we tackled the problem of creating an automated workflow using a built-in feature of Kubernetes — ephemeral containers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Interactive Debugging Using Ephemeral Containers&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/" rel="noopener noreferrer"&gt;Ephemeral containers&lt;/a&gt; were introduced in Kubernetes 1.23 as a beta feature and became generally available in version 1.25. Unlike traditional containers, ephemeral containers can be added to a running pod for debugging purposes without restarting it. They share the same process, IPC (inter-process communication), and UTS (UNIX time-sharing) namespaces with the main application container, making them ideal for diagnostics.&lt;/p&gt;
&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2ASsCjV8c40nFaVhuO"&gt;&lt;em&gt;Ephemeral containers were added to Kubernetes in 1.23 and onwards.&lt;/em&gt;&lt;p&gt;We used ephemeral containers to develop a workflow that allows service personas to easily launch a debug shell by selecting their workspace, region, and pod name. Upon clicking “Connect,” an ephemeral container is launched in the pod, targeting the main application container, thereby enabling deep inspection.&lt;/p&gt;
&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F700%2F0%2AD7rEhplOYgN0NgRd"&gt;&lt;em&gt;One-click debugging experience to spin up a debug shell for any service at Intuit&lt;/em&gt;&lt;p&gt;Here’s an example workflow and how it works in the backend:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User Request: &lt;/strong&gt;The service persona requests a debug shell specifying the namespace, region, and pod name through the UI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Cluster Orchestrator:&lt;/strong&gt; Our orchestrator receives this request, authenticates it, and uses Kubernetes API (client-go) to launch an ephemeral container in the targeted pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bidirectional Streaming:&lt;/strong&gt; A secure HTTP connection is upgraded to a WebSocket, providing the user with a debug shell. All interactions are streamed back and forth between the debug shell and the ephemeral container.&lt;/li&gt;
&lt;/ul&gt;
&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AXs8ov-Fn1i9aoa2E"&gt;&lt;em&gt;Debug shell request and response flow&lt;/em&gt;&lt;p&gt;We also ensured our debug container image included a comprehensive toolbox, from general Linux debugging tools to language-specific packages. Security measures, such as session timeouts, RBAC controls, OPA policies, and thorough auditing, further fortify our solution.&lt;/p&gt;
&lt;p&gt;One surprise we found is that the debug shell is not just used during incidents, but also during service onboarding. Use cases like testing database connectivity from services, checking secrets configuration, and verifying connectivity to itself and downstream services are a few of the many popular ways we have seen it used by our developer community.&lt;/p&gt;
&lt;p&gt;Once we had our debug shell workflow defined, we tackled the next problem: a workflow engine for language-specific debugging.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;One-Click Debugging with Argo Workflows&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We knew there was a universal need for developers to debug running code, but often the specific debugging techniques and tools varied quite a bit depending on the language and framework. Intuit primarily uses Java Spring for web services, though we often see services in Python, Golang, and others. We knew we needed to customize the debuggability experience per language to provide a good user experience. For purposes of this blog, we will take examples from our Java debugging workflows, but we have built debugging workflows for Python and Golang as well. Let us know in the comments if you’re interested in learning more about those workflows!&lt;/p&gt;
&lt;p&gt;For our Java developers, we implemented the ability to take thread or heap dumps via a simplified UI, select the target environment and pod, and trigger a workflow. The workflow captures the required information, sanitizes it to mask any sensitive data, and uploads it to S3 for download. This approach automates complex debugging actions and provides auditable, downloadable artifacts for further analysis.&lt;/p&gt;
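&lt;p&gt;For example (a sketch with placeholder host and port, and assuming the actuator endpoints are exposed), a thread dump can be pulled from a Java Spring Boot service like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Capture a thread dump from the Spring Boot actuator and save it locally.
curl -s -H "Accept: application/json" -o threaddump.json http://localhost:8080/actuator/threaddump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;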
&lt;p&gt;A debug workflow involves:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Triggering a Workflow:&lt;/strong&gt; From the UI, the developer selects an action (e.g., take a thread dump) and the relevant pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Cluster Orchestrator: &lt;/strong&gt;The orchestrator determines the target cluster and namespace for the relevant pod and prepares to launch an Argo workflow in a privileged namespace of the target cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Executing the Workflow:&lt;/strong&gt; The workflow is launched in the debug namespace and captures the required data by hitting the application’s &lt;a href="https://docs.spring.io/spring-boot/3.3-SNAPSHOT/reference/actuator/endpoints.html#page-title" rel="noopener noreferrer"&gt;Java Spring Boot actuator &lt;/a&gt;endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sanitizing and Uploading:&lt;/strong&gt; The payload is sanitized and uploaded to S3. The developer gets a pre-signed URL to download the artifact.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyzing the Data:&lt;/strong&gt; The developer can analyze the downloaded thread or heap dump using their preferred tools, such as yCrash.&lt;/li&gt;
&lt;/ul&gt;
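&lt;p&gt;The sanitize-before-upload step can be sketched as follows (the patterns and function name are illustrative, not Intuit’s actual implementation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import re

# Illustrative patterns; a production ruleset would be far more thorough.
SENSITIVE_PATTERNS = [
    (re.compile(r"(?i)(authorization:\s*bearer\s+)\S+"), r"\1[REDACTED]"),
    (re.compile(r"(?i)(password|secret|token)=\S+"), r"\1=[REDACTED]"),
]

def sanitize_dump(text):
    """Mask values that look like credentials before the dump leaves the cluster."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;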
&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AVgya5WgRi975yBAN"&gt;&lt;em&gt;Debugging workflow architecture&lt;/em&gt;&lt;p&gt;With the combination of the ephemeral containers for debug shells and ability to spin up these workflows on demand with Argo Workflows, we were able to realize our goal of a consistent Debuggability Paved Road for all Intuit services. From our development portal, any Intuit engineer at any level of expertise can debug their service without needing to set up complex local debug tooling.&lt;/p&gt;
&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A0B8pfs-2Vvl7Anxo"&gt;&lt;em&gt;Debuggability UI that an Intuit engineer sees for debugging a Java service. They can take a thread or heap dump of the application with one simple click.&lt;/em&gt;&lt;p&gt;Finally, we also wanted to ensure the stability of the underlying infrastructure during debugging. By incorporating managed endpoints and network policies, we ensure that debugging actions do not compromise the integrity and performance of the application. For example, this helps guard against crashes in the case of taking a heap dump during a memory leak or otherwise overloaded applications. Intuit engineers can debug without fear of negatively impacting the underlying infrastructure.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Takeaways and Next Steps&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The journey of building a Debuggability Paved Road at Intuit has been enlightening and full of learnings. By leveraging ephemeral containers and Argo Workflows, we’ve managed to enhance our debugging capabilities, reduce MTTR, and boost developer productivity. We came away with a few larger takeaways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Developer Velocity and Reduced MTTR&lt;/strong&gt;: Streamlining debugging actions with tools like ephemeral containers and automated workflows accelerates issue resolution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated and Secure Sensitive Data Access:&lt;/strong&gt; Implement efficient audit mechanisms, RBAC controls, and session management to protect sensitive debugging data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Facilitate Seamless Collaboration:&lt;/strong&gt; Provide consistent, auditable workflows that enable all team members to contribute to debugging efforts effectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Democratize Tooling:&lt;/strong&gt; Ensure all developers have access to the tools needed to resolve issues, breaking down silos of expertise.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By establishing a Debuggability Paved Road, we’ve been able to uphold our commitment to improving developer experiences, maintaining security, and ensuring the reliability of our services. We hope our experience and solutions inspire more efficient and secure debugging processes at other organizations facing similar challenges in debugging distributed systems at scale.&lt;/p&gt;
&lt;p&gt;Stay tuned here for more updates on our Intuit development platform journey!&lt;/p&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedium.com%2F_%2Fstat%3Fevent%3Dpost.clientViewed%26referrerSource%3Dfull_rss%26postId%3Debd0c415a4ad" alt=""&gt;
&lt;p&gt;&lt;a href="https://medium.com/intuit-engineering/intuits-debuggability-paved-road-a-look-at-ephemeral-containers-and-argo-workflows-ebd0c415a4ad" rel="noopener noreferrer"&gt;Intuit’s Debuggability Paved Road: A Look at Ephemeral Containers and Argo Workflows&lt;/a&gt; was originally published in &lt;a href="https://medium.com/intuit-engineering" rel="noopener noreferrer"&gt;Intuit Engineering&lt;/a&gt; on Medium, where people are continuing the conversation by highlighting and responding to this story.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>argoworkflows</category>
      <category>devops</category>
    </item>
    <item>
      <title>Migrating CI/CD from Jenkins to Argo</title>
      <dc:creator>Bertrand</dc:creator>
      <pubDate>Tue, 17 Sep 2024 18:41:37 +0000</pubDate>
      <link>https://forem.com/intuitdev/migrating-cicd-from-jenkins-to-argo-1km4</link>
      <guid>https://forem.com/intuitdev/migrating-cicd-from-jenkins-to-argo-1km4</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.linkedin.com/in/bquenin/" rel="noopener noreferrer"&gt;Bertrand Quenin&lt;/a&gt; is a Staff Software Engineer at Intuit and &lt;a href="https://www.linkedin.com/in/caelan-urquhart/" rel="noopener noreferrer"&gt;Caelan Urquhart&lt;/a&gt; is the co-founder of Pipekit. This article is based on a presentation Caelan and Bertrand gave on November 6, 2023 at ArgoCon North America 2023. You can watch the entire session video &lt;a href="https://youtu.be/kKLIuAEc5Zw?si=sWMXjZS-rPQNN8G_" rel="noopener noreferrer"&gt;here&lt;/a&gt;. For more information on this session, you can read the &lt;a href="https://colocatedeventsna2023.sched.com/event/1SJQJ/migrating-cicd-from-jenkins-to-argo-bertrand-quenin-intuit-caelan-urquhart-pipekitio" rel="noopener noreferrer"&gt;event details&lt;/a&gt; or download the &lt;a href="https://docs.google.com/presentation/d/1nFIl-lsZSVSAKKLYOwuwN8KKn7qtdIkfUKYrj0KQPDA/edit?usp=sharing" rel="noopener noreferrer"&gt;slide deck&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This post is the first in a two-part series that looks at scaling cloud-native CI/CD with Jenkins and Argo. In this first post, we focus on &lt;a href="https://argoproj.github.io/workflows/" rel="noopener noreferrer"&gt;Argo Workflows&lt;/a&gt; and the CI side of the pipeline. In Part Two, we’ll look at how to use &lt;a href="https://argoproj.github.io/cd/" rel="noopener noreferrer"&gt;Argo CD&lt;/a&gt; for the CD side of the pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Here in Part One, we will:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Help you understand the challenges of running Jenkins on top of Kubernetes at scale.&lt;/li&gt;
&lt;li&gt;Show you how to use Argo Workflows alongside Argo CD to run your CI/CD pipelines. We'll cover Jenkins and Argo Workflows, see how they map to each other, and look briefly at an example that shows the difference between the two.&lt;/li&gt;
&lt;li&gt;Look at key considerations if you decide to migrate from Jenkins to Argo Workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s start by introducing Intuit. Intuit’s goal is to empower our customers to make the best financial decisions using our AI-driven platform. What does CI/CD at Intuit look like? We are currently running Jenkins on top of Kubernetes at scale. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We have nearly 6,000 developers running 100,000 build jobs daily.&lt;/li&gt;
&lt;li&gt;To support this, we run a Kubernetes cluster with about 150 nodes, on which we run about 200 Jenkins controllers.&lt;/li&gt;
&lt;li&gt;We run between 1,000 and 1,500 build agents at any given point in time to execute those builds.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  The Challenges of Running Jenkins on Kubernetes at Scale
&lt;/h1&gt;

&lt;h2&gt;
  
  
  At Intuit
&lt;/h2&gt;

&lt;p&gt;Working with Jenkins at Intuit has been successful but also has its challenges. One of the most common complaints we get is that it’s hard to figure out what's going on when a build fails. The UI is not very easy to use, and it can be slow. There is definitely room for improvement on the &lt;strong&gt;user experience&lt;/strong&gt; side.&lt;/p&gt;

&lt;p&gt;What about operational considerations, such as &lt;strong&gt;high availability and disaster recovery&lt;/strong&gt;? We use the open-source version of Jenkins, which doesn't come with those features built in. So, we had to implement our own. Unfortunately, for the big Jenkins servers, it can take up to an hour to fail over one Jenkins server to another region. This definitely doesn’t meet our SLAs.&lt;/p&gt;

&lt;p&gt;There is &lt;strong&gt;no unified control plane&lt;/strong&gt; with Jenkins. We’re running about 200 Jenkins servers. Even though we’ve automated as much as possible, what happens every time we need to roll out a new Jenkins upgrade or a new plugin upgrade? It's a tedious task because we have 200 servers that we need to take care of like pets.&lt;/p&gt;

&lt;p&gt;When it comes to cost and efficiency, Jenkins is not a cloud-native product. So, to run on top of Kubernetes, the execution model that was adopted was to have one pod per build, and this pod will have multiple containers. However, this pod and its containers will run for the whole duration of your build. This means that when the build is idle—such as waiting for user input to proceed to the next stage—the pod and its containers continue running, &lt;strong&gt;wasting cluster resources&lt;/strong&gt;.&lt;/p&gt;
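&lt;p&gt;To make this concrete, here is a minimal sketch of what a &lt;code&gt;KubernetesPods.yaml&lt;/code&gt; for a Jenkins Kubernetes agent might look like (container names, images, and sizes are illustrative, not from our actual setup). Every container in the pod starts when the build starts and keeps running until the build finishes, even while the build is idle.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical pod spec for a Jenkins Kubernetes agent.
# All containers run for the entire duration of the build.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: podman          # build container, idle between build steps
      image: quay.io/podman/stable
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
    - name: test            # test container, also held for the whole build
      image: maven:3-eclipse-temurin-17
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;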

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog3xji5939uwdtbgr0ri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog3xji5939uwdtbgr0ri.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  At Pipekit
&lt;/h2&gt;

&lt;p&gt;At Pipekit, we face similar challenges. As a startup, our focus is on having a lean and adaptable CI approach. Since Pipekit is a control plane for Argo Workflows, our value proposition is delivering a centralized place for customers to manage their workflows and any tools or integrations that they plug into those workflows. We manage multi-cluster workflows and even integrate with several SSO providers for RBAC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyhdjhurkxug5yvo44rd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyhdjhurkxug5yvo44rd.png" alt="Image description" width="800" height="745"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we've shipped more and more integrations and features, our CI quickly expanded. We wanted lean and adaptable CI pipelines. We wanted the ability to iterate and remix pipelines easily. We wanted to autoscale to minimize our costs as a startup. From a maintenance standpoint, we wanted a CI approach that was a bit more “set it and forget it”, reducing the amount of work we were doing to tune our Jenkins pipelines. Finally, since we were going to deploy with Argo CD, we wanted a tool that would easily integrate with rollouts and events.&lt;/p&gt;

&lt;p&gt;With Jenkins, the challenges at Pipekit were similar to those at Intuit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmomrikwee56jn42kiy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmomrikwee56jn42kiy1.png" alt="Image description" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our builds were running too long, with CI and test pipelines taking quite a while for each PR, slowing down our team. We wanted to get PRs reviewed faster.&lt;/p&gt;

&lt;p&gt;Also, because we were running all our containers in a single pod, the size of some of our pipelines was limited, and cloud resources were wasted. We didn't feel like we could fully leverage spot nodes to drive down costs.&lt;/p&gt;

&lt;p&gt;Finally, although getting started with plugins was easy, the maintenance costs increased over time. Whenever we ran into an issue with our pipeline, we had to deal with trying to figure out which plugin caused it or whether a plugin update had a security vulnerability. All of these complexities started to add up.&lt;/p&gt;

&lt;p&gt;As a team, we were already using Argo Workflows for data processing and other infrastructure automation. So, we asked: What could we accomplish by using Argo Workflows as our CI engine?&lt;/p&gt;

&lt;h1&gt;
  
  
  Why Argo Workflows for CI
&lt;/h1&gt;

&lt;p&gt;At Pipekit, we took a hard look at what Argo has to offer and what we needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd09uhkwzqo4bk2j2h51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd09uhkwzqo4bk2j2h51.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, the big benefits stemmed from running each step in a pod—by default. That unlocked some downstream benefits, like dynamically provisioning resources for each of the steps in our pipeline. This was a big win for us. We could get more granular with each pipeline step, provisioning the right resources and autoscaling it down once it's done. If a build is waiting on someone for approval, we can just spin that down until their approval is there, and then spin up a pod to complete the pipeline.&lt;/p&gt;

&lt;p&gt;The other significant benefit we had with Argo was parallelism by default. We could define dependencies whenever they exist throughout the pipeline, and Argo would automatically run steps that don't have dependencies and run in parallel. That helped us speed up our pipelines without much effort. If we were to do this in Jenkins, we would have to be a bit more prescriptive about where we would use parallelism; and if you change your mind about that down the road, you end up with some tech debt you’ll need to refactor. With Argo, we just declare dependencies whenever we see them, and Argo runs it as it sees fit.&lt;/p&gt;
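&lt;p&gt;A minimal sketch of that declarative style (task and template names are illustrative): &lt;code&gt;lint&lt;/code&gt; and &lt;code&gt;unit-test&lt;/code&gt; declare no dependency on each other, so Argo runs them in parallel automatically, while &lt;code&gt;package&lt;/code&gt; waits for both.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: main
  dag:
    tasks:
      - name: lint          # no depends: runs immediately
        template: lint
      - name: unit-test     # no depends: runs in parallel with lint
        template: unit-test
      - name: package
        template: package
        depends: lint &amp;amp;&amp;amp; unit-test   # runs once both finish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;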

&lt;p&gt;On the maintenance side, Argo was lightweight to deploy, as it's just another Kubernetes resource on our cluster. Without so many plugin dependencies, it was a lot easier to maintain.&lt;/p&gt;

&lt;p&gt;Of course, being in the Argo ecosystem was a benefit, as we wanted to transition seamlessly into deployment or running rollouts.&lt;/p&gt;

&lt;p&gt;Finally, not everybody on the Pipekit team was familiar with Groovy and writing Jenkins pipelines. So, it helped to have a tool that we could write with YAML. Also, Python developers could use the &lt;a href="https://github.com/argoproj-labs/hera" rel="noopener noreferrer"&gt;Hera SDK&lt;/a&gt; to spin up CI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration Considerations
&lt;/h2&gt;

&lt;p&gt;This brings us to consider the pros and cons of Jenkins and Argo Workflows for CI.&lt;/p&gt;

&lt;p&gt;We’d still like to give Jenkins its due. It's been around for well over a decade, and the community is really strong. There are a lot of great resources out there, so getting questions answered can be very quick. Argo has a strong community now, but there's still not as much of that documentation online.&lt;/p&gt;

&lt;p&gt;From a UI/UX standpoint, Jenkins is great. It’s built for a CI experience. We were used to some of those primitives, whereas Argo Workflows is more generic. We were also using Argo Workflows for data and other infrastructure automation. If we were going to migrate, we would encounter a UX difference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrv8curjnybzq8g3omu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrv8curjnybzq8g3omu4.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For us at Pipekit, we felt like migrating to Argo led to great autoscaling and parallelism benefits. Granted, we needed to think about how we would pass objects between steps in our pipelines. However, that extra effort in the beginning—to figure that out by using volumes or artifacts—ends up benefiting you, as you can achieve better scale and efficiency in the pipeline. &lt;/p&gt;

&lt;h2&gt;
  
  
  Mapping between Jenkins and Argo Workflows
&lt;/h2&gt;

&lt;p&gt;Before we dive into an example, let’s briefly cover how a Jenkins pipeline maps to an Argo Workflows pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vy36q2ro297qlldvitb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vy36q2ro297qlldvitb.png" alt="Image description" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To start, the Jenkinsfile (Groovy) maps to the Argo Workflow definition, which is written in YAML, or in Python if you use one of the SDKs like Hera.&lt;/p&gt;

&lt;p&gt;A step in Jenkins maps to a task in Argo.&lt;/p&gt;

&lt;p&gt;A stage in Jenkins maps to a template in Argo. Templates come in different flavors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The DAG template, which is the most popular.&lt;/li&gt;
&lt;li&gt;A steps template, which declares a sequence of linear steps.&lt;/li&gt;
&lt;li&gt;A script template, where you can pass a quick script (for example, in Python) to be run as part of a step in the pipeline.&lt;/li&gt;
&lt;/ul&gt;
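&lt;p&gt;For example, a script template might look like this minimal sketch (the template name and the Python snippet are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: quick-check
  script:
    image: python:3.12
    command: [python]
    source: |                      # inline script run as a pipeline step
      import json
      print(json.dumps({"status": "ok"}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;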

&lt;p&gt;The shared library in Jenkins maps well to what's called &lt;a href="https://argo-workflows.readthedocs.io/en/latest/workflow-templates/" rel="noopener noreferrer"&gt;WorkflowTemplate&lt;/a&gt; in Argo Workflows. With this, we can parameterize and remix our pipelines better than we could with Jenkins.&lt;/p&gt;
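&lt;p&gt;As a rough sketch, a &lt;code&gt;WorkflowTemplate&lt;/code&gt; is defined once on the cluster and then referenced from any workflow via &lt;code&gt;templateRef&lt;/code&gt; (the template below is illustrative, not the one from our repository):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: git-checkout
spec:
  templates:
    - name: main
      inputs:
        parameters:
          - name: repo
          - name: branch
            value: main            # default, can be overridden per pipeline
      container:
        image: alpine/git          # image entrypoint is git
        args: ["clone", "--branch", "{{inputs.parameters.branch}}",
               "{{inputs.parameters.repo}}", "/workdir/src"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;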

&lt;p&gt;With Jenkins plugins, there isn’t much of a one-to-one mapping to Argo. Yes, there are Argo Workflows plugins to be aware of, but they're not built like Jenkins plugins. Argo Workflows does have exit handlers, which we can use to integrate with third-party tools.&lt;/p&gt;
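&lt;p&gt;A minimal sketch of an exit handler (the notification image and webhook URL are placeholders): the &lt;code&gt;onExit&lt;/code&gt; template runs after the workflow finishes, whatever the outcome, which makes it a natural hook for third-party integrations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  entrypoint: main
  onExit: notify                   # runs on success and on failure
  templates:
    - name: notify
      container:
        image: curlimages/curl     # image entrypoint is curl
        args: ["-X", "POST", "-d",
               "{\"text\": \"pipeline finished: {{workflow.status}}\"}",
               "https://hooks.example.com/ci"]   # placeholder webhook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;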
&lt;h2&gt;
  
  
  A CI/CD Pipeline Example
&lt;/h2&gt;

&lt;p&gt;Now, let’s demonstrate what a standard CI/CD pipeline can look like in Jenkins and in Argo Workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm41nmbswa886oglnjzzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm41nmbswa886oglnjzzn.png" alt="Image description" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our example is fairly straightforward. It starts with building a container. Then, you publish that container to your container registry. That stage has three steps, which can run in parallel:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Publish test coverage&lt;/li&gt;
&lt;li&gt;Run security scans on the container&lt;/li&gt;
&lt;li&gt;Perform static code analysis&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After these are completed, this is where you usually want to deploy. At Intuit, we like to keep track of our deployments. So, we would first create a Jira ticket for that deployment. Then, we would deploy the new container using Argo CD. If that is successful, then we close the Jira ticket.&lt;/p&gt;

&lt;p&gt;What does this look like concretely in Jenkins, and then in an Argo Workflow? Let’s compare by looking at some code snippets from &lt;a href="https://github.com/pipekit/talk-demos/tree/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo" rel="noopener noreferrer"&gt;this repository&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Jenkinsfile
&lt;/h3&gt;

&lt;p&gt;We’ll start by looking at this &lt;a href="https://github.com/pipekit/talk-demos/blob/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Jenkinsfile" rel="noopener noreferrer"&gt;Jenkinsfile&lt;/a&gt;. The first line references the shared library that we're going to use along that pipeline.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;@Library(value = 'cicd-shared-lib', changelog = false) _&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the first section here, we instruct Jenkins where to run the agent. In this case, it's a Kubernetes agent, and this is typically where you would define your pod specification. This relates to what we mentioned above: multiple containers run for the whole duration of your build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;agent {
    kubernetes {
        yamlFile 'KubernetesPods.yaml'
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Through the rest of the Jenkinsfile, we define our stages, and stages can be nested. Notice that there's no need to specify a &lt;code&gt;git clone&lt;/code&gt; or &lt;code&gt;git checkout&lt;/code&gt;—it's already part of your Jenkins pipeline. It's been set up for you.&lt;/p&gt;

&lt;p&gt;In our &lt;code&gt;stage steps&lt;/code&gt;, we have functions such as &lt;code&gt;podmanBuild&lt;/code&gt; or &lt;code&gt;podmanMount&lt;/code&gt;. These functions will be defined in the shared library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps {
    container('podman') {
        podmanBuild("--rm=false --build-arg=\"build=${env.BUILD_URL}\" --build-arg appVersion=${config.version} -t ${config.image_full_name} .")
        podmanMount(podmanFindImage([image: 'build', build: env.BUILD_URL]), {steps,mount -&amp;gt;
            sh(label: 'copy outputs to workspace', script: "cp -r ${mount}/usr/src/app/target ${env.WORKSPACE}")
        })
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the first stage (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Jenkinsfile#L12" rel="noopener noreferrer"&gt;lines 12-19&lt;/a&gt;), we are basically building the container. We invoke the &lt;code&gt;podmanBuild&lt;/code&gt; and &lt;code&gt;podmanMount&lt;/code&gt; functions. The &lt;code&gt;podmanMount&lt;/code&gt; call (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Jenkinsfile#L16" rel="noopener noreferrer"&gt;line 16&lt;/a&gt;) is just to extract the test coverage. A nice thing about Jenkins is that all the files in your workspace are available to every step of your pipeline. These files are going to be reused later.&lt;/p&gt;

&lt;p&gt;In the next stage (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Jenkinsfile#L23" rel="noopener noreferrer"&gt;lines 23-30&lt;/a&gt;), we publish our image, again using &lt;code&gt;podman&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, we have a “Container Checks” stage (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Jenkinsfile#L32" rel="noopener noreferrer"&gt;lines 32-58&lt;/a&gt;). In the case of Jenkins, if you want to run something in parallel, it must be explicit. It looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Container Checks') {
    parallel {
        stage('Report Coverage &amp;amp; Unit Test Results') {
            steps {
                junit '**/surefire-reports/**/*.xml'
                jacoco()
                codeCov(config)
            }
        }

        stage('Security Scan') {
            steps {
                container('cpd2') {
                    intuitCPD2Podman(config, "-i ${config.image_full_name} --buildfile Dockerfile")
                }
            }
        }

        stage('Static Code Analysis') {
            steps {
                container('test') {
                    echo 'Running static Code analysis: from JenkinsFile'
                    reportSonarQube(config)
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have a &lt;code&gt;parallel&lt;/code&gt; section where we will actually reuse the test coverage that we have extracted from the &lt;code&gt;podman&lt;/code&gt; call in line 16. They're still in the workspace, and we can use them here. Then, we run some security scans, and then some static code analysis.&lt;/p&gt;

&lt;p&gt;Finally, we have the deployment stage (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Jenkinsfile#L63" rel="noopener noreferrer"&gt;lines 63-85&lt;/a&gt;) where we start by creating a Jira ticket. Then, we deploy using Argo CD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Deploy') {
    steps {
        container('cdtools') {
            gitOpsDeploy(config, 'qa-usw2', config.image_full_name)
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use a &lt;code&gt;gitOpsDeploy&lt;/code&gt; function, which uses Argo CD under the hood, to deploy to our QA environment with the new image name.&lt;/p&gt;

&lt;p&gt;Once the deployment is completed, we close the Jira ticket.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Argo Workflow
&lt;/h2&gt;

&lt;p&gt;For the Argo Workflow, let’s look at &lt;a href="https://github.com/pipekit/talk-demos/blob/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml" rel="noopener noreferrer"&gt;argo-workflow.yaml&lt;/a&gt;, breaking it out into a few sections.&lt;/p&gt;

&lt;p&gt;First, we have all of the arguments that we want to pass through the workflow (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L7" rel="noopener noreferrer"&gt;lines 7-20&lt;/a&gt;), including the Git branch, our container tag, Jira ticket number, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parameters:
  - name: app_repo
    value: ""
  - name: git_branch
    value: ""
  - name: target_branch
    value: ""
  - name: is_pr
    value: ""
  - name: container_tag
    value: ""
  - name: jira_ticket_number
    value: ""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we have the volume claim (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L24" rel="noopener noreferrer"&gt;lines 24-32&lt;/a&gt;), since we need to be a bit more intentional about setting up object passing with our workflow. We define the directory on the cluster where we’ll store objects as we pass them between steps. Then, we set the access mode to &lt;code&gt;ReadWriteMany&lt;/code&gt; so that we can enable parallel read-writes throughout the workflow and achieve a higher level of parallelism as we run our pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumeClaimTemplates:
  - metadata:
      name: workdir
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: nfs
      resources:
        requests:
          storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When it comes to the actual pipeline and how this maps compared to Jenkins, this is the &lt;code&gt;templates&lt;/code&gt; section (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L35" rel="noopener noreferrer"&gt;lines 35-115&lt;/a&gt;), where our DAG lives and the pipeline is defined. Here, we set up a DAG template with several tasks.&lt;/p&gt;

&lt;p&gt;We use what's called the &lt;code&gt;templateRef&lt;/code&gt; to reference &lt;code&gt;WorkflowTemplates&lt;/code&gt;. If we've applied all those &lt;code&gt;WorkflowTemplates&lt;/code&gt; to our cluster, workflows will automatically reference those. We have a &lt;a href="https://github.com/pipekit/talk-demos/tree/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Argo%20WorkflowTemplates" rel="noopener noreferrer"&gt;directory with our Argo Workflow templates&lt;/a&gt;, where we have everything defined, and we simply reference that in our workflow manifest in this definition. So, for example, in our DAG, we define a &lt;code&gt;git-checkout&lt;/code&gt; task and a &lt;code&gt;get-git-info&lt;/code&gt; task to get our SHAs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: git-checkout
  templateRef:
    name: git-checkout
    template: main
- name: get-git-info
  templateRef:
    name: get-git
    template: main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These each reference a &lt;code&gt;templateRef&lt;/code&gt;, which we have defined in our Argo Workflow templates directory. For example, the &lt;code&gt;get-git templateRef&lt;/code&gt; is defined in this &lt;a href="https://github.com/pipekit/talk-demos/blob/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Argo%20WorkflowTemplates/get-git.yml" rel="noopener noreferrer"&gt;workflow template&lt;/a&gt;. This approach makes it much easier to iterate on a new pipeline by just referring to the workflow templates and passing in different parameters depending on what that pipeline needs.&lt;/p&gt;

&lt;p&gt;Moving on to the &lt;code&gt;container-build&lt;/code&gt; task, we see that it's also using a workflow template, and it depends on the &lt;code&gt;git-checkout&lt;/code&gt; and &lt;code&gt;get-git-info&lt;/code&gt; tasks to run. That's how we declare the shape of the pipeline and the order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: container-build
  arguments:
    parameters:
      - name: container_tag
        value: "{{workflow.parameters.container_tag}}-{{tasks.get-git.outputs.parameters.release-version}}"
      - name: container_image
        value: "pipekit-internal/foo"
      - name: dockerfile
        value: "Dockerfile"
      - name: path
        value: "/"
  templateRef:
    name: container-build
    template: main
  depends: git-checkout &amp;amp;&amp;amp; get-git-info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a task doesn’t use that &lt;code&gt;depends&lt;/code&gt; key, then it'll just run in parallel automatically.&lt;/p&gt;

&lt;p&gt;Next, we run our unit tests (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L62" rel="noopener noreferrer"&gt;lines 62-65&lt;/a&gt;), container scan (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L67" rel="noopener noreferrer"&gt;lines 67-71&lt;/a&gt;), and code analysis (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L72" rel="noopener noreferrer"&gt;lines 72-87&lt;/a&gt;). All of these tasks ultimately depend on &lt;code&gt;git-checkout&lt;/code&gt; and &lt;code&gt;get-git-info&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, we create the Jira ticket (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L88" rel="noopener noreferrer"&gt;lines 88-92&lt;/a&gt;). Similar to previous tasks, we have an &lt;a href="https://github.com/pipekit/talk-demos/blob/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Argo%20WorkflowTemplates/update-jira.yml" rel="noopener noreferrer"&gt;update-jira workflow template&lt;/a&gt;. In that template, we define how we want to update Jira, pass in parameters (for opening, updating, closing, etc.), and use that template throughout the pipeline whenever it's needed.&lt;/p&gt;

&lt;p&gt;Then, we have our deployment task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: deploy-application-preprod
  templateRef:
    name: deploy-application
    template: deploy-application
  arguments:
    parameters:
      - name: app_type
        value: "preprod"
      - name: gh_tag
        value: "{{tasks.get-git.outputs.parameters.release-version}}"
  depends: create-jira-ticket
  when: "{{workflow.parameters.is_pr}} == true &amp;amp;&amp;amp; {{workflow.parameters.target_branch}} == master"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This, of course, points to our &lt;a href="https://github.com/pipekit/talk-demos/blob/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/Argo%20WorkflowTemplates/deploy-application.yml" rel="noopener noreferrer"&gt;deploy-application workflow template&lt;/a&gt;. We pass in our arguments here for Argo CD to use to then run the deploy. That's where we effectively have a nice, seamless integration from Argo Workflows to Argo CD in our deployment.&lt;/p&gt;

&lt;p&gt;Lastly, we wrap up by updating Jira again at the end (&lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L105" rel="noopener noreferrer"&gt;lines 105-115&lt;/a&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Optional: Workflow Metrics and Prometheus
&lt;/h3&gt;

&lt;p&gt;We also have the option to emit Prometheus metrics from our Argo Workflow in a simple, native way. We've done that (in &lt;a href="https://github.com/pipekit/talk-demos/blob/864454c245db788d3a7f28661d6964f989ac70fb/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo/jenkins-vs-argo-workflows-comparison/argo-workflow.yaml#L115" rel="noopener noreferrer"&gt;lines 117-157&lt;/a&gt;) by adding a duration metric for how long our pipeline runs, as well as counters for successful and failed runs.&lt;/p&gt;
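&lt;p&gt;A sketch of what those workflow-level metrics can look like (metric names here are illustrative; see the linked lines for the real definitions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metrics:
  prometheus:
    - name: ci_pipeline_duration_seconds
      help: "How long the pipeline ran"
      gauge:
        realtime: false
        value: "{{workflow.duration}}"
    - name: ci_pipeline_succeeded_total
      help: "Count of successful runs"
      when: "{{workflow.status}} == Succeeded"   # emitted only on success
      counter:
        value: "1"
    - name: ci_pipeline_failed_total
      help: "Count of failed runs"
      when: "{{workflow.status}} == Failed"      # emitted only on failure
      counter:
        value: "1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;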

&lt;h2&gt;
  
  
  Final Tips and Wrap-up
&lt;/h2&gt;

&lt;p&gt;As a reminder, take a look at the &lt;a href="https://github.com/pipekit/talk-demos/tree/main/argocon-demos/2023-migrating-cicd-from-jenkins-to-argo" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;, which has the workflow templates and structure, so you can see how Argo Workflows works for this use case. You can also check out a working example of the code to run it locally or even on your cluster.&lt;/p&gt;

&lt;p&gt;Migrating your Jenkins pipelines to Argo might seem a bit daunting, but have no fear! You don’t have to pull a sleepless weekend to make it happen. We recommend taking a piecemeal approach.&lt;/p&gt;

&lt;p&gt;At Pipekit, our approach was first to just trigger Argo jobs from Jenkins, rather than completely migrating off Jenkins all at once. This let us get a feel for how it would work and how stable it would be, and it built the buy-in and confidence to move over larger pipelines.&lt;/p&gt;

&lt;p&gt;When we did start moving over, we started with some of the simpler tasks and pipelines. That made it easier to figure out how we would want to migrate everything else—like our complex Jenkins pipelines.&lt;/p&gt;

&lt;p&gt;Other tips to use when you're migrating include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adopt workflow templates and parameterization from the very beginning.&lt;/li&gt;
&lt;li&gt;Think of each step as something you will reuse down the road; this will accelerate your migration as you become more familiar with Argo Workflows.&lt;/li&gt;
&lt;li&gt;Don't forget to tap into the Argo community.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lrqlitbphtmmylftv06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lrqlitbphtmmylftv06.png" alt="Image description" width="620" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At Pipekit, we also have &lt;a href="https://pipekit.io/signup?qr-source=argocon-jenkins-talk" rel="noopener noreferrer"&gt;a virtual cluster where you can set up a pipeline of your own&lt;/a&gt; with pre-built examples. So, you won’t need to worry about configuring your local cluster or anything like that. Just spin up a virtual cluster and run that on your own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bit.ly/intuit-oss" rel="noopener noreferrer"&gt;Intuit loves open source&lt;/a&gt; and is a major contributor to &lt;a href="https://argo-cd.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;Argo CD&lt;/a&gt; and &lt;a href="https://argoproj.github.io/rollouts/" rel="noopener noreferrer"&gt;Argo Rollouts&lt;/a&gt;, We encourage you to check out those projects as well.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cicd</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Building an AI-Powered, Paved Road Platform with Cloud-Native Open source technologies</title>
      <dc:creator>Avni Sharma</dc:creator>
      <pubDate>Mon, 09 Sep 2024 23:47:03 +0000</pubDate>
      <link>https://forem.com/intuitdev/building-an-ai-powered-paved-road-platform-with-cloud-native-open-source-technologies-4b2f</link>
      <guid>https://forem.com/intuitdev/building-an-ai-powered-paved-road-platform-with-cloud-native-open-source-technologies-4b2f</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://www.linkedin.com/in/tekenstam/" rel="noopener noreferrer"&gt;Todd Ekenstam&lt;/a&gt; is a Principal Software Engineer at Intuit, and &lt;a href="https://www.linkedin.com/in/16avnisharma/" rel="noopener noreferrer"&gt;Avni Sharma&lt;/a&gt; is a Product Manager at Intuit. This article is based on Todd and Avni's March 19, 2024 presentation at KubeCon + CloudNativeCon Europe 2024. You can watch the entire session video &lt;a href="https://www.youtube.com/watch?v=z6ItgXM4RxE" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You can read the event details or download the &lt;a href="https://static.sched.com/hosted_files/colocatedeventseu2024/03/Intuit%20-%20Building%20a%20Paved%20Road%20Platform%20%28PlatformEngineeringDay%20EU%202024%29.pdf" rel="noopener noreferrer"&gt;slide deck&lt;/a&gt; for more information on this session.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Intuit is a global fintech company that builds many financial products like QuickBooks and TurboTax. We’re an AI-driven expert platform. How much so? To give one metric, we make more than 40 million AIOps inferences per day.&lt;/p&gt;

&lt;p&gt;We also have a huge Kubernetes platform, with approximately 2,500 production services and even more pre-prod services. We have approximately 315 Kubernetes clusters and more than 16,000 namespaces.&lt;/p&gt;

&lt;p&gt;Our developer community is large, too, with around 1,000 teams and 7,000 developers at Intuit working on end-user products. The &lt;strong&gt;service developer&lt;/strong&gt; is among the personas who deal with the platform on a day-to-day basis. Service developers build the app logic for the product that is then shipped to the end user. They’re focused on code and shipping fast.&lt;/p&gt;

&lt;p&gt;Then, we have the &lt;strong&gt;platform persona&lt;/strong&gt;, or who we call platform experts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges for Developers on the Platform
&lt;/h2&gt;

&lt;p&gt;The overarching goal of platform engineering is to drive developer autonomy. Platform engineers focus on enabling service developers by providing capabilities through several interfaces. For example, if a developer needs a database, they should be able to access one—whether they’re a Node.js developer or a database administrator.&lt;/p&gt;

&lt;p&gt;Using these capabilities should be frictionless and easy. If a developer wants to deploy or manage an application on Kubernetes, they shouldn’t need to know the nitty-gritty of the platform and the infrastructure. They should be able to do it seamlessly.&lt;/p&gt;

&lt;p&gt;But what are the challenges that service developers face today?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8o7aa8ox22phk58rn8al.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8o7aa8ox22phk58rn8al.png" alt="Image description" width="704" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Steep learning curve
&lt;/h3&gt;

&lt;p&gt;Our developers often deal with many Kubernetes internals and APIs, as well as infra-related configurations, on a daily basis. When something is misconfigured, they need more troubleshooting help.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local development with dependencies
&lt;/h3&gt;

&lt;p&gt;The second friction point revolves around the developer experience. Developers need help managing environments, local testing, and a lengthy onboarding process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech refreshes need migrations
&lt;/h3&gt;

&lt;p&gt;The third challenge concerns tech refreshes that require migration. For example, if we upgrade Kubernetes or replace the CloudWatch agent with Fluentd, we must migrate deprecated APIs. These kinds of migrations require support from the service developer team.&lt;/p&gt;

&lt;p&gt;Consider a sample workflow our service developer goes through on our internal developer portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z7d0eq62nakc17g7inj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z7d0eq62nakc17g7inj.png" alt="Image description" width="512" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, the developer creates an asset on our dev portal. An asset is the atomic form of deployment on our Kubernetes layer. Next, the developer develops and deploys the app.&lt;/p&gt;

&lt;p&gt;After that, they configure specific Kubernetes primitives in their deployment repo—like PDBs, HPA configuration, or Argo rollout analysis templates. They might be onboarded to an API gateway to expose their app to the internet and onboarded to a service mesh for configuring rate limiting. Next, they have end-to-end testing of their application.&lt;/p&gt;

&lt;p&gt;Finally, if they have any performance tests to run, they perform load testing by configuring their min-max HPA. And, of course, this is all intertwined with perpetual platform migrations—quarterly—to stay up to date.&lt;/p&gt;

&lt;p&gt;This kind of workflow can undoubtedly drive your service developer crazy. They want to focus on the business logic, yet they must also deal with infrastructure concerns.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Target State
&lt;/h2&gt;

&lt;p&gt;Now that we’ve examined the challenges, let’s look at our target state and where we would like to be. We want to translate all these application needs into platform means. Service developers can focus on developing the code, deploying it seamlessly without knowing the platform’s nitty-gritty details, and then performing end-to-end testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4hxq2zng43jhtgm5o4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4hxq2zng43jhtgm5o4t.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The platform should address all other concerns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Addressing the Challenges
&lt;/h2&gt;

&lt;p&gt;To set some context, let’s describe Intuit's development platform, which we call &lt;strong&gt;Modern SaaS AIR&lt;/strong&gt;. At the top, our developer portal provides the developer experience for all of our engineers and manages an application's complete lifecycle.&lt;/p&gt;

&lt;p&gt;From there, our platform is based on these four pillars:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI-powered app experiences&lt;/li&gt;
&lt;li&gt;GenAI-assisted development&lt;/li&gt;
&lt;li&gt;App-centric runtime&lt;/li&gt;
&lt;li&gt;Smart operations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our operational data lake supports all of this. It provides a rich data set for visibility into how all our applications are developed, deployed, and run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vu141i0c9ei8r0bejud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vu141i0c9ei8r0bejud.png" alt="Image description" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  IKS AIR
&lt;/h3&gt;

&lt;p&gt;Let’s focus on the &lt;strong&gt;runtime and traffic management component we call IKS AIR&lt;/strong&gt;. IKS is Intuit’s Kubernetes Service layer.  IKS AIR is a simplified deployment and management platform for containerized applications running on Kubernetes. It provides everything an engineer needs to build, run, and scale an application. The main components of IKS AIR are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An abstracted application-centric runtime environment&lt;/li&gt;
&lt;li&gt;Unified traffic management&lt;/li&gt;
&lt;li&gt;Developer-friendly debug tools&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn2u6laro3azvgoz7ubi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn2u6laro3azvgoz7ubi.png" alt="Image description" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you build and run your own platform on Kubernetes, you likely have many of these same concerns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Application-centric runtime
&lt;/h3&gt;

&lt;p&gt;The application-centric runtime relates to two main concerns: abstracting the Kubernetes details and intelligently recommending scaling solutions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Abstraction and simplification
&lt;/h4&gt;

&lt;p&gt;The application specification abstracts and simplifies the Kubernetes details (see the left side of the diagram below) and provides an app-centric specification (see the right side of the diagram below). The platform is responsible for generating the actual Kubernetes resources submitted to the clusters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfedsjae1j7v8kvqi0z3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfedsjae1j7v8kvqi0z3.png" alt="Image description" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our application spec is heavily influenced by the &lt;a href="https://oam.dev/" rel="noopener noreferrer"&gt;Open Application Model (OAM)&lt;/a&gt;, which describes the application in terms of components that are customized or extended using traits. Components represent individual pieces of functionality, such as a web service or a worker. Traits can modify or change the behavior of a component that they're applied to.&lt;/p&gt;

&lt;p&gt;Through this system of components and traits, developers can define what their applications require from the platform—without needing a complete understanding of how it's implemented in Kubernetes.&lt;/p&gt;
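&lt;p&gt;Our application spec itself is internal, but an OAM-style application built from components and traits looks roughly like this (using KubeVela’s open-source flavor of OAM for illustration; names and values are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-web-service
spec:
  components:
    - name: web
      type: webservice        # component: one piece of functionality
      properties:
        image: example.registry/my-web-service:1.0.0
        port: 8080
      traits:
        - type: scaler        # trait: modifies the component's behavior
          properties:
            replicas: 3
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;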

&lt;p&gt;Let’s consider an example of this complexity: the progressive delivery solution utilizing &lt;a href="https://argoproj.github.io/rollouts/" rel="noopener noreferrer"&gt;Argo&lt;/a&gt; Rollouts and &lt;a href="https://numaflow.numaproj.io/" rel="noopener noreferrer"&gt;Numaflow&lt;/a&gt;, created by the platform to enable the automatic rollback of buggy code. When a new version of an application is rolled out, canary pods with that new version are first created, and then some percentage of the traffic is sent to those new pods.&lt;/p&gt;

&lt;p&gt;Numaflow pipelines analyze the metrics of those canary pods, generating an anomaly score. If the anomaly score is low, then the rollout will continue. However, if it is high—for example, above seven or eight—then Argo Rollouts will stop the deployment and automatically revert to the prior revision.&lt;/p&gt;
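&lt;p&gt;In Argo Rollouts terms, that canary flow maps onto a strategy block along these lines (a sketch; the analysis template name and step weights are hypothetical, and the anomaly scoring itself lives in the Numaflow pipeline):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      steps:
        - setWeight: 10        # send 10% of traffic to the canary pods
        - analysis:
            templates:
              - templateName: anomaly-score-check   # hypothetical AnalysisTemplate
        - setWeight: 50
        - pause: {duration: 5m}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;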

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nlwboowb0x20p2w0ru9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nlwboowb0x20p2w0ru9.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is an essential aspect of how we help our developers deploy confidently without knowing how to set up this complex solution.&lt;/p&gt;

&lt;h4&gt;
  
  
  Intelligent autoscaling
&lt;/h4&gt;

&lt;p&gt;IKS AIR also automatically recommends scaling solutions for applications. To do this, it must determine the application’s resource sizing, such as memory and CPU, and handle unexpected events like OOMKills or evictions.&lt;/p&gt;

&lt;p&gt;The platform must also handle horizontal scaling to ensure the applications operate correctly at scale and with varying load levels. It needs to identify the metrics the application should scale on and the minimum and maximum number of replicas. Naturally, these are all primarily data-driven problems.&lt;/p&gt;
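&lt;p&gt;For context, these are the knobs on a standard Kubernetes &lt;code&gt;autoscaling/v2&lt;/code&gt; HorizontalPodAutoscaler that such recommendations target (the values here are placeholders, not output from our system):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 3          # the min/max bounds the recommender would tune
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu         # the metric the application scales on
        target:
          type: Utilization
          averageUtilization: 70
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;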

&lt;p&gt;AI will significantly impact capacity planning and autoscaling, helping us to be more efficient with our computing resources. So we're building an intelligent autoscaling recommendation system to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduce the burden on our developers&lt;/li&gt;
&lt;li&gt;Help us ensure our workloads have the resources they need&lt;/li&gt;
&lt;li&gt;Improve the efficiency of our platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The basic underlying idea is this: We have components in the cluster handling short-window scaling operations and emitting metrics. Subsequently, these metrics are analyzed by a group of ML models that make long-window capacity and scaling recommendations. The solutions to different scaling problems are then applied back to the clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traffic management
&lt;/h3&gt;

&lt;p&gt;Another big challenge we've identified for our developers is the configuration and management of network traffic.&lt;/p&gt;

&lt;p&gt;While some applications need to use particular capabilities in our networking environment, we found that most only need to change a few common configurations. Our solution simplifies endpoint management, unifying the configuration of our API gateway and service mesh while providing graduated complexity as required. Here is an example of the traffic configuration of a service on our developer platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj52795p6hko9ov1685uk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj52795p6hko9ov1685uk.png" alt="Image description" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most applications only need to configure routes and throttling. However, if required, they can toggle on &lt;strong&gt;Advanced Configs&lt;/strong&gt;, which gives them access to CORS and OAuth scopes. They can edit the underlying YAML configuration for even more complex use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7pq18ubd6600lrhk26l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7pq18ubd6600lrhk26l.png" alt="Image description" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Debug Tools
&lt;/h3&gt;

&lt;p&gt;With the platform abstracted, our service developers faced another challenge: troubleshooting their services. And we know that abstraction and debuggability don’t go hand in hand. Therefore, it was critical for us to not only provide a paved path but also to serve developers’ debuggability needs.&lt;/p&gt;

&lt;p&gt;To accomplish this, we provided an extremely developer-friendly debugging experience in the developer portal. Service developers don't need to know about Kubernetes primitives or have any historical knowledge of the platform. Our aim is to democratize debug tooling across teams, as we saw many developers already juggling many different tools. This approach will help us reduce MTTR and friction in debugging workflows.&lt;/p&gt;

&lt;h4&gt;
  
  
  Interactive debugging using ephemeral containers
&lt;/h4&gt;

&lt;p&gt;First, we've provided our developers with an interactive debugging shell experience using ephemeral containers. Ephemeral containers, a GA feature as of the Kubernetes 1.25 release, let you launch a custom-built, transient debug container into an existing pod to troubleshoot the main app container in the service pod. This is great for introspection and debugging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s8liyhl8dye61ax4bns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s8liyhl8dye61ax4bns.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This ephemeral debug container shares the PID, IPC, and UTS namespaces of the target container. Given that the containers in a pod already share the pod’s network namespace, the debug container is set up perfectly to debug issues in the app container.&lt;/p&gt;
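&lt;p&gt;Outside the portal, the raw CLI equivalent is &lt;code&gt;kubectl debug&lt;/code&gt;, which attaches an ephemeral container to a running pod (the pod and container names here are made up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attach an ephemeral busybox container to a running pod,
# sharing the process namespace of the "app" container
kubectl debug -it my-service-7d4b9-x2x1z \
  --image=busybox:1.36 \
  --target=app \
  -- sh
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;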

&lt;p&gt;This is a demo of how this looks on our development platform:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1000711859" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the shell icon.&lt;/li&gt;
&lt;li&gt;Select a host, which is a pod. Select a pod and click Connect.&lt;/li&gt;
&lt;li&gt;From here, an ephemeral container will try to connect to the particular app container.&lt;/li&gt;
&lt;li&gt;Once it attaches and a connection is established, a session is started.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this way, we have hidden the complexity of using kubectl exec to access a pod or even get a kubeconfig. A developer can quickly use this frictionless experience to debug their service.&lt;/p&gt;

&lt;h4&gt;
  
  
  One-click debugging
&lt;/h4&gt;

&lt;p&gt;Another feature we have provided is one-click debugging. We have used &lt;a href="https://argoproj.github.io/workflows/" rel="noopener noreferrer"&gt;Argo Workflows&lt;/a&gt; for the workflow implementation, which is ideal for defining the sequence of steps needed for a debugging workflow. Specific debugging techniques are required based on the language and framework. We also want to preserve as much application context, structures, and code references as possible while debugging a service.&lt;/p&gt;

&lt;p&gt;At Intuit, we determined that our top two languages are Java and Golang. These are some of the language-specific debugging tools that we might use:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gtd4ly2mhvyc7ikpmry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gtd4ly2mhvyc7ikpmry.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What does this look like in our developer portal? For a Java service, the user interacts with the UI to take a thread or heap dump on a target pod. When they hit &lt;strong&gt;Add to Debug List&lt;/strong&gt; and add that specific pod or host, a workflow is executed in the background, running a sequence of steps to perform the thread dump and heap dump. The developer can later download the artifact and use their preferred analysis tool. These artifacts are available for download for only 24 hours.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbadtcfxf4cuzqmoj6s9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbadtcfxf4cuzqmoj6s9h.png" alt="Image description" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;To conclude this post, let’s go over the key takeaways of building this paved road:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increases developer release velocity&lt;/li&gt;
&lt;li&gt;Streamlines platform migrations, with very little friction for service developers and their teams&lt;/li&gt;
&lt;li&gt;Reduces the time to get a service into production, since the platform handles much of the heavy lifting, taking that burden off the developer&lt;/li&gt;
&lt;li&gt;Helps reduce potential incidents caused by infrastructure misconfigurations&lt;/li&gt;
&lt;li&gt;Provides a better developer experience by abstracting infrastructure network connectivity and providing intelligent autoscaling for managing service availability&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opensource</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Getting Started with GraphQL</title>
      <dc:creator>Lucy</dc:creator>
      <pubDate>Wed, 31 Jul 2024 14:00:00 +0000</pubDate>
      <link>https://forem.com/intuitdev/getting-started-with-graphql-am7</link>
      <guid>https://forem.com/intuitdev/getting-started-with-graphql-am7</guid>
      <description>&lt;p&gt;Welcome to the world of GraphQL! At its core, GraphQL is both a query language for your API and a server-side runtime for executing those queries. It uses a type system you define for your data. GraphQL isn’t just another tool—it represents a new way of thinking about how data is loaded and managed in applications. If you’re coming from a background in RESTful APIs, you are in for a treat.&lt;/p&gt;

&lt;p&gt;Most of us are familiar with companies like Meta (Facebook), who &lt;a href="https://engineering.fb.com/2015/09/14/core-infra/graphql-a-data-query-language/" rel="noopener noreferrer"&gt;developed GraphQL&lt;/a&gt;, or others who have adopted GraphQL to streamline and power their data interactions. Those companies include &lt;a href="https://docs.github.com/en/graphql" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, &lt;a href="https://www.apollographql.com/blog/how-wayfair-achieved-graphql-success-with-federated-graphql" rel="noopener noreferrer"&gt;Wayfair&lt;/a&gt;, &lt;a href="https://developer.atlassian.com/platform/atlassian-graphql-api/graphql/#:~:text=The%20Atlassian%20platform%20GraphQL%20API%20allows%20you%20to%20access%20to,or%20cross%2Dproduct%20work%20activities." rel="noopener noreferrer"&gt;Atlassian&lt;/a&gt;, and &lt;a href="https://www.apollographql.com/blog/how-intuit-handled-their-busiest-time-of-year-with-apollo-router" rel="noopener noreferrer"&gt;Intuit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What makes GraphQL stand out? Ultimately, GraphQL gives you the ability to request exactly the data you need—nothing more and nothing less—and to retrieve that from a single endpoint.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll cover the essentials: what GraphQL is, how it differs from REST, and how to start integrating it into your projects. We’ll walk you through this step by step. Let’s begin with the question most asked by API developers who are new to GraphQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does GraphQL Differ from REST?
&lt;/h2&gt;

&lt;p&gt;When working with APIs, you're probably used to REST, where each resource, like users or products, has its own URL. This setup can lead to a lot of endpoints, and sometimes, fetching simple information requires multiple requests. For instance, imagine you need to generate a summary of an order for a customer. You might need the date the order was placed and the names of the products in that order. With REST, this would typically involve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sending a request to the &lt;code&gt;/orders&lt;/code&gt; endpoint to get the order dates.&lt;/li&gt;
&lt;li&gt;Sending another request to the &lt;code&gt;/products&lt;/code&gt; endpoint to fetch the product names.&lt;/li&gt;
&lt;li&gt;Possibly sending another request to the &lt;code&gt;/user&lt;/code&gt; endpoint to confirm customer details.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In each of these calls, REST might send back much more data than you need—like product prices, descriptions, or user addresses—which you won’t use for your summary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selective data retrieval&lt;/strong&gt;&lt;br&gt;
With GraphQL, you avoid this inefficiency by specifying exactly what you need: just the order date and product names, nothing more. This not only cuts down on the data transferred over the network but also significantly speeds up response times, making applications quicker and more responsive.&lt;/p&gt;
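&lt;p&gt;For the order-summary example above, such a query might look like this (the field names are hypothetical; they’re defined by whatever schema the server exposes):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query OrderSummary {
  order(id: "1042") {
    placedAt          # just the order date...
    products {
      name            # ...and the product names, nothing else
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;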

&lt;p&gt;&lt;strong&gt;Single endpoint approach&lt;/strong&gt;&lt;br&gt;
Unlike REST, which uses multiple endpoints, GraphQL operates through a single endpoint. This simplifies the workflow because you don’t have to manage multiple URLs; everything happens in one place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Efficiency and performance&lt;/strong&gt;&lt;br&gt;
One of the biggest advantages of using GraphQL is its efficiency. Since you can fetch all the required data in a single request, it reduces the number of requests sent to the server. This is particularly beneficial in complex systems or mobile applications where network performance is crucial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strongly typed schema&lt;/strong&gt;&lt;br&gt;
GraphQL uses a strongly typed system where each operation is backed by a schema defined on the server. This schema acts as a contract between the client and the server, ensuring only valid queries are allowed. It helps catch errors early in the development process, improving the quality of your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Basics of GraphQL
&lt;/h2&gt;

&lt;p&gt;As you start your journey with GraphQL, it’s important to grasp its fundamental concepts. These include the ways you can query data, modify it, and define the structure of data using GraphQL’s type system. Let’s break these down:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core operations&lt;/strong&gt;&lt;br&gt;
At its heart, GraphQL is built around three main operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Queries: These are used to read or fetch data. Unlike REST, where you might receive more information than you requested, GraphQL queries are designed to return exactly what you specify, no more and no less. This precision not only makes data retrieval more efficient but also easier to predict and handle in your applications.&lt;/li&gt;
&lt;li&gt;Mutations: While queries fetch data, mutations change data. Whether you’re adding new data, modifying existing data, or deleting it, mutations are your go-to operations. They are clearly defined in the schema, so you know exactly what kind of data modification is allowed and what isn’t.&lt;/li&gt;
&lt;li&gt;Subscriptions: Although we won't delve deeply into subscriptions here, it's useful to know they exist for real-time data updates. Like listening to live updates from a server, subscriptions allow your application to get immediate data as soon as it changes on the server.&lt;/li&gt;
&lt;/ul&gt;
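&lt;p&gt;To make the three operation types concrete, here is what each looks like as a GraphQL document. These are purely illustrative; the field and type names are hypothetical and not tied to any particular schema.&lt;/p&gt;

```javascript
// Illustrative GraphQL documents for the three operation types.
// The field and type names here are hypothetical examples.
const query = `
  query GetUser {
    user(id: "1") { name }
  }`;

const mutation = `
  mutation RenameUser {
    updateUser(id: "1", name: "Alice B.") { id name }
  }`;

const subscription = `
  subscription OnOrderCreated {
    orderCreated { id userId }
  }`;

// Each document begins with its operation keyword.
console.log(query.trim().split(/\s/)[0]);        // "query"
console.log(mutation.trim().split(/\s/)[0]);     // "mutation"
console.log(subscription.trim().split(/\s/)[0]); // "subscription"
```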

&lt;p&gt;&lt;strong&gt;Type system&lt;/strong&gt;&lt;br&gt;
GraphQL’s type system is like a blueprint for your data. Here’s how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.graphql-java.com/documentation/schema/" rel="noopener noreferrer"&gt;Schema&lt;/a&gt; definition&lt;/strong&gt;: This is where you define every type of data that can be queried or mutated through your GraphQL API. It outlines the structure of both input data (what you send) and output data (what you receive).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strong typing&lt;/strong&gt;: Every piece of data has a type, like String, Integer, or a custom type like User or Product. This strict typing helps catch errors during development, as GraphQL will only process queries and mutations that fit the schema.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resolvers&lt;/strong&gt;: For every field in your schema, there's a function called a resolver that's responsible for fetching the data for that field. This means whenever a query or mutation requests a specific field, the resolver for that field runs to retrieve or update the data.&lt;/li&gt;
&lt;/ul&gt;
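&lt;p&gt;Stripped of any framework, a resolver is just a function that returns the data for one field. The following is a minimal sketch with no GraphQL library involved; the data and the &lt;code&gt;product(id)&lt;/code&gt; field it resolves are hypothetical.&lt;/p&gt;

```javascript
// A resolver is just a function that returns the data for one field.
// Minimal sketch with no GraphQL library; the data shape is hypothetical.
const products = [
  { id: "1", name: "Laptop" },
  { id: "2", name: "Smartphone" },
];

// Resolver for a hypothetical `product(id)` field: look the product up by id.
// GraphQL servers pass the parent object first and the field arguments second.
const productResolver = (parent, args) =>
  products.find((p) => p.id === args.id);

console.log(productResolver(null, { id: "2" }).name); // "Smartphone"
```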

&lt;p&gt;Understanding these basics sets the foundation for effectively using GraphQL in your projects. You’ll find that with a strong grasp of queries, mutations, and the type system, transitioning from traditional REST approaches becomes much smoother and more intuitive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Up a GraphQL Environment&lt;/strong&gt;&lt;br&gt;
In this section, we’ll set up a GraphQL server to handle a dataset involving users, orders, and products. This will give you a practical feel for how GraphQL manages relationships and queries. We'll use Node.js for our server setup, and we’ll take advantage of a package from Apollo Server.&lt;/p&gt;

&lt;p&gt;Apollo GraphQL is widely recognized as a leading provider of open-source tools for GraphQL. It offers a range of user-friendly solutions, such as Apollo Server and Apollo Studio, that help developers create and manage their GraphQL applications more efficiently. It has strong community support and straightforward documentation, making it easy for anyone to start working with GraphQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation and initial configuration&lt;/strong&gt;&lt;br&gt;
Whether you’re working in an IDE or directly from the command line, getting started with GraphQL simply requires that you have Node.js and npm installed on your machine. The easiest way to do this is through nvm (Node Version Manager), which you can use to install the latest version of Node.js and npm. You can follow these &lt;a href="https://nodejs.org/en/download/package-manager" rel="noopener noreferrer"&gt;installation instructions&lt;/a&gt; for the simple steps to do this.&lt;/p&gt;

&lt;p&gt;With Node.js and npm installed, you’re ready to begin working with GraphQL.&lt;/p&gt;

&lt;p&gt;Step 1: Project initialization&lt;br&gt;
First, we’ll start by creating our project directory and initializing it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~$ mkdir project &amp;amp;&amp;amp; cd project
~/project$ npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 2: Install dependencies&lt;br&gt;
Next, install &lt;a href="https://www.apollographql.com/docs/apollo-server/" rel="noopener noreferrer"&gt;Apollo Server&lt;/a&gt; and GraphQL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ npm install apollo-server graphql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 3: Data preparation&lt;br&gt;
Prepare the data model with some dummy data by creating a file called &lt;code&gt;data.json&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ touch data.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This file will contain a mock initial data set, which makes it easier to get up and running with GraphQL, as opposed to needing to set up an entire database backend. The &lt;code&gt;data.json&lt;/code&gt; file should have the following contents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "users": [
    { "id": "1", "name": "Alice" },
    { "id": "2", "name": "Bob" }
  ],
  "orders": [
    { "id": "1", "userId": "1", "products": [{"productId": "1", "quantity": 1}, {"productId": "2", "quantity": 2}] },
    { "id": "2", "userId": "2", "products": [{"productId": "3", "quantity": 1}, {"productId": "4", "quantity": 1}] }
  ],
  "products": [
    { "id": "1", "name": "Laptop", "description": "High performance laptop", "price": 1249.99 },
    { "id": "2", "name": "Smartphone", "description": "Latest model smartphone", "price": 575.00 },
    { "id": "3", "name": "Tablet", "description": "Portable and powerful tablet", "price": 410.99 },
    { "id": "4", "name": "Headphones", "description": "Noise cancelling headphones", "price": 199.00 }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Server setup&lt;/p&gt;

&lt;p&gt;Step 4: Define GraphQL schema and resolvers&lt;br&gt;
In a &lt;code&gt;server.js&lt;/code&gt; file, define our GraphQL schema (with Apollo Server, the convention is to define this in &lt;code&gt;typeDefs&lt;/code&gt;), which includes types for &lt;code&gt;User&lt;/code&gt;, &lt;code&gt;Order&lt;/code&gt;, &lt;code&gt;Product&lt;/code&gt;, and their relationships. Also, define the &lt;code&gt;resolvers&lt;/code&gt; to specify how the data for each field in your schema is fetched from the data source.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ touch server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The server.js file should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// server.js
const { ApolloServer, gql } = require('apollo-server');
const data = require('./data.json');

// Type definitions define the "shape" of your data and specify which ways the data can be fetched from the GraphQL server.
const typeDefs = gql`
  type User {
    id: ID
    name: String
    orders: [Order]
  }
  type Order {
    id: ID
    userId: ID
    products: [OrderProduct]
  }
  type OrderProduct {
    productId: ID
    quantity: Int
    product: Product
  }
  type Product {
    id: ID
    name: String
    description: String
    price: Float
  }
`;

// Resolvers define the technique for fetching the types defined in the schema.
const resolvers = {
  User: {
    orders: (user) =&amp;gt; data.orders.filter(
        order =&amp;gt; order.userId === user.id
    ),
  },
  OrderProduct: {
    product: (orderProduct) =&amp;gt; data.products.find(
        product =&amp;gt; product.id === orderProduct.productId
    ),
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’ve defined two &lt;code&gt;resolvers&lt;/code&gt; above. Notice how in our &lt;code&gt;typeDefs&lt;/code&gt; that a &lt;code&gt;User&lt;/code&gt; has an array of &lt;code&gt;Orders&lt;/code&gt;, and an &lt;code&gt;Order&lt;/code&gt; has an array of &lt;code&gt;OrderProducts&lt;/code&gt;. In our &lt;code&gt;resolvers&lt;/code&gt;, we make it clear that a user’s &lt;code&gt;orders&lt;/code&gt; are, naturally, those where the &lt;code&gt;userId&lt;/code&gt; matches the user’s &lt;code&gt;id&lt;/code&gt;. A similar approach applies to an &lt;code&gt;Order&lt;/code&gt; and its &lt;code&gt;OrderProducts&lt;/code&gt;. We use &lt;code&gt;resolvers&lt;/code&gt; to help define the relationship between data models.&lt;/p&gt;
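&lt;p&gt;Because these relationship resolvers are plain functions, you can exercise them directly, without starting the server. The sketch below inlines a subset of the &lt;code&gt;data.json&lt;/code&gt; contents so it is self-contained.&lt;/p&gt;

```javascript
// Exercise the User.orders and OrderProduct.product resolvers as plain
// functions, with a subset of data.json inlined for self-containment.
const data = {
  users: [{ id: "1", name: "Alice" }, { id: "2", name: "Bob" }],
  orders: [
    { id: "1", userId: "1", products: [{ productId: "1", quantity: 1 }] },
    { id: "2", userId: "2", products: [{ productId: "3", quantity: 1 }] },
  ],
  products: [
    { id: "1", name: "Laptop", price: 1249.99 },
    { id: "3", name: "Tablet", price: 410.99 },
  ],
};

const resolvers = {
  User: {
    orders: (user) => data.orders.filter((o) => o.userId === user.id),
  },
  OrderProduct: {
    product: (op) => data.products.find((p) => p.id === op.productId),
  },
};

// Alice's orders, then the product behind her first order line.
const aliceOrders = resolvers.User.orders(data.users[0]);
console.log(aliceOrders.length); // 1
console.log(resolvers.OrderProduct.product(aliceOrders[0].products[0]).name); // "Laptop"
```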

&lt;p&gt;Step 5: Initialize and launch Apollo Server&lt;br&gt;
Next, we initialize Apollo Server with the schema and resolvers, and then we start it up to listen for requests. Add the following to the bottom of &lt;code&gt;server.js&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) =&amp;gt; {
  console.log(`🚀 Server ready at ${url}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This setup gives us a fully functional GraphQL server using Apollo. Next, we’ll dive into how to construct and execute queries to interact with this server setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Working with Queries&lt;/strong&gt;&lt;br&gt;
With our Apollo Server up and running, we can build out the &lt;code&gt;typeDefs&lt;/code&gt; and &lt;code&gt;resolvers&lt;/code&gt; to handle basic queries.&lt;/p&gt;

&lt;p&gt;Define &lt;code&gt;Query&lt;/code&gt; in &lt;code&gt;typeDefs&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s add a &lt;code&gt;Query&lt;/code&gt; type to our &lt;code&gt;typeDefs&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const typeDefs = gql`
  type Query {
    users: [User]
    user(id: ID!): User
    orders: [Order]
    order(id: ID!): Order
    products: [Product]
    product(id: ID!): Product
  }
  type User {...}
  type Order {...}
  …
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These definitions establish the type of data that will be returned for each given query. For example, the &lt;code&gt;users&lt;/code&gt; query will return an array of &lt;code&gt;Users&lt;/code&gt;, while the &lt;code&gt;user&lt;/code&gt; query—which will take a single &lt;code&gt;id&lt;/code&gt;—will return a single &lt;code&gt;User&lt;/code&gt;. Of course, it would make sense that the single &lt;code&gt;User&lt;/code&gt; returned is the one with the matching &lt;code&gt;id&lt;/code&gt;. However, we use &lt;code&gt;resolvers&lt;/code&gt; to make that explicit.&lt;/p&gt;

&lt;p&gt;Add &lt;code&gt;Query&lt;/code&gt; to &lt;code&gt;resolvers&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s update our &lt;code&gt;resolvers&lt;/code&gt; object to look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const resolvers = {
  Query: {
    users: () =&amp;gt; data.users,
    user: (_, { id }) =&amp;gt; data.users.find(user =&amp;gt; user.id === id),
    orders: () =&amp;gt; data.orders,
    order: (_, { id }) =&amp;gt; data.orders.find(order =&amp;gt; order.id === id),
    products: () =&amp;gt; data.products,
    product: (_, { id }) =&amp;gt; data.products.find(product =&amp;gt; product.id === id),
  },
  User: { … },
  OrderProduct: { … }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We see from how we define each &lt;code&gt;Query&lt;/code&gt; in our &lt;code&gt;resolvers&lt;/code&gt; that the &lt;code&gt;users&lt;/code&gt; query will map to our &lt;code&gt;data.users&lt;/code&gt; array. Meanwhile, a specific &lt;code&gt;product&lt;/code&gt; query will search the &lt;code&gt;data.products&lt;/code&gt; array for the &lt;code&gt;product&lt;/code&gt; with the matching &lt;code&gt;id&lt;/code&gt;.&lt;/p&gt;
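&lt;p&gt;As a quick sanity check, these query resolvers can also be called directly. Note that the arguments arrive as the second parameter; the first (the parent object) is unused here, which is why the examples pass &lt;code&gt;null&lt;/code&gt; and the resolvers name it &lt;code&gt;_&lt;/code&gt;. The data is inlined so the sketch runs standalone.&lt;/p&gt;

```javascript
// Query resolvers receive (parent, args); parent is unused here.
// Data inlined from data.json so the sketch runs standalone.
const data = {
  users: [{ id: "1", name: "Alice" }, { id: "2", name: "Bob" }],
};

const Query = {
  users: () => data.users,
  user: (_, { id }) => data.users.find((u) => u.id === id),
};

console.log(Query.users().length);               // 2
console.log(Query.user(null, { id: "1" }).name); // "Alice"
```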

&lt;p&gt;Start server&lt;br&gt;
Now, we’re ready to start our server. In the terminal, we execute the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ node server.js
🚀 Server ready at http://localhost:4000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Test basic queries&lt;/p&gt;

&lt;p&gt;To interact with our GraphQL API from the command line, we can use &lt;code&gt;curl&lt;/code&gt; to send HTTP requests to the &lt;code&gt;/graphql&lt;/code&gt; endpoint. If you were to visit &lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt; directly in your browser, you would see the Apollo Studio web interface (more on that below).&lt;/p&gt;

&lt;p&gt;Let's say we want to fetch the &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; of every user. The request and its response would look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~$ curl \
   -X POST \
   -H "Content-Type: application/json" \
   -d '{"query": "{ users { id name } }"}' \
   http://localhost:4000/graphql

{"data":{"users":[{"id":"1","name":"Alice"},{"id":"2","name":"Bob"}]}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above request, we are sending a &lt;code&gt;users&lt;/code&gt; query, which we defined in our &lt;code&gt;resolvers&lt;/code&gt; to be “all the users in the &lt;code&gt;data.users&lt;/code&gt; array.” Based on the &lt;code&gt;User&lt;/code&gt; schema defined in &lt;code&gt;typeDefs&lt;/code&gt;, we can ask for an &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;name&lt;/code&gt;, and &lt;code&gt;orders&lt;/code&gt; for each &lt;code&gt;User&lt;/code&gt;. However, in GraphQL, we get to decide how much information we want to receive. In the case of this request, we only want the &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; for each &lt;code&gt;User&lt;/code&gt;.&lt;/p&gt;
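&lt;p&gt;Conceptually, you can think of a selection set as picking keys off each object the resolver returns. The &lt;code&gt;select&lt;/code&gt; helper below is purely illustrative (Apollo Server does this for you based on the query), but it captures the idea that the response contains exactly the fields you asked for.&lt;/p&gt;

```javascript
// Conceptual sketch of field selection: the server returns only the
// fields the query names. This `select` helper is illustrative only;
// Apollo Server performs the equivalent based on the query's selection set.
const users = [
  { id: "1", name: "Alice", orders: [] },
  { id: "2", name: "Bob", orders: [] },
];

const select = (obj, fields) =>
  Object.fromEntries(fields.map((f) => [f, obj[f]]));

// Equivalent of asking for `{ users { id name } }`:
const result = users.map((u) => select(u, ["id", "name"]));
console.log(JSON.stringify(result));
// [{"id":"1","name":"Alice"},{"id":"2","name":"Bob"}]
```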

&lt;p&gt;Let’s send another request. This time, we will retrieve all the &lt;code&gt;orders&lt;/code&gt; for a specific &lt;code&gt;User&lt;/code&gt;, and for each &lt;code&gt;Order&lt;/code&gt;, we also want to fetch certain information about the associated &lt;code&gt;OrderProducts&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~$ curl \
   -X POST \
   -H "Content-Type: application/json" \
   -d '{"query": "{ user(id:1) { orders { id products { quantity product { name price } } } } }"}' \
   http://localhost:4000/graphql | json_pp

{
   "data" : {
      "user" : {
         "orders" : [
            {
               "id" : "1",
               "products" : [
                  {
                     "product" : {
                        "name" : "Laptop",
                        "price" : 1249.99
                     },
                     "quantity" : 1
                  },
                  {
                     "product" : {
                        "name" : "Smartphone",
                        "price" : 575
                     },
                     "quantity" : 2
                  }
               ]
            }
         ]
      }
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice that both of our requests were &lt;code&gt;POST&lt;/code&gt; requests to the same &lt;code&gt;/graphql&lt;/code&gt; endpoint. All our requests will be &lt;code&gt;POST&lt;/code&gt; requests, and they’ll all go to this endpoint. The request payload indicates that we are sending a &lt;code&gt;query&lt;/code&gt;, with the operation itself passed as a JSON-encoded string.&lt;/p&gt;
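&lt;p&gt;The same payload can be built and sent from Node itself. A minimal sketch, using GraphQL variables rather than string interpolation; the &lt;code&gt;fetch&lt;/code&gt; call is commented out because it assumes the local server from earlier is running.&lt;/p&gt;

```javascript
// Build the JSON payload for a GraphQL POST request. The body always has
// a `query` key; `variables` is optional but preferred over interpolating
// values into the query string.
const body = JSON.stringify({
  query: "query GetUser($id: ID!) { user(id: $id) { name } }",
  variables: { id: "1" },
});

console.log(body.includes('"variables":{"id":"1"}')); // true

// To actually send it (requires the server from earlier to be running
// and Node 18+ for the built-in fetch):
// const res = await fetch("http://localhost:4000/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });
```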

&lt;p&gt;&lt;strong&gt;Mutations and Updating Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://graphql.org/learn/queries/#mutations" rel="noopener noreferrer"&gt;Mutations&lt;/a&gt; are essential for performing write operations in GraphQL. Unlike queries, which are used to fetch data, mutations change data—anything from creating and updating to deleting records. Let’s define some common mutations for your GraphQL API and show how to implement them.&lt;/p&gt;

&lt;p&gt;Define &lt;code&gt;Mutation&lt;/code&gt; in &lt;code&gt;TypeDefs&lt;/code&gt;&lt;br&gt;
First, we’ll extend our GraphQL schema to include mutation type definitions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const typeDefs = gql`
  type Query { … }
  type Mutation {
    addUser(name: String!): User
    updateUser(id: ID!, name: String!): User
    addProduct(name: String!, description: String, price: Float!): Product
    addOrder(userId: ID!, products: [ProductInput!]!): Order
  }
  input ProductInput {
    productId: ID!
    quantity: Int!
  }
  type User { … }
  type Order { … }
`;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’re adding four mutations here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;addUser&lt;/code&gt; will add a new user with the given string &lt;code&gt;name&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;updateUser&lt;/code&gt; will update the &lt;code&gt;name&lt;/code&gt; of the user with a given &lt;code&gt;id&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;addProduct&lt;/code&gt; will add a new product with the given &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;description&lt;/code&gt;, and &lt;code&gt;price&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;addOrder&lt;/code&gt; will add a new order for a user with the given id, but it will also need to take an array of &lt;code&gt;ProductInput&lt;/code&gt; objects, which we also define as a combination of a &lt;code&gt;productId&lt;/code&gt; and &lt;code&gt;quantity&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add &lt;code&gt;Mutation&lt;/code&gt; to &lt;code&gt;resolvers&lt;/code&gt;&lt;br&gt;
Next, we define &lt;code&gt;resolvers&lt;/code&gt; for our mutations.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const resolvers = {
  Query: { ... },
  Mutation: {
    addUser: (_, { name }) =&amp;gt; {
      const newUser = { id: String(data.users.length + 1), name };
      data.users.push(newUser);
      return newUser;
    },
    updateUser: (_, { id, name }) =&amp;gt; {
      const user = data.users.find(user =&amp;gt; user.id === id);
      if (!user) return null;
      user.name = name;
      return user;
    },
    addOrder: (_, { userId, products }) =&amp;gt; {
      const newOrder = {
        id: String(data.orders.length + 1),
        userId,
        products
      };
      data.orders.push(newOrder);
      return newOrder;
    },
    addProduct: (_, { name, description, price }) =&amp;gt; {
      const newProduct = { id: String(data.products.length + 1), name, description, price };
      data.products.push(newProduct);
      return newProduct;
    }
  },
  User: { ... },
  OrderProduct: { ... }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In our &lt;code&gt;resolvers&lt;/code&gt;, we specify how the Apollo Server should handle each of our mutation operations. This typically means creating a new object (or finding the object to be updated, in case of &lt;code&gt;updateUser&lt;/code&gt;) and then updating &lt;code&gt;data&lt;/code&gt; with that object.&lt;/p&gt;
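&lt;p&gt;Like the query resolvers, these mutation resolvers are plain functions and can be tested in isolation. The sketch below applies the &lt;code&gt;addUser&lt;/code&gt; pattern to an inlined copy of the data.&lt;/p&gt;

```javascript
// Apply the addUser mutation resolver pattern to an in-memory data copy.
const data = {
  users: [{ id: "1", name: "Alice" }, { id: "2", name: "Bob" }],
};

const addUser = (_, { name }) => {
  const newUser = { id: String(data.users.length + 1), name };
  data.users.push(newUser);
  return newUser;
};

const charlie = addUser(null, { name: "Charlie" });
console.log(charlie.id);        // "3"
console.log(data.users.length); // 3
```

Note that the `length + 1` id scheme is only suitable for a demo: once records can be deleted, it would reuse ids, so a production server would delegate id generation to the database.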

&lt;p&gt;Restart server&lt;br&gt;
With our mutations defined, let’s restart our server with &lt;code&gt;node server.js&lt;/code&gt; again.&lt;/p&gt;

&lt;p&gt;Test mutation operations in Apollo Studio&lt;br&gt;
Apollo has a convenient and feature-rich web interface called Apollo Explorer that can connect to our development environment server to make querying and testing easier. With our server started, we can visit the server URL (if you are running locally, then it will be &lt;code&gt;http://localhost:4000&lt;/code&gt;) in our browser, and this will redirect us to Apollo Explorer. Alternatively, visit &lt;a href="https://studio.apollographql.com/sandbox/explorer" rel="noopener noreferrer"&gt;https://studio.apollographql.com/sandbox/explorer&lt;/a&gt; directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8wqcn6kn4y0jxzihq8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8wqcn6kn4y0jxzihq8l.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s test out a few mutation operations in Apollo Explorer. For example, the following operation calls the addUser mutation to add a new user with the name Charlie, and then it returns the resulting id and name of the newly added user.&lt;/p&gt;

&lt;p&gt;In Apollo Explorer, the operation looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe77pf3nxj7x7tyl00ss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe77pf3nxj7x7tyl00ss.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What if we wanted to add a new order for Charlie? It would look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzvuanxdeqdq2ebjz94v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzvuanxdeqdq2ebjz94v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, here’s how we might perform an updateUser mutation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff545v92zwn09m9p12qk9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff545v92zwn09m9p12qk9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that a mutation can include a selection set on its result, so you can fetch fields of the newly changed data in the same round trip, with no follow-up query needed.&lt;/p&gt;

&lt;p&gt;A note on handling data&lt;br&gt;
So far, our examples have used an in-memory object called &lt;code&gt;data&lt;/code&gt;, initialized from a JSON file. This approach is great for learning and quick prototyping, but it isn’t suitable for production environments where data persistence is crucial.&lt;/p&gt;

&lt;p&gt;In a production setting, you would typically connect your GraphQL server to a &lt;a href="https://graphql.guide/background/databases/" rel="noopener noreferrer"&gt;persistent database&lt;/a&gt;. The resolvers we've written, which currently manipulate the in-memory &lt;code&gt;data&lt;/code&gt; object, would be adjusted to execute queries against this database instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error Handling and Debugging&lt;/strong&gt;&lt;br&gt;
Handling errors and debugging in GraphQL may differ slightly from how you would do it with traditional REST APIs. Here are some tips for approaching error handling and debugging efficiently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Check for &lt;code&gt;errors&lt;/code&gt;&lt;/strong&gt;: GraphQL operations typically return a list of errors in the response body. These might not be HTTP status errors but are instead part of the GraphQL response under the errors field. Therefore, you should always check for errors in the response alongside the data to ensure comprehensive error handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;: Implement logging on the server side to capture requests and errors. This can help you trace issues back to specific queries or mutations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate queries&lt;/strong&gt;: Use tools like Apollo Explorer during development to validate queries and mutations for syntax and structure before deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use clear error messages&lt;/strong&gt;: &lt;a href="https://www.apollographql.com/docs/apollo-server/data/errors#custom-errors" rel="noopener noreferrer"&gt;Design your &lt;code&gt;resolvers&lt;/code&gt; to throw errors&lt;/a&gt; which include clear and descriptive error messages. This helps clients understand what went wrong without the need to dig into server-side logs.&lt;/li&gt;
&lt;/ul&gt;
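&lt;p&gt;The first point deserves emphasis: a GraphQL server usually responds with HTTP 200 even when a resolver fails, so a client must inspect the &lt;code&gt;errors&lt;/code&gt; array alongside &lt;code&gt;data&lt;/code&gt;. A minimal sketch of that check, using a hypothetical response object:&lt;/p&gt;

```javascript
// GraphQL servers usually respond with HTTP 200 even when a resolver
// fails, so always inspect the `errors` array alongside `data`.
// This response object is a hypothetical example.
const response = {
  data: { user: null },
  errors: [{ message: "User not found", path: ["user"] }],
};

function unwrap(resp) {
  if (resp.errors && resp.errors.length > 0) {
    // Aggregate the error messages; real code might also log paths/extensions.
    throw new Error(resp.errors.map((e) => e.message).join("; "));
  }
  return resp.data;
}

try {
  unwrap(response);
} catch (e) {
  console.log(e.message); // "User not found"
}
```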

&lt;p&gt;&lt;strong&gt;Tools and Ecosystem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When working with GraphQL—especially as you’re just starting out—you don’t need to build everything from scratch. Take advantage of the rich ecosystem of tools designed to enhance your development experience. Apollo Studio featuring Explorer is a prime example, allowing you to build and test your queries interactively within a web interface. This makes it easier to visualize and manipulate your GraphQL operations.&lt;/p&gt;

&lt;p&gt;As an alternative in-browser GraphQL IDE, there is also &lt;a href="https://github.com/graphql/graphiql/tree/main/packages/graphiql" rel="noopener noreferrer"&gt;GraphiQL&lt;/a&gt;. For API testing and query building, &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;—traditionally known for handling REST APIs—also supports GraphQL. Additionally, &lt;a href="https://github.com/graphql-kit/graphql-voyager" rel="noopener noreferrer"&gt;GraphQL Voyager&lt;/a&gt; is a tool for visually exploring your GraphQL API as an interactive graph. This can be particularly useful for understanding and documenting the relationships between entities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Congratulations on navigating through the fundamentals of GraphQL! By now, you should have a solid understanding of how GraphQL differs from traditional REST APIs, its efficiency in fetching and manipulating data, and the flexibility it offers through queries and mutations.&lt;/p&gt;

&lt;p&gt;Throughout this guide, we've explored:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up a GraphQL environment using Apollo Server.&lt;/li&gt;
&lt;li&gt;Constructing and executing queries to retrieve data, demonstrating the efficiency of GraphQL in fetching exactly the data you need.&lt;/li&gt;
&lt;li&gt;Implementing mutations to modify data.&lt;/li&gt;
&lt;li&gt;Tips for error handling and debugging.&lt;/li&gt;
&lt;li&gt;Additional tools that can help you in your GraphQL learning journey.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Continue exploring GraphQL, and spend time looking into advanced features like subscriptions. As you design APIs for projects in the future, consider whether the efficiency and power of GraphQL may be a better fit for your needs than a traditional REST API. Keep learning, keep building, and make the most of what GraphQL has to offer!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvrzw1tzhygbfe6u1wdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvrzw1tzhygbfe6u1wdc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>opensource</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Open Source for Beginners: FAQ Guide</title>
      <dc:creator>Lucy</dc:creator>
      <pubDate>Fri, 15 Mar 2024 21:18:09 +0000</pubDate>
      <link>https://forem.com/intuitdev/open-source-for-beginners-quick-faq-guide-47a6</link>
      <guid>https://forem.com/intuitdev/open-source-for-beginners-quick-faq-guide-47a6</guid>
      <description>&lt;p&gt;Interested in contributing to open source, but not sure where to start? As a developer advocate working on &lt;a href="https://github.com/intuit"&gt;Intuit Open Source&lt;/a&gt;, I hear this quite a lot from our community. To help you get over that cold start problem, we've compiled a beginner-friendly FAQ to jumpstart your first open source contribution.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you prefer video, consider checking out our &lt;a href="https://youtu.be/XE22V0TU0cM?si=G8rW_FJsQW1Xyrg4"&gt;OSS beginners video guide&lt;/a&gt; as well!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where can I find good active projects to contribute to?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="http://firsttimersonly.com"&gt;firsttimersonly.com&lt;/a&gt; has a listing of various first-timer resources&lt;/li&gt;
&lt;li&gt;GitHub issues search: &lt;a href="https://github.com/issues?q=is%3Aopen+label%3A%22good+first+issue%22"&gt;#good-first-issue&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/MunGell/awesome-for-beginners"&gt;awesome-for-beginners&lt;/a&gt; (categorized by language)&lt;/li&gt;
&lt;li&gt;Nontechnical contribution resources: &lt;a href="https://github.com/szabgab/awesome-for-non-programmers"&gt;awesome-for-non-programmers&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our #1 tip: just do it! Don't get too stuck looking for the ideal first project, just choose something you &lt;em&gt;can&lt;/em&gt; do and start getting your feet wet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I feature OSS work on my resume or portfolio?&lt;/strong&gt;&lt;br&gt;
You can list open source contributions under the "Projects" or "Volunteering" headers on your resume. Rather than listing every contribution you've ever made, focus on the projects you're most associated with, and highlight the number of contributions you made to each of those projects.&lt;/p&gt;

&lt;p&gt;Even if it's just one pull request, it's worth showing off your work to potential employers. An open source PR demonstrates that you're able to get up to speed with an unknown codebase quickly, a valuable skill for any technical role in the industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What technologies do I need to know to contribute to open source?&lt;/strong&gt;&lt;br&gt;
A working understanding of Git and GitHub is essential, but beyond that, the specific technology stack and tools you'll need will depend on the project you want to contribute to. If you're looking to deepen your expertise in a particular skill set, take a closer look at the open source libraries out there leveraging the tools or languages you’re hoping to learn, and find one that fits your learning goals.&lt;/p&gt;

&lt;p&gt;For example, as a frontend developer looking to improve my TypeScript fluency, I could search GitHub for good first issues tagged on projects that use TypeScript.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens when multiple people want to work on the same issue?&lt;/strong&gt;&lt;br&gt;
Most maintainers will typically have a "first come, first serve" policy when it comes to assigning issues to contributors. If you see an issue you'd like to tackle, make sure it's up for grabs before starting work by commenting on the issue and getting a confirmation from a maintainer on the project. It’s also common that someone may claim an issue and then not make progress on it for a while, at which point you’re welcome to step in and take over (with the maintainers’ blessing, of course).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When should I stop taking good first issues?&lt;/strong&gt;&lt;br&gt;
The beauty of open source is that there's always something new to learn and contribute to. Take good first issues as long as you feel like you need to, but don't be afraid to move on when you outgrow them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can I generate interest in a new open source project?&lt;/strong&gt;&lt;br&gt;
Building a strong community around your project is key to its success. Offer good first issues, participate in events and open source conferences, create clean documentation and video guides, and don't hesitate to ask for feedback from others in the field. Remember that it takes time and consistency to build community momentum, and there are entire teams and OSPOs dedicated to this specific problem, so don’t be too hard on yourself if things don’t take off immediately or if you can’t achieve the perfectly consistent cadence. Little by little, you’ll get there!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What other resources can you recommend for beginners?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Here's a good &lt;a href="https://github.com/asmeurer/git-workflow"&gt;Git workflow walkthrough&lt;/a&gt;, or you can step through the &lt;a href="https://github.com/firstcontributions/first-contributions#first-contributions"&gt;first-contributions&lt;/a&gt; guide&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.conventionalcommits.org/en/v1.0.0/"&gt;Conventional commits&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let us know in the comments if you have any other questions, and we’ll answer as many as we can. If you’re still looking around for a project, our Intuit Open Source projects can also be a great place to start.&lt;/p&gt;

&lt;p&gt;Happy coding, open-sourcerors!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting started with LLM and LangChain</title>
      <dc:creator>Ayush</dc:creator>
      <pubDate>Wed, 13 Mar 2024 14:02:46 +0000</pubDate>
      <link>https://forem.com/intuitdev/getting-started-with-llm-and-langchain-fni</link>
      <guid>https://forem.com/intuitdev/getting-started-with-llm-and-langchain-fni</guid>
      <description>&lt;p&gt;Large Language Models (LLMs) have revolutionized the field of Natural Language Processing (NLP) and Artificial Intelligence (AI). However, building and training LLMs can be a complex and resource-intensive process, requiring significant computational power and expertise. That's where &lt;a href="https://www.langchain.com/"&gt;langchain&lt;/a&gt; comes in—LangChain provides a range of tools and features that address various complexities with building applications using LLMs, such as prompt crafting, response structuring, and model integration.&lt;/p&gt;

&lt;p&gt;In this blog post, we will go over some 101-level information on LLMs, and then walk through a demo on how to build a very simple LLM-based application to query for information. Let's dive into the world of Large Language Models and explore the ways in which LangChain is making them accessible and affordable for everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  First, what are Large Language Models?
&lt;/h2&gt;

&lt;p&gt;Large language models (LLMs) are a class of artificial intelligence (AI) algorithms that have been trained on massive amounts of text data, for example, text from books, articles, websites, or other written content. LLMs use this data to learn how to predict the next word or words that will occur in a sequence. Given some initial words in a sentence or paragraph, an LLM can generate a natural-sounding continuation of the text.&lt;/p&gt;
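Next-word prediction can be sketched with a toy bigram model in plain Python. This is a deliberately tiny illustration of the idea (counting which word follows which), nothing like how production LLMs are actually built or trained:

```python
from collections import Counter, defaultdict

# Count, for each word, which words follow it in a tiny "training corpus"
corpus = "the cat sat on the mat and the cat slept".split()
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Real LLMs replace the frequency table with a neural network over billions of parameters, but the task, predicting a plausible continuation, is the same.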

&lt;p&gt;There are several different types of LLMs, each with its own unique architecture and training methodologies. Some of the most popular types of LLMs include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Recurrent Neural Networks (RNNs): These neural networks are designed to handle sequential data, processing text one step at a time while carrying forward context from earlier in the sequence, such as long sentences or paragraphs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Transformers: The transformer architecture was introduced in 2017 by researchers at Google and has since become among the most popular LLM designs. Transformers are trained using a &lt;a href="https://en.wikipedia.org/wiki/Self-supervised_learning"&gt;self-supervised learning&lt;/a&gt; method that enables them to analyze and encode vast amounts of text data while still maintaining the context of the information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generative Pre-trained Transformer 3 (GPT-3): This is one of the most well-known LLMs, released by OpenAI in 2020 with 175 billion parameters. GPT-3 is a neural network-based language model that is incredibly powerful at generating and understanding human language.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Large language models have many applications, including natural language search engines, text-to-speech systems, chatbots and automated customer service, and language translation software. As LLMs continue to improve, they are becoming increasingly useful tools for businesses in a wide range of industries.&lt;/p&gt;

&lt;p&gt;LLMs are already starting to change the way we interact with written content, and as their capabilities continue to grow, businesses and other organizations are starting to realize the benefits of using these systems to streamline their operations. For those interested in AI’s future, large language models are an exciting area to follow, and LangChain is a great platform to learn more and build new AI skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LangChain?
&lt;/h2&gt;

&lt;p&gt;LangChain offers a comprehensive framework for creating applications that leverage the power of large language models. With LangChain, users are able to develop chatbots, personal assistants, and other AI-powered tools that can perform a range of tasks such as summarization, analysis, Q&amp;amp;A generation, code comprehension, API interaction, and more.&lt;/p&gt;

&lt;p&gt;The LangChain framework comes equipped with pre-designed chains for efficiently executing complex tasks. Users can customize existing chains or build entirely new ones by utilizing the vast library of components that LangChain offers.&lt;/p&gt;

&lt;p&gt;The LangChain framework has numerous &lt;a href="https://python.langchain.com/docs/integrations/components"&gt;components&lt;/a&gt; that can be used to develop custom applications. These components include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompts&lt;/strong&gt;: Prompt templates play a crucial role in streamlining user input to ensure effective communication with a natural language model. Their primary function is to format user input in a fashion that matches the expectations of the language model. These templates are crucial in providing context and specifying inputs to help the model perform the intended task efficiently. As an illustration, a chatbot prompt template may incorporate the user's name alongside their question to enhance contextual relevance and improve the performance of the language model.&lt;/p&gt;
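At its core this is string templating. A minimal pure-Python sketch of the concept (the SimplePromptTemplate class below mimics the behavior of a prompt template; it is not LangChain's actual implementation):

```python
class SimplePromptTemplate:
    """A minimal stand-in for a prompt template: a format string plus named inputs."""
    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs):
        # Fail loudly if the caller forgot a required variable
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise ValueError(f"Missing variables: {missing}")
        return self.template.format(**kwargs)

# A chatbot prompt that injects the user's name and question for context
chatbot_prompt = SimplePromptTemplate(
    template="You are a helpful assistant. {name} asks: {question}",
    input_variables=["name", "question"],
)
print(chatbot_prompt.format(name="Ada", question="What is LangChain?"))
```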

&lt;p&gt;&lt;strong&gt;Chains&lt;/strong&gt;: Chains are sequences of components wired together to accomplish specific tasks, processing input and generating output. Chains can be used as-is or customized to meet the specific requirements of a project. Custom chains can be designed by combining multiple components, creating powerful and flexible applications.&lt;/p&gt;
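Conceptually, a chain is components wired in sequence: format the input, call the model, post-process the output. A hedged pure-Python sketch of that idea (fake_llm below is a placeholder standing in for a real model call, not a LangChain API):

```python
def make_chain(*steps):
    """Compose steps so each one's output feeds the next one's input."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

def format_prompt(topic):
    return f"Write one sentence about {topic}."

def fake_llm(prompt):
    # Placeholder for a real LLM call
    return f"MODEL OUTPUT for: {prompt}"

def strip_whitespace(text):
    return text.strip()

chain = make_chain(format_prompt, fake_llm, strip_whitespace)
print(chain("cats"))  # MODEL OUTPUT for: Write one sentence about cats.
```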

&lt;p&gt;&lt;strong&gt;Agents&lt;/strong&gt;: Agents use a language model to decide which actions to take and in what order. Rather than following a fixed sequence of steps, an agent receives user input, consults the model, invokes tools or APIs based on the model's output, and feeds the results back until the task is complete. LangChain agents handle different input/output formats, language models, and APIs.&lt;/p&gt;
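The agent loop can be sketched abstractly: the model picks a tool, the tool runs, and the observation is fed back until the model produces a final answer. In this simplified illustration of the pattern (not LangChain's actual agent code), stub_model is a hard-coded stand-in for an LLM:

```python
def calculator(expression):
    # Toy tool; never eval untrusted input in real code
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def stub_model(question, observations):
    """Stand-in for an LLM: decide on a tool call or a final answer."""
    if not observations:
        return ("call", "calculator", "6 * 7")
    return ("finish", f"The answer is {observations[-1]}")

def run_agent(question):
    observations = []
    while True:
        decision = stub_model(question, observations)
        if decision[0] == "finish":
            return decision[1]
        _, tool_name, tool_input = decision
        observations.append(TOOLS[tool_name](tool_input))

print(run_agent("What is 6 times 7?"))  # The answer is 42
```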

&lt;p&gt;&lt;strong&gt;LLMs&lt;/strong&gt;: Large language models (LLMs) are at the core of LangChain's functionality. They analyze, summarize, generate Q&amp;amp;A, and understand code. LangChain enables users to bring their own models or use pre-trained models through its integrations, and this flexibility allows users to apply a model to virtually any type of data or use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Callbacks&lt;/strong&gt;: Callbacks enable developers to hook into the stages of a LangChain run, for example to log events, stream tokens as they are generated, or inspect and adjust intermediate inputs and outputs.&lt;/p&gt;
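The callback idea can be sketched as hooks that fire before and after each stage of a run. This is a generic illustration of the pattern, not LangChain's callback API (LoggingCallback and run_with_callbacks are hypothetical names):

```python
class LoggingCallback:
    """Collects events fired before and after each component runs."""
    def __init__(self):
        self.events = []

    def on_start(self, name, data):
        self.events.append(("start", name, data))

    def on_end(self, name, data):
        self.events.append(("end", name, data))

def run_with_callbacks(component_name, func, value, callback):
    # Fire hooks around the component so observers can log or react
    callback.on_start(component_name, value)
    result = func(value)
    callback.on_end(component_name, result)
    return result

cb = LoggingCallback()
result = run_with_callbacks("uppercaser", str.upper, "hello", cb)
print(result)     # HELLO
print(cb.events)  # start and end events carrying the input and output
```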

&lt;p&gt;&lt;strong&gt;Memory&lt;/strong&gt;: Memory allows users to store data across multiple sessions, enabling LangChain to provide a more personalized experience. Memory can be used to store user preferences, chat history, and interaction patterns, allowing LangChain to learn from previous interactions and adapt to the user's needs.&lt;/p&gt;
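Conversation memory largely amounts to persisting past turns and prepending them to the next prompt. A minimal sketch of that idea (ConversationMemory is an illustrative class, not one of LangChain's memory classes):

```python
class ConversationMemory:
    """Stores past turns so later prompts carry the conversation history."""
    def __init__(self):
        self.history = []

    def add_turn(self, user_message, ai_message):
        self.history.append((user_message, ai_message))

    def as_context(self):
        # Render the history as text to prepend to the next prompt
        lines = []
        for user_message, ai_message in self.history:
            lines.append(f"User: {user_message}")
            lines.append(f"AI: {ai_message}")
        return "\n".join(lines)

memory = ConversationMemory()
memory.add_turn("My name is Sam.", "Nice to meet you, Sam!")
prompt = memory.as_context() + "\nUser: What is my name?"
print(prompt)
```

Because the earlier turn is included in the prompt, the model has what it needs to answer "Sam".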

&lt;p&gt;&lt;strong&gt;Output Parsers&lt;/strong&gt;: Output parsers take responsibility for structuring the response that the LLM generates. A parser may eliminate unwanted or irrelevant content, add supplementary information, or alter the response format. As a significant component of the LangChain framework, output parsers play an essential role in keeping outcomes relevant, accurate, and well structured, making the LLM's responses easier to interpret and more practical to work with.&lt;/p&gt;
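As a sketch, an output parser turns a model's free-text reply into a structured Python value. The class below is illustrative of the pattern (it is not one of LangChain's parser classes): it both tells the model what format to use and parses the reply into a list:

```python
class CommaSeparatedListParser:
    """Parse 'a, b, c' style model output into a clean Python list."""

    def get_format_instructions(self):
        # Included in the prompt so the model replies in a parseable shape
        return "Respond with a comma-separated list and nothing else."

    def parse(self, text):
        # Strip stray whitespace and drop empty fragments
        return [item.strip() for item in text.split(",") if item.strip()]

parser = CommaSeparatedListParser()
raw_model_output = " tabby,  siamese ,persian "
print(parser.parse(raw_model_output))  # ['tabby', 'siamese', 'persian']
```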

&lt;p&gt;One of the most significant benefits that LangChain offers is the ability to access LLMs without expending high development costs or needing extensive technical expertise. LangChain’s suite of tools delivers valuable assistance for prompt crafting, response structuring, model integration, and other programming-related requirements. &lt;/p&gt;

&lt;p&gt;Now that we’re familiar with the concepts, let’s walk through a simple example to better understand how various components work together. We’ll build a service that checks whether Wikipedia contains cat recordings (who doesn’t love a cat video?):&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation and Setup
&lt;/h2&gt;

&lt;p&gt;Let’s first configure the environment. Assuming Python is already installed in your working environment, the next step is to install the LangChain library using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install langchain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we are going to use OpenAI’s language models, we will install the OpenAI SDK as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, as a critical step in connecting to OpenAI's API, it is necessary to set the OPENAI_API_KEY as an environment variable. First, sign in to your OpenAI account and navigate to account settings. Then, head over to the &lt;a href="https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key"&gt;View API Keys&lt;/a&gt; option and generate a secret key, which you will need to copy. Within your Python script, use the os module to access the dictionary of available environment variables and set the "OPENAI_API_KEY" to the secret API key you copied earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Get predictions&lt;/strong&gt;&lt;br&gt;
Import the LLM wrapper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.llms import OpenAI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then initialize the wrapper with any arguments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gen_llm = OpenAI(openai_api_key=”…”, verbose=False)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize the APIChain to query from the API endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chains.api import open_meteo_docs

chain_new = APIChain.from_llm_and_api_docs(gen_llm, “https://cat-fact.herokuapp.com/facts/", verbose=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we’re ready to ask a question to the LLM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chain_new.run(‘Does wikipedia have a cat recording?’)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get an output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Entering new APIChain chain…
https://cat-fact.herokuapp.com/facts/


[{“status”:{“verified”:true,”sentCount”:1},”_id”:”58e008800aac31001185ed07",”user”:”58e007480aac31001185ecef”,”text”:”Wikipedia has a recording of a cat meowing, because why not?”,”__v”:0,”source”:”user”,”updatedAt”:”2020–08–23T20:20:01.611Z”,”type”:”cat”,”createdAt”:”2018–03–06T21:20:03.505Z”,”deleted”:false,”used”:false},{“status”:{“verified”:true,”sentCount”:1},”_id”:”58e008630aac31001185ed01",”user”:”58e007480aac31001185ecef”,”text”:”When cats grimace, they are usually \”taste-scenting.\” They have an extra organ that, with some breathing control, allows the cats to taste-sense the air.”,”__v”:0,”source”:”user”,”updatedAt”:”2020–08–23T20:20:01.611Z”,”type”:”cat”,”createdAt”:”2018–02–07T21:20:02.903Z”,”deleted”:false,”used”:false},{“status”:{“verified”:true,”sentCount”:1},”_id”:”58e00a090aac31001185ed16",”user”:”58e007480aac31001185ecef”,”text”:”Cats make more than 100 different sounds whereas dogs make around 10.”,”__v”:0,”source”:”user”,”updatedAt”:”2020–08–23T20:20:01.611Z”,”type”:”cat”,”createdAt”:”2018–02–11T21:20:03.745Z”,”deleted”:false,”used”:false},{“status”:{“verified”:true,”sentCount”:1},”_id”:”58e009390aac31001185ed10",”user”:”58e007480aac31001185ecef”,”text”:”Most cats are lactose intolerant, and milk can cause painful stomach cramps and diarrhea. It’s best to forego the milk and just give your cat the standard: clean, cool drinking water.”,”__v”:0,”source”:”user”,”updatedAt”:”2020–08–23T20:20:01.611Z”,”type”:”cat”,”createdAt”:”2018–03–04T21:20:02.979Z”,”deleted”:false,”used”:false},{“status”:{“verified”:true,”sentCount”:1},”_id”:”58e008780aac31001185ed05",”user”:”58e007480aac31001185ecef”,”text”:”Owning a cat can reduce the risk of stroke and heart attack by a third.”,”__v”:0,”source”:”user”,”updatedAt”:”2020–08–23T20:20:01.611Z”,”type”:”cat”,”createdAt”:”2018–03–29T20:20:03.844Z”,”deleted”:false,”used”:false}]

&amp;gt; Finished chain.

Out[4]:
‘ Yes, Wikipedia has a recording of a cat meowing.’

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! Your basic setup of an LLM-based application using LangChain is now working successfully.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>GraphQL, gqlXPath, and Enhanced Transformer</title>
      <dc:creator>Yaron Karni</dc:creator>
      <pubDate>Sun, 07 Jan 2024 22:25:43 +0000</pubDate>
      <link>https://forem.com/intuitdev/extend-graphql-gxpath-and-enhanced-transformer-12bf</link>
      <guid>https://forem.com/intuitdev/extend-graphql-gxpath-and-enhanced-transformer-12bf</guid>
      <description>&lt;p&gt;I recently worked on a project that required manipulating GraphQL documents for queries and mutations. &lt;br&gt;
I discovered that certain techniques for working with JSON and XML could also be useful when working with GraphQL documents.&lt;/p&gt;

&lt;p&gt;GraphQL documents are explicitly used for sending queries and mutations to the GraphQL service rather than primarily for data storage and exchange like JSON.&lt;/p&gt;

&lt;p&gt;To modify a GraphQL document, the following steps need to be followed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Traverse through the GraphQL document.&lt;/li&gt;
&lt;li&gt;Identify the relevant GraphQL node or nodes.&lt;/li&gt;
&lt;li&gt;Manipulate the GraphQL node or nodes as required.&lt;/li&gt;
&lt;li&gt;Create a new GraphQL document with the manipulated node or nodes.&lt;/li&gt;
&lt;li&gt;Pass the new GraphQL document to the GraphQL server to execute the query or mutation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Let's exclude point (5) from our discussion, as it can be accomplished through multiple tools and code snippets.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;gqlex provides an advanced solution for the steps above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;gqlex offers a new way to select GraphQL nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This article will explain how to use gqlex to traverse a GraphQL document, select the relevant nodes, and manipulate the document according to your needs.&lt;br&gt;
We'll also play with the code and elaborate on relevant use cases.&lt;/p&gt;

&lt;p&gt;The library, named gqlex, is part of the Intuit open-source community: &lt;a href="https://github.com/intuit/gqlex"&gt;gqlex-opensource-intuit&lt;/a&gt;. It aims to provide a polyglot implementation of &lt;em&gt;path-selection&lt;/em&gt; and &lt;em&gt;transformation&lt;/em&gt; for GraphQL, with advanced techniques to optimize traversal over a GraphQL document. &lt;/p&gt;
&lt;h2&gt;
  
  
  Traverse over GraphQL document
&lt;/h2&gt;

&lt;p&gt;Every element in GraphQL, such as document, query, mutation, fragment, inline fragment, directive, etc., is derived from a node with unique attributes and behavior.&lt;/p&gt;

&lt;p&gt;gqlex uses the &lt;a href="https://en.wikipedia.org/wiki/Observer_pattern"&gt;observer pattern&lt;/a&gt;, in which the GraphQL document acts as the subject: the traversal walks the entire document and notifies registered observers with each node and its context.&lt;/p&gt;

&lt;p&gt;This observer pattern separates the traversal over the GraphQL document from the execution part that the observer consumer code would like to perform.&lt;/p&gt;
&lt;h2&gt;
  
  
  Selection of Node
&lt;/h2&gt;

&lt;p&gt;XML has &lt;a href="https://en.wikipedia.org/wiki/XPath"&gt;XPath&lt;/a&gt;.&lt;br&gt;
JSON has &lt;a href="https://github.com/json-path/JsonPath"&gt;JSONPath&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Now ... &lt;strong&gt;GraphQL document has gqlXPath&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;gqlXPath can be used to navigate through nodes in a GraphQL document.&lt;br&gt;
gqlXPath uses path expressions to select one or more nodes in the GraphQL document.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Behind the scenes, the gqlXPath utilizes the traversal module &lt;br&gt;
and selects the node according to the required expression.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  gqlXPath syntax
&lt;/h3&gt;

&lt;p&gt;gqlXPath uses path expressions to select nodes in a GraphQL document. A node is selected by following a path, or a series of steps. &lt;/p&gt;

&lt;p&gt;The most useful path expressions are listed below:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Expression&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;//&lt;/td&gt;
&lt;td&gt;Path prefix: Select all nodes from the root node and use a slash as a separator between path elements.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/&lt;/td&gt;
&lt;td&gt;Path prefix: Select the first node from the root node and use a slash as a separator between path elements.  The range is not supported when the first node is selected.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;{x:y}/&lt;/td&gt;
&lt;td&gt;Path prefix: Select the path node(s) between positions x and y (inclusive), using a slash as a separator between path elements. x and y are positive integers. If x and y are not set, all nodes are selected.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;{:y}//&lt;/td&gt;
&lt;td&gt;Path prefix: Select the path node(s) from the first result up to position y, using a slash as a separator between path elements. y is a positive integer. If y is not set, all nodes are selected.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;{x:}/&lt;/td&gt;
&lt;td&gt;Path prefix: Select the path node(s) from position x to the end of the results, using a slash as a separator between path elements. x is a positive integer. If x is not set, all nodes are selected.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;{:}//&lt;/td&gt;
&lt;td&gt;Path prefix: Select node(s) from the root node, using a slash as a separator between path elements.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;Relative-path &lt;strong&gt;"any"&lt;/strong&gt; selection, e.g. {x:y}//a/b/.../f. &lt;br&gt;  &lt;strong&gt;any&lt;/strong&gt; can be set anywhere in the gqlXPath except at the end, and you can use as many &lt;strong&gt;any&lt;/strong&gt; steps as you need. This helps when selecting a node in a large GraphQL structure, so you won't be required to spell out the entire node path.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
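The {x:y} range semantics in the table above (inclusive on both ends, with open-ended variants) can be illustrated with a small Python helper. select_range and the node list are purely hypothetical, written only to show the inclusive-bounds behavior; they are not part of the gqlex API:

```python
def select_range(path_nodes, x=None, y=None):
    """Inclusive range selection over path-node results, mirroring {x:y}."""
    if x is None and y is None:
        return list(path_nodes)            # {:} selects all nodes
    start = 0 if x is None else x          # {:y} starts at the first result
    if y is None:
        return list(path_nodes[start:])    # {x:} runs to the end of the results
    return list(path_nodes[start:y + 1])   # unlike a Python slice, y is included

nodes = ["hero", "friends", "name", "homeWorld", "species"]
print(select_range(nodes, 1, 3))  # ['friends', 'name', 'homeWorld']
print(select_range(nodes))        # all five nodes
```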

&lt;p&gt;The library also provides an equivalent, code-based API named SyntaxPath, which exposes gqlXPath expression capabilities programmatically and is mainly used by automation code.&lt;/p&gt;
&lt;h2&gt;
  
  
  Transformer
&lt;/h2&gt;

&lt;p&gt;The transformer provides the ability to transform (manipulate) a &lt;strong&gt;GraphQL&lt;/strong&gt; document simply.&lt;br&gt;
The transformer uses the capabilities provided by gqlex, such as gqlXPath, SyntaxPath, etc.&lt;/p&gt;

&lt;p&gt;The gqlex provides the following transform methods:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add Children - Add children node to selected &lt;strong&gt;GraphQL&lt;/strong&gt; node or nodes&lt;/li&gt;
&lt;li&gt;Add Sibling - Add sibling node to selected &lt;strong&gt;GraphQL&lt;/strong&gt; node or nodes&lt;/li&gt;
&lt;li&gt;Duplicate - Duplicate the selected node a given number of times; multiple nodes cannot be duplicated&lt;/li&gt;
&lt;li&gt;Remove Children  - Remove selected nodes or node&lt;/li&gt;
&lt;li&gt;Update name - Update selected node names or node names; for inline fragments, it will update the typeCondition name&lt;/li&gt;
&lt;li&gt;Update alias value - update field alias value and fragment spread alias value.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Code Play
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Let's start with traversal over the GraphQL document.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Start by creating an observer. Let's name it StringBuilderObserver.&lt;/p&gt;

&lt;p&gt;The observer will append the GraphQL node to some StringBuilder.&lt;/p&gt;

&lt;p&gt;This way, we achieve separation of concerns: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traverse over the GraphQL document nodes&lt;/li&gt;
&lt;li&gt;Append node values to a StringBuilder
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;StringBuilderObserver&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;TraversalObserver&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;StringBuilderElem&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;stringBuilderElems&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;isIgnoreCollection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;updateNodeEntry&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Node&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Node&lt;/span&gt; &lt;span class="n"&gt;parentNode&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Context&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ObserverAction&lt;/span&gt; &lt;span class="n"&gt;observerAction&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

        &lt;span class="nc"&gt;String&lt;/span&gt;  &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="nc"&gt;DocumentElementType&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getDocumentElementType&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;DOCUMENT:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Document"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;DIRECTIVE:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Name : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;Directive&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;FIELD:&lt;/span&gt;
                &lt;span class="nc"&gt;Field&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Name : {0} || Alias : {1} ||  Type : {2}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                        &lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAlias&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;OPERATION_DEFINITION:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Name : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                        &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;OperationDefinition&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getOperation&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;INLINE_FRAGMENT:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"InlineFragment"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                        &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;FRAGMENT_DEFINITION:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Name : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                        &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;FragmentDefinition&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;FRAGMENT_SPREAD:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;FragmentSpread&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;VARIABLE_DEFINITION:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Name : {0} || Default Value : {1} ||  Type : {2}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                        &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;VariableDefinition&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;VariableDefinition&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getDefaultValue&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;ARGUMENT:&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Name : {0} || Value : {1} ||  Type : {2}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                        &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;Argument&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;Argument&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getValue&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;ARGUMENTS:&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isIgnoreCollection&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Arguments"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;SELECTION_SET:&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isIgnoreCollection&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"SelectionSet"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;VARIABLE_DEFINITIONS:&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isIgnoreCollection&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"VariableDefinitions"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;DIRECTIVES:&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isIgnoreCollection&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Directives"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nl"&gt;DEFINITIONS:&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isIgnoreCollection&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
                &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageFormat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Node : {0} ||  Type : {1}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Definitions"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;documentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isNullOrEmpty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;)){&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="n"&gt;stringBuilderElems&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StringBuilderElem&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getLevel&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;

        &lt;span class="n"&gt;levels&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getLevel&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="c1"&gt;//spaces++;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;levels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getGqlBrowsedPrintedString&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;getStringAs&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getStringAs&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;isIdent&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="nc"&gt;StringBuilder&lt;/span&gt; &lt;span class="n"&gt;stringBuilder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StringBuilder&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;StringBuilderElem&lt;/span&gt; &lt;span class="n"&gt;stringBuilderElem&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;stringBuilderElems&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="o"&gt;++;&lt;/span&gt;
            &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;spaceStr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt; &lt;span class="n"&gt;isIdent&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;stringBuilderElem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getDepth&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;spaceStr&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="s"&gt;" "&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="n"&gt;stringBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;spaceStr&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;stringBuilderElem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"\n"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;stringBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt; &lt;span class="n"&gt;stringBuilderElem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;stringBuilderElems&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()?&lt;/span&gt; &lt;span class="s"&gt;" "&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;stringBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getGqlBrowsedString&lt;/span&gt;&lt;span class="o"&gt;(){&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;getStringAs&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;//    int spaces = 0;&lt;/span&gt;
    &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;updateNodeExit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt; &lt;span class="nc"&gt;Node&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="nc"&gt;Node&lt;/span&gt; &lt;span class="n"&gt;parentNode&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Context&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ObserverAction&lt;/span&gt; &lt;span class="n"&gt;observerAction&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;
&lt;span class="nc"&gt;GqlTraversal&lt;/span&gt; &lt;span class="n"&gt;traversal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GqlTraversal&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="nc"&gt;StringBuilderObserver&lt;/span&gt; &lt;span class="n"&gt;gqlStringBuilderObserver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StringBuilderObserver&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="n"&gt;traversal&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getGqlTraversalObservable&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;addObserver&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gqlStringBuilderObserver&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;traversal&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;traverse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt; &lt;span class="n"&gt;gqlStringBuilderObserver&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getGqlBrowsedString&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
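As an aside, the observer's output lines are built with plain java.text.MessageFormat patterns, so their shape can be checked in isolation. Below is a minimal, JDK-only sketch; the nodeMessage helper is hypothetical and simply mirrors the pattern the observer above uses for FRAGMENT_DEFINITION nodes.

```java
import java.text.MessageFormat;

// Hypothetical stand-alone helper mirroring the message pattern the
// StringBuilderObserver uses for FRAGMENT_DEFINITION nodes.
public class ObserverMessageDemo {
    static String nodeMessage(String name, String type) {
        // Same "Name : {0} ||  Type : {1}" pattern as in the observer
        return MessageFormat.format("Name : {0} ||  Type : {1}", name, type);
    }

    public static void main(String[] args) {
        System.out.println(nodeMessage("heroFields", "FRAGMENT_DEFINITION"));
    }
}
```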


&lt;p&gt;Now that we have seen how the traversal works, let's dig into some examples of &lt;strong&gt;selecting GraphQL nodes with gqlXPath&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;gqlXPath defines the expression language described above. In addition, the language includes two more terms to become familiar with: &lt;em&gt;element names&lt;/em&gt; and type &lt;em&gt;abbreviations&lt;/em&gt;.&lt;br&gt;
Why this matters: a GraphQL document is more than a JSON-like structure; it is also a DSL exposed by the GraphQL server and the GraphQL language.&lt;br&gt;
Types and element names help gqlXPath select the exact node or nodes.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Element Names&lt;/u&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;element_name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;type=&lt;/td&gt;
&lt;td&gt;Select element by type abbreviation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;name=&lt;/td&gt;
&lt;td&gt;Select element by name&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;alias=&lt;/td&gt;
&lt;td&gt;Select element by alias name&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;u&gt;Available types and abbreviations for the &lt;em&gt;type&lt;/em&gt; element name&lt;/u&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type abbreviation&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;doc&lt;/td&gt;
&lt;td&gt;DOCUMENT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;frag&lt;/td&gt;
&lt;td&gt;FRAGMENT_DEFINITION&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;direc&lt;/td&gt;
&lt;td&gt;DIRECTIVE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fld&lt;/td&gt;
&lt;td&gt;FIELD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mutation&lt;/td&gt;
&lt;td&gt;MUTATION_DEFINITION&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;query&lt;/td&gt;
&lt;td&gt;OPERATION_DEFINITION&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;infrag&lt;/td&gt;
&lt;td&gt;INLINE_FRAGMENT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;var&lt;/td&gt;
&lt;td&gt;VARIABLE_DEFINITION&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arg&lt;/td&gt;
&lt;td&gt;ARGUMENT&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
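
<p-placeholder></p-placeholder>Element names and type abbreviations combine in a path step as [element_name=value]. A few illustrative expressions (the node names here are hypothetical), following the same pattern as the examples further below:

```plaintext
//name[type=fld]       select every field named 'name'
/query[name=hero]      select the first query node named 'hero'
//episode[type=var]    select every variable named 'episode'
```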

&lt;blockquote&gt;
&lt;p&gt;Let's practice writing gqlXPath expressions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;GraphQL Document&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this document there are two nodes named 'Name', but of different types: one is an argument and the other is a field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
    Instrument(Name: "1234") {
        Reference {
            Name
            title
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select all nodes (double slash) named 'name' of the argument type: &lt;code&gt;//query/Instrument/name[type=arg]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the first node (single slash) named 'name' of the argument type: &lt;code&gt;/query/Instrument/name[type=arg]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;//query/.../name[type=arg]&lt;/code&gt; is the same as &lt;code&gt;//query/Instrument/name[type=arg]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the 'name' node, a field under Reference, which resides under Instrument, which resides under query: &lt;code&gt;//query/Instrument/Reference/name&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;//query/Instrument/.../name&lt;/code&gt; is the same as &lt;code&gt;//query/Instrument/Reference/name&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;//.../name&lt;/code&gt; is the same as &lt;code&gt;//query/Instrument/Reference/name&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GraphQL Document&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query Hero($episode: Episode, $withFriends: Boolean!) {
  hero(episode: $episode) {
    name
    friends @include(if: $withFriends) {
      name
    }
    friends @include(if: $withFriends) {
      name
    }
    friends @include(if: $withFriends) {
      name
    }
    friends @include(if: $withFriends) {
      name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select all query nodes named hero: &lt;code&gt;//query[name=hero]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the first query node (single slash) named hero: &lt;code&gt;/query[name=hero]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select all nodes named '&lt;em&gt;name&lt;/em&gt;' residing under the &lt;em&gt;friends&lt;/em&gt; node: &lt;code&gt;//query[name=hero]/hero/friends/name&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select all nodes named '&lt;em&gt;name&lt;/em&gt;' between index 0 and index 2 (inclusive), residing under the friends node: &lt;code&gt;{0:2}//query[name=hero]/hero/friends/name&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the node named '&lt;em&gt;name&lt;/em&gt;' residing under any node under the &lt;em&gt;hero&lt;/em&gt; node: &lt;code&gt;/query[name=hero]/hero/.../name&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;em&gt;$withFriends&lt;/em&gt; variable residing directly under the root query node named &lt;em&gt;hero&lt;/em&gt;: &lt;code&gt;//query[name=hero]/withFriends[type=var]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the first &lt;em&gt;include&lt;/em&gt; directive node residing under friends, under the root query node named &lt;em&gt;hero&lt;/em&gt;: &lt;code&gt;/query[name=hero]/hero/friends/include[type=direc]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the '&lt;em&gt;if&lt;/em&gt;' argument node residing under the &lt;em&gt;@include&lt;/em&gt; directive: &lt;code&gt;//.../include[type=direc]/if[type=arg]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the episode variable: &lt;code&gt;//.../episode[type=var]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How to use gqlXPath in code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;
&lt;span class="nc"&gt;SelectorFacade&lt;/span&gt; &lt;span class="n"&gt;selectorFacade&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SelectorFacade&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;queryString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Files&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;readString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toPath&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

&lt;span class="c1"&gt;// query {  Instrument(id: "1234") }&lt;/span&gt;
&lt;span class="nc"&gt;GqlNodeContext&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;selectorFacade&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queryString&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"//query/Instrument /   Reference  /"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use of SyntaxPath&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;queryString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Files&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;readString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toPath&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

&lt;span class="nc"&gt;SyntaxBuilder&lt;/span&gt; &lt;span class="n"&gt;gqlexBuilder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SyntaxBuilder&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="n"&gt;gqlexBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;appendQuery&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;gqlexBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;appendField&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Instrument"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;gqlexBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;appendField&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Reference"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// query {  Instrument(id: "1234") }&lt;/span&gt;
&lt;span class="nc"&gt;GqlNodeContext&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;selectorFacade&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queryString&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;gqlexBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, let's walk through an example that illustrates using gqlXPath node selection for GraphQL manipulation (also known as transform).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Here is a mutation GraphQL document:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mutation CreateReviewForEpisode($ep: Episode!, $review: ReviewInput!) {
  createReview(episode: $ep, review: $review) {
    stars
    commentary
    stars
    commentary
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following Java code selects and transforms the mutation document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;queryString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Files&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;readString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sourceFile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toPath&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

&lt;span class="nc"&gt;TransformBuilder&lt;/span&gt; &lt;span class="n"&gt;transformBuilder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TransformBuilder&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;transformBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addChildrenNode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"//mutation[name=CreateReviewForEpisode]/createReview/stars"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"child_of_stars"&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addSiblingNode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"//mutation[name=CreateReviewForEpisode]/createReview/stars"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"sibling_of_stars"&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;updateNodeName&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"//mutation[name=CreateReviewForEpisode]/createReview/stars"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"star_new_name"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;removeNode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"//mutation[name=CreateReviewForEpisode]/createReview/commentary"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;duplicateNode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"//mutation[name=CreateReviewForEpisode]/createReview/sibling_of_stars"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="nc"&gt;TransformExecutor&lt;/span&gt; &lt;span class="n"&gt;transformExecutor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TransformExecutor&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;transformBuilder&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="nc"&gt;RawPayload&lt;/span&gt; &lt;span class="n"&gt;rawPayload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RawPayload&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;rawPayload&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setQueryValue&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queryString&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="nc"&gt;RawPayload&lt;/span&gt; &lt;span class="n"&gt;executeRawPayload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;transformExecutor&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;execute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rawPayload&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;u&gt;Add a new child node&lt;/u&gt; named &lt;em&gt;child_of_stars&lt;/em&gt; under the node selected by gqlXPath: &lt;code&gt;//mutation[name=CreateReviewForEpisode]/createReview/stars&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Add a new sibling node&lt;/u&gt; named &lt;em&gt;sibling_of_stars&lt;/em&gt; next to the node selected by gqlXPath: &lt;code&gt;//mutation[name=CreateReviewForEpisode]/createReview/stars&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Rename&lt;/u&gt; the node selected by gqlXPath to &lt;em&gt;star_new_name&lt;/em&gt;: &lt;br&gt;
&lt;code&gt;//mutation[name=CreateReviewForEpisode]/createReview/stars&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Remove the node selected by gqlXPath&lt;/u&gt;: &lt;code&gt;//mutation[name=CreateReviewForEpisode]/createReview/commentary&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Duplicate the node selected by gqlXPath 10 times&lt;/u&gt;: &lt;code&gt;//mutation[name=CreateReviewForEpisode]/createReview/sibling_of_stars&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Use TransformBuilder to build the transform plan: each step pairs a gqlXPath node selection with the command to execute.&lt;br&gt;
The transform plan is then loaded into the TransformExecutor together with the GraphQL payload.&lt;/p&gt;

&lt;p&gt;Executing the plan produces a new GraphQL document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mutation CreateReviewForEpisode($ep: Episode!, $review: ReviewInput!) {
  createReview(episode: $ep, review: $review) {
    sibling_of_stars
    star_new_name {
      child_of_stars
    }
    star_new_name {
      child_of_stars
    }
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
    sibling_of_stars
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last example demonstrates manipulating a directive, changing &lt;em&gt;include&lt;/em&gt; to &lt;em&gt;exclude&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Here is the GraphQL query document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query Hero($episode: Episode, $withFriends: Boolean!) {
  hero(episode: $episode) {
    name
    friends @include(if: $withFriends) {
      name
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;        &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;queryString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Files&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;readString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toPath&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

        &lt;span class="c1"&gt;// query {  Instrument(id: "1234") }&lt;/span&gt;
        &lt;span class="nc"&gt;GqlNodeContext&lt;/span&gt; &lt;span class="n"&gt;includeDirectiveNode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;selectorFacade&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;selectSingle&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queryString&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"//query[name=hero]/hero/friends/include[type=direc]"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="n"&gt;assertNotNull&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;includeDirectiveNode&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="n"&gt;assertTrue&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;includeDirectiveNode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getType&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;equals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DocumentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;DIRECTIVE&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
        &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"\nBefore manipulation:\n\n"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;queryString&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// Node newNode = new Field("new_name");&lt;/span&gt;
        &lt;span class="nc"&gt;Node&lt;/span&gt; &lt;span class="n"&gt;excludeDirectiveNode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TransformUtils&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;updateNodeName&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;includeDirectiveNode&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"exclude"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;newGqlValue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gqlexWriter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;writeToString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;excludeDirectiveNode&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"\nAfter manipulation:\n\n"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;newGqlValue&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="nc"&gt;GqlNodeContext&lt;/span&gt; &lt;span class="n"&gt;excludeUpdateNode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;selectorFacade&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;selectSingle&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newGqlValue&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"//query[name=hero]/hero/friends/exclude[type=direc]"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="n"&gt;assertTrue&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;excludeUpdateNode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getType&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;equals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DocumentElementType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;DIRECTIVE&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
query Hero($episode: Episode, $withFriends: Boolean!) {
  hero(episode: $episode) {
    name
    friends @exclude(if: $withFriends) {
      name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  gqlex Use Cases
&lt;/h2&gt;

&lt;p&gt;gqlex can be used in application code when a developer needs to enrich a GraphQL document with additional fields while querying the server for data, or when manipulating data on the server side, so the code can articulate exactly which fields to change.&lt;br&gt;
gqlex can also be used during integration or E2E testing to generate synthetic GraphQL data quickly and in a managed way.&lt;/p&gt;

&lt;p&gt;The following use cases are worth elaborating in more detail:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Synthetic GraphQL document creation&lt;br&gt;
Testing, especially integration testing, demands the ability to query the GraphQL server with many different queries and mutations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Of course, a developer can maintain large lists of example files to send to the server, or find and replace the &lt;br&gt;
relevant strings in a GraphQL document, but that is a cumbersome solution and hard to maintain.&lt;/p&gt;

&lt;p&gt;gqlex lets you manipulate the query or mutation with ease: list your &lt;br&gt;
plan of action in a configuration (gqlXPath, transform commands, and the arguments to execute), then execute the plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Configuration
  Plan
    steps
      step 1
         gqlXPath (String)
         transform_commands
           transform_command
               command
               argument_object_definition
      step n
         gqlXPath (String)
         transform_commands
           transform_command
               command
               argument_object_definition
  origin_file_to_manipulate

Configuration config = read_configuration_plan(plan_file);

config_verification()

build_plan -&amp;gt; ... use of TransformBuilder

new_graphql_document = execute_plan -&amp;gt; ... use of TransformExecutor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
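The plan-driven flow above can be sketched as a thin loop in Python. This is only an illustration of the shape of such a runner: the plan keys mirror the pseudocode above, and `apply_command` stands in for gqlex's actual Java TransformBuilder/TransformExecutor, which this is not.

```python
# Hypothetical plan structure mirroring the configuration pseudocode above.
plan = {
    "origin_file_to_manipulate": "query.graphql",
    "steps": [
        {"gqlXPath": "//query[name=hero]/hero",
         "transform_commands": [{"command": "addChildNode", "argument": "new_field"}]},
        {"gqlXPath": "//query[name=hero]/hero/friends",
         "transform_commands": [{"command": "removeNode", "argument": None}]},
    ],
}

def execute_plan(plan, apply_command):
    """Apply every step's commands to the node its gqlXPath selects."""
    results = []
    for step in plan["steps"]:
        for cmd in step["transform_commands"]:
            results.append(apply_command(step["gqlXPath"], cmd["command"], cmd["argument"]))
    return results

# Record the calls instead of transforming a real document.
log = execute_plan(plan, lambda path, command, arg: (path, command, arg))
print(len(log))  # one call per (gqlXPath, command) pair
```

Keeping the plan as data (rather than hard-coded calls) is what makes the configuration-driven testing described above easy to maintain.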



&lt;p&gt;gqlex gives you versatility and the ability to produce synthetic GraphQL data and verify the integrity of a GraphQL service.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Articulate GraphQL document on-the-fly&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sometimes, you may need to dynamically build a query or mutation based on business logic or configuration and send it to the GraphQL server. &lt;br&gt;
The gqlex library can assist with this.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;gqlex&lt;/strong&gt; library only allows manipulation of the GraphQL skeleton file, not its creation.&lt;/p&gt;

&lt;p&gt;The developer can create the skeleton GraphQL file and store it in the resources folder.&lt;br&gt;
&lt;em&gt;A skeleton file is a file with the document structure but without fields.&lt;/em&gt;&lt;br&gt;
With the help of syntaxPath, gqlex can assemble the gqlXPath and set the plan strategy on the fly, &lt;br&gt;
then use TransformBuilder to build the plan and TransformExecutor to run it.&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>opensource</category>
      <category>intuit</category>
      <category>xpath</category>
    </item>
    <item>
      <title>Building AI-powered search using LangChain and Milvus</title>
      <dc:creator>Ayush</dc:creator>
      <pubDate>Mon, 11 Dec 2023 06:11:32 +0000</pubDate>
      <link>https://forem.com/intuitdev/building-ai-powered-search-using-langchain-and-milvus-ipc</link>
      <guid>https://forem.com/intuitdev/building-ai-powered-search-using-langchain-and-milvus-ipc</guid>
      <description>&lt;p&gt;This blog is co-authored by &lt;strong&gt;Ayush Pandey&lt;/strong&gt;, Senior Software Engineer and &lt;strong&gt;Amit Kaushal&lt;/strong&gt;, Software Manager at Intuit.&lt;/p&gt;

&lt;p&gt;Artificial Intelligence (AI) has revolutionized the way we interact with technology, and one of the most significant applications of AI is in search. With the help of AI, search tools can surface more accurate and relevant results to users. In this blog, we will discuss how to build an AI-powered search engine using LangChain and Milvus.&lt;/p&gt;

&lt;p&gt;Before we dive into the demo, let’s talk through some of the concepts and tools involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LangChain?
&lt;/h2&gt;

&lt;p&gt;LangChain is a framework for developing applications powered by language models. Use cases include applications for document question answering, building conversational interfaces for database interactions, and much more. We believe that the most powerful and differentiated applications will not only leverage a language model, but will also be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data-aware&lt;/strong&gt;: connect a language model to other sources of data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agentic&lt;/strong&gt;: allow a language model to interact with its environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LangChain provides the modular components used to build such applications, which can be used standalone or combined for more complexity.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Milvus?
&lt;/h2&gt;

&lt;p&gt;Milvus was created in 2019 with a singular goal: to store, index, and manage massive embedding vectors generated by deep neural networks and other machine learning (ML) models.&lt;/p&gt;

&lt;p&gt;As a database specifically designed to handle queries over input vectors, it is capable of indexing vectors on a trillion scale. Unlike existing relational databases, which mainly deal with structured data following a pre-defined pattern, Milvus is designed from the bottom-up to handle embedding vectors converted from unstructured data.&lt;/p&gt;
&lt;h2&gt;
  
  
  Vector embeddings: why the hype?
&lt;/h2&gt;

&lt;p&gt;Vector embeddings are a powerful tool for developers working with natural language processing (NLP) and ML applications. Vector embeddings are a way of representing words or phrases as vectors in a high-dimensional space, where each dimension represents a different feature of the word or phrase. This allows developers to perform complex operations on text data, such as sentiment analysis, text classification, and machine translation.&lt;/p&gt;

&lt;p&gt;Let’s go over a simple explainer of semantic features: scientists sort the different types of animals in the world into categories based on certain characteristics. For example, birds are warm-blooded vertebrates adapted for flight. Based on features like these, we can create word coordinates that represent animals by their Type and Domestication score. These scores are called “semantic features,” and they capture part of the meaning of each word. Now that the words have corresponding numerical values, we can plot them as points on a graph, where the x-axis represents Type and the y-axis represents Domestication score. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6zwt4m4dsz9ehel6qbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6zwt4m4dsz9ehel6qbt.png" alt="Word Coordinates for Animal types"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzipiud499cur3wi5ilee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzipiud499cur3wi5ilee.png" alt="Word Coordinates for Animal types and plots in graph"&gt;&lt;/a&gt;&lt;br&gt;
We can add new words to the plot based on their meanings. For example, where should the words "Lions" and "Parrots" go? How about "Whales"? Or "Snakes"?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvp8i1jy6jdl1xd9e0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvp8i1jy6jdl1xd9e0d.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are also several libraries and tools available for developers who want to work with vector embeddings. Some popular libraries include &lt;a href="https://pypi.org/project/gensim/" rel="noopener noreferrer"&gt;Gensim&lt;/a&gt;, &lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt;, and &lt;a href="https://pytorch.org/" rel="noopener noreferrer"&gt;PyTorch&lt;/a&gt;. These libraries provide pre-trained models for &lt;a href="https://www.tensorflow.org/text/tutorials/word2vec" rel="noopener noreferrer"&gt;word2vec&lt;/a&gt; and &lt;a href="https://nlp.stanford.edu/projects/glove/" rel="noopener noreferrer"&gt;GloVe&lt;/a&gt;, as well as tools for training custom models on specific datasets.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo: Using similarity search to ask questions of a Wikipedia article
&lt;/h2&gt;

&lt;p&gt;First, let’s go through some prerequisites.&lt;/p&gt;

&lt;p&gt;Install LangChain and Milvus on your local system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;! python -m pip install --upgrade pymilvus langchain openai tiktoken
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, import required modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
from langchain.document_loaders import WebBaseLoader
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, set your &lt;a href="https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key" rel="noopener noreferrer"&gt;OpenAI API key&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import getpass

os.environ['OPENAI_API_KEY'] = "your-openai-api-key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, load a Wikipedia document (here we’re grabbing the article for Intuit QuickBooks) using the WebBaseLoader client, and split it into chunks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;loader = WebBaseLoader([
   "https://en.wikipedia.org/wiki/QuickBooks",
])

docs = loader.load()
# Split the documents into smaller chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(docs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
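Conceptually, the splitter produces overlapping windows of text. A minimal pure-Python sketch of fixed-size chunking with overlap (a simplification: LangChain's CharacterTextSplitter actually splits on separators such as newlines before merging chunks):

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 0) -> list[str]:
    """Slide a fixed-size window over the text, stepping back by the overlap."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 2500 characters with a 100-character overlap between neighbouring chunks
chunks = split_text("a" * 2500, chunk_size=1000, chunk_overlap=100)
print(len(chunks), [len(c) for c in chunks])  # 3 [1000, 1000, 700]
```

Smaller chunks give more precise retrieval; the overlap keeps sentences that straddle a boundary from being lost.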



&lt;p&gt;Afterwards, use OpenAIEmbeddings and store everything in a Milvus vector database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
   docs,
   embeddings,
   connection_args={"host": "HostName", "port": "19530"},
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
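Before querying, it helps to see what "similarity" means here: the query is embedded and compared against the stored chunk vectors, typically by cosine similarity (Milvus also supports L2 and inner-product metrics). A toy sketch with made-up three-dimensional vectors; real embeddings are much higher-dimensional:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Illustrative embeddings, not real OpenAI vectors.
query_vec = [0.9, 0.1, 0.0]
chunk_vecs = {
    "accounting software chunk": [0.8, 0.2, 0.1],
    "unrelated weather chunk":   [0.0, 0.1, 0.9],
}
best = max(chunk_vecs, key=lambda name: cosine_similarity(query_vec, chunk_vecs[name]))
print(best)  # the accounting chunk points in nearly the same direction as the query
```

The vector database's job is to do this ranking efficiently over millions of stored vectors instead of a two-entry dict.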



&lt;p&gt;It’s time to try semantic searching! Let’s ask a question using LangChain and Milvus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query = "What is quickbooks?"
docs = vector_db.similarity_search(query)
docs[0].page_content
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;'Retrieved from "https://en.wikipedia.org/w/index.php?title=QuickBooks&amp;amp;oldid=1155606425"\nCategories: Accounting softwareIntuit softwareHidden categories: CS1 maint: url-statusArticles with short descriptionShort description is different from WikidataUse mdy dates from March 2019Articles containing potentially dated statements from May 2014All articles containing potentially dated statements\n\n\n\n\n\n\n This page was last edited on 18 May 2023, at 23:04\xa0(UTC).\nText is available under the Creative Commons Attribution-ShareAlike License 3.0;\nadditional terms may apply.  By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.\n\n\nPrivacy policy\nAbout Wikipedia\nDisclaimers\nContact Wikipedia\nMobile view\nDevelopers\nStatistics\nCookie statement\n\n\n\n\n\n\n\n\n\n\n\nToggle limited content width'&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
The results above are decent, but need quite a lot of formatting help. Let’s try using load_qa_with_sources_chain to ask the question instead, for a cleaner output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
query = "What is quickbooks?"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;code&gt;{'intermediate_steps': [' No relevant text.',&lt;br&gt;
 ' QuickBooks is an accounting software package developed and marketed by Intuit. First introduced in 1983, QuickBooks products are geared mainly toward small and medium-sized businesses and offer on-premises accounting applications as well as cloud-based versions that accept business payments, manage and pay bills, and payroll functions.',&lt;br&gt;
 " Intuit also offers a cloud service called QuickBooks Online (QBO). The user pays a monthly subscription fee rather than an upfront fee and accesses the software exclusively through a secure logon via a Web browser. QuickBooks Online is supported on Chrome, Firefox, Internet Explorer 10, Safari 6.1, and also accessible via Chrome on Android and Safari on iOS 7. Quickbooks Online offers integration with other third-party software and financial services, such as banks, payroll companies, and expense management software. QuickBooks desktop also supports a migration feature where customers can migrate their desktop data from a pro or prem SKU's to Quickbooks Online.",&lt;br&gt;
 ' QuickBooks - Wikipedia \nInitial release, Subsequent releases, QuickBooks Online, QuickBooks Point of Sale, Add-on programs.'],&lt;br&gt;
'output_text': ' QuickBooks is an accounting software package developed and marketed by Intuit. It offers on-premises accounting applications as well as cloud-based versions that accept business payments, manage and pay bills, and payroll functions. QuickBooks Online is a cloud service that offers integration with other third-party software and financial services.\nSOURCES: https://en.wikipedia.org/wiki/QuickBooks'}&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Other search-related use cases using LangChain and Milvus:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;E-commerce search engine: the language model can be trained on product descriptions and reviews, and the data can be converted into vectors using Milvus. The vectors can then be indexed in Milvus, and a search interface can be built to retrieve relevant products based on user queries.&lt;/li&gt;
&lt;li&gt;Image search engine: the language model can be trained on image captions and tags, and the images can be converted into vectors using Milvus. The vectors can then be indexed in Milvus, and a search interface can be built to retrieve relevant images based on user queries.&lt;/li&gt;
&lt;li&gt;Video search engine: the language model can be trained on video titles and descriptions, and the videos can be converted into vectors using Milvus. The vectors can then be indexed in Milvus, and a search interface can be built to retrieve relevant videos based on user queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By following the simple steps we’ve outlined here, developers can use LangChain and Milvus to build search engines for various use cases ranging from a simple document search to applications in e-commerce, image, and video search. We hope this was a helpful starter guide, please leave a comment if you have any further questions!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;References and further reading:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/get_started/introduction.html" rel="noopener noreferrer"&gt;Langchain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://milvus.io/" rel="noopener noreferrer"&gt;Milvus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key" rel="noopener noreferrer"&gt;How to find OpenAI key&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/QuickBooks" rel="noopener noreferrer"&gt;QuickBooks Wikipedia page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cs.cmu.edu/~dst/WordEmbeddingDemo/tutorial.html" rel="noopener noreferrer"&gt;Vector Embedding&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Hacktoberfest 2023 @ Intuit: Maintainer Spotlight</title>
      <dc:creator>Lucy</dc:creator>
      <pubDate>Wed, 25 Oct 2023 17:15:30 +0000</pubDate>
      <link>https://forem.com/intuitdev/hacktoberfest-2023-intuit-maintainer-spotlight-4jad</link>
      <guid>https://forem.com/intuitdev/hacktoberfest-2023-intuit-maintainer-spotlight-4jad</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AHw6lMz1UQcBLcjroCR-24g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AHw6lMz1UQcBLcjroCR-24g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As &lt;a href="https://hacktoberfest.com/" rel="noopener noreferrer"&gt;Hacktoberfest 2023&lt;/a&gt; comes to an end, we at Intuit are excited to send rewards to our internal and external contributors for their amazing contributions. But before we do, we want to take a moment to recognize our maintainers!&lt;/p&gt;

&lt;p&gt;Intuit has been participating in Hacktoberfest &lt;a href="https://medium.com/intuit-engineering/the-moonshot-hacktoberfest-launches-at-intuit-d9805f342d6" rel="noopener noreferrer"&gt;since 2019&lt;/a&gt; by offering special incentives to employees for contributing to open source projects, and this year we took it to the next level by sending rewards (cool t-shirts!) to outside contributors to Intuit libraries as well. It’s important to remember that these contributions are only possible thanks to the hard work of our maintainer teams reviewing pull requests and keeping issues up to date.&lt;/p&gt;

&lt;p&gt;We wish we could highlight every single one of our 130+ open source projects here, but here’s a short list of featured maintainer teams and their projects:&lt;/p&gt;

&lt;h3&gt;
  
  
  Graph Quilt
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2ATj0Mt2pX_qe3uYmy" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2ATj0Mt2pX_qe3uYmy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/graph-quilt" rel="noopener noreferrer"&gt;graph-quilt&lt;/a&gt; is an open source Java library that provides recursive schema stitching and Apollo Federation style schema composition for GraphQL APIs. The project includes a modular set of libraries with features including a Representational State Transfer (REST) adapter, GraphQL authorization, and a reference implementation of a graph-quilt gateway. There are even more features coming soon, so keep an eye out for updates!&lt;/p&gt;

&lt;p&gt;Maintainers Ashpak Shaikh, Carlo Aureus, and Kyle Moore released a video demo and walkthrough on the Intuit Developers YouTube channel. Check it out here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/gRwSAWgPdvg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Ashpak also recently spoke in-depth about graph-quilt at GraphQLConf 2023. &lt;a href="https://graphql.org/conf/sessions/17f150667d13a57f28bae524443f4c60/" rel="noopener noreferrer"&gt;See the recording of his talk here!&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Numaproj
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F961%2F0%2ATY2y3ymVZyVbX-_0" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F961%2F0%2ATY2y3ymVZyVbX-_0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/numaproj" rel="noopener noreferrer"&gt;numaproj&lt;/a&gt; is a collection of Kubernetes-native tools for doing real-time operation data analytics. Currently it includes &lt;a href="https://github.com/numaproj/numaflow" rel="noopener noreferrer"&gt;numaflow&lt;/a&gt;, a massively parallel, real-time data and stream processing engine, and &lt;a href="http://numalogic" rel="noopener noreferrer"&gt;numalogic&lt;/a&gt;, ML models and tools for real-time operational data analytics.&lt;/p&gt;

&lt;p&gt;Maintainers Derek Wang and Vigith Maurice recently gave a talk at &lt;a href="https://2023.allthingsopen.org/sessions/streaming-aiops-on-kubernetes-at-scale/" rel="noopener noreferrer"&gt;All Things Open&lt;/a&gt; walking through how to build a large scale AIOps platform with open source technologies, including numaproj tools. The recording of their talk will be live soon on the All Things Open &lt;a href="https://www.youtube.com/c/allthingsopen" rel="noopener noreferrer"&gt;YouTube channel&lt;/a&gt;, so stay tuned! Maintainer Sri Harsha Yayi also joined Derek for a presentation at the &lt;a href="https://www.linkedin.com/events/intuithacktoberfestmeetup-sandi7112546014225432576/comments/" rel="noopener noreferrer"&gt;Hacktoberfest Happy Hour&lt;/a&gt; we hosted at Intuit’s Mountain View and San Diego campuses.&lt;/p&gt;

&lt;p&gt;numaproj is still a young and growing project, and if you’d like to follow along on the journey, consider joining the team &lt;a href="https://join.slack.com/t/numaproj/shared_invite/zt-19svuv47m-YKHhsQ~~KK9mBv1E7pNzfg" rel="noopener noreferrer"&gt;on Slack&lt;/a&gt;, or follow the &lt;a href="https://blog.numaproj.io/" rel="noopener noreferrer"&gt;numaproj blog&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Player
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F497%2F0%2ALSsB5kq8cjSc6EvM" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F497%2F0%2ALSsB5kq8cjSc6EvM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/player-ui/player" rel="noopener noreferrer"&gt;Player&lt;/a&gt; is a framework for building cross-platform dynamic experiences. The core engine is authored in TypeScript with specific adaptors to natively render on iOS, Android, and React. Start by supplying some semantic JSON content, where you can describe views, your data, validation rules, and much more. Add your own asset library to handle the rendering, and voilà, you have a full dynamic user experience.&lt;/p&gt;

&lt;p&gt;Player’s &lt;a href="https://github.com/player-ui/player#contributing" rel="noopener noreferrer"&gt;maintainer team&lt;/a&gt; is one of our most active here at Intuit, and they’ve been an incredible source of open source energy, year-round. To see the library in action, check out our YouTube video featuring maintainer Adam Dierkens here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/cPMkqyMEUHI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Cello
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AddB6YmdsX4Fc1ev5" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AddB6YmdsX4Fc1ev5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/cello-proj/cello" rel="noopener noreferrer"&gt;Cello&lt;/a&gt; is an engine for Cloud deployments: infrastructure as code (IaC) with GitOps, with isolation from your Cloud provider, all Cloud-agnostic. It fits into any CI/CD system — Jenkins, GitHub actions, you name it. We developed Cello to help us manage our complex set of various IaCs.&lt;/p&gt;

&lt;p&gt;Maintainer Jerome Kuptz recently gave a lightning talk demonstrating Cello’s features for a round of Hacktoberfest lightning talks we did internally at Intuit. We’ll work on getting that talk shared soon on the Intuit Developers YouTube channel, but please check out Cello’s &lt;a href="https://cello-proj.github.io/cello/quickstart/" rel="noopener noreferrer"&gt;Quick Start guide&lt;/a&gt; in the meantime!&lt;/p&gt;

&lt;h3&gt;
  
  
  Trapheus (and TrapheusAI)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Aoa_VcFYzm6sPqFxp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Aoa_VcFYzm6sPqFxp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/intuit/Trapheus" rel="noopener noreferrer"&gt;Trapheus&lt;/a&gt; is a tool for restoring relational database service (RDS) instances in AWS without worrying about client downtime or configuration retention. It supports individual RDS snapshot as well as cluster snapshot restore operations. Modeled as a state machine, with the help of AWS step functions, Trapheus restores the RDS instance in a much faster way than the usual SQL dump preserving the same instance endpoint and configurations as before.&lt;/p&gt;

&lt;p&gt;Maintainer Rohit Kumar recently published a &lt;a href="https://medium.com/@rite2rohit88/celebrating-three-years-of-trapheus-with-hacktoberfest-43a4043dc60f" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; on Intuit’s Engineering Blog celebrating three years since Trapheus’ first release during Hacktoberfest 2020 and announcing the launch of &lt;a href="https://github.com/intuit/Trapheus/tree/master/labs/TrapheusAI#-demo" rel="noopener noreferrer"&gt;TrapheusAI&lt;/a&gt;, a next-generation data search and analysis assistant that overcomes the limitations of traditional keyword search by enabling powerful data analysis through natural language. See the demo here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/zY5BsxMfHQM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  argoproj
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A9ltrYELq_E7N53-0" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A9ltrYELq_E7N53-0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How could we do an Intuit open source software maintainer feature without Argo? &lt;a href="https://github.com/argoproj" rel="noopener noreferrer"&gt;Argo Project&lt;/a&gt; is one of our largest open source projects, with a vibrant contributor, maintainer, and user community. Argo is a collection of open source tools for Kubernetes to run workflows, manage clusters, and do GitOps right. The project includes &lt;a href="https://github.com/argoproj/argo-workflows" rel="noopener noreferrer"&gt;argo-workflows&lt;/a&gt;, &lt;a href="https://github.com/argoproj/argo-rollouts" rel="noopener noreferrer"&gt;argo-rollouts&lt;/a&gt;, &lt;a href="https://github.com/argoproj/argo-cd" rel="noopener noreferrer"&gt;argo-cd&lt;/a&gt;, and more.&lt;/p&gt;

&lt;p&gt;Maintainer Michael Crenshaw gave a talk on Argo CD for the Hacktoberfest lightning talks mentioned earlier, which we’ll work on publishing soon. The Argo community is also hosting an &lt;a href="https://www.meetup.com/out-of-the-box-ospo-developers/events/296853074/" rel="noopener noreferrer"&gt;in-person Kubernetes meetup&lt;/a&gt; at the Intuit Mountain View campus.&lt;/p&gt;

&lt;p&gt;—&lt;/p&gt;

&lt;p&gt;This was only a tiny fraction of all the open source projects we’re proud to work on here at Intuit. Hacktoberfest may be drawing to a close, but the open source work continues through the rest of the year. Whether you managed to get a contribution done this October or not, we hope to see you out there in the open source community.&lt;/p&gt;

&lt;p&gt;A huge, endless thank you to all of our maintainers and contributors for your hard work and dedication to the open source community. There are still a few days left in October, so keep those contributions coming (&lt;a href="https://github.com/intuit/" rel="noopener noreferrer"&gt;see Intuit’s Hacktoberfest incentives here&lt;/a&gt;), and Happy Hacktoberfest!&lt;/p&gt;




</description>
      <category>opensource</category>
      <category>argo</category>
      <category>graphql</category>
      <category>hacktoberfest</category>
    </item>
  </channel>
</rss>
