<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Cloudsmith</title>
    <description>The latest articles on Forem by Cloudsmith (@cloudsmith).</description>
    <link>https://forem.com/cloudsmith</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2058%2Ff7eed96a-b9df-4f59-a43c-b823623bed64.png</url>
      <title>Forem: Cloudsmith</title>
      <link>https://forem.com/cloudsmith</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cloudsmith"/>
    <language>en</language>
    <item>
      <title>13 KubeCon Europe 2026 sessions not to miss</title>
      <dc:creator>Nigel Douglas</dc:creator>
      <pubDate>Mon, 23 Feb 2026 15:53:00 +0000</pubDate>
      <link>https://forem.com/cloudsmith/13-kubecon-europe-2026-sessions-not-to-miss-pdj</link>
      <guid>https://forem.com/cloudsmith/13-kubecon-europe-2026-sessions-not-to-miss-pdj</guid>
      <description>&lt;p&gt;Thousands of platform engineers and security experts are about to gather for KubeCon + CloudNativeCon Europe 2026, making it the year’s definitive cloud-native event. To help you cut through the noise of a packed agenda, we’ve identified the standout sessions you can’t afford to miss. Whether you're looking for high-level strategy or interactive workshops, here is our curated roadmap to ensure you get the most out of your time on the ground.&lt;/p&gt;

&lt;p&gt;📍 &lt;strong&gt;RAI Amsterdam&lt;/strong&gt; (Europaplein 24, 1078 GZ Amsterdam, Netherlands)&lt;br&gt;
🗓️ March 24 - 26, 2026&lt;/p&gt;

&lt;h2&gt;
  
  
  1. SBOOM: Making SBOMs Play Together
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 26, 14:30 - 15:00 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Hall 8 | Room F&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this session, research engineers &lt;strong&gt;Jacopo Bufalino&lt;/strong&gt; (CNAM) and &lt;strong&gt;Agathe Blaise&lt;/strong&gt; (Thales SIX GTS France) will dive into the messy reality of the Software Bill of Materials (SBOM). They plan to pull back the curtain on why current open-source and cloud-based tools (despite all their promises) frequently generate conflicting package lists and inconsistent vulnerability reports when tasked with scanning complex container images.&lt;/p&gt;

&lt;p&gt;This talk is particularly electrifying for the security community because it tackles the compliance anxiety triggered by the EU’s Cyber Resilience Act (CRA). As the CRA shifts SBOMs from a nice-to-have transparency initiative to a legal necessity for software lifecycles, the industry is waking up to a major problem: inconsistent data. By dissecting the technical roots of these discrepancies, Bufalino and Blaise aren't just pointing out flaws; they will also provide a roadmap for developers to ensure their tooling is actually CRA-compliant. For anyone navigating the shift toward a transparent and trustworthy software supply chain, this session offers the technical clarity needed to turn these metaphorical bombs into secure builds.&lt;/p&gt;
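
&lt;p&gt;You don’t have to wait for the talk to see the problem in action. As a quick experiment (assuming you have Syft, Trivy, and jq installed), generate an SBOM for the same container image with two different tools and compare what they report – the counts rarely agree:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Generate SBOMs for the same image with two popular scanners
syft alpine:3.19 -o cyclonedx-json=sbom-syft.json
trivy image --format cyclonedx --output sbom-trivy.json alpine:3.19

# Compare the number of components each tool reports
jq '.components | length' sbom-syft.json sbom-trivy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;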

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW7Q/sbm-making-sboms-play-together-jacopo-bufalino-cnam-agathe-blaise-thales-six-gts-france" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. ModelPack: Standardising the packaging and distribution of AI/ML Models
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 23, 15:10 - 15:15 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Elicium 2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; AI + ML&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this high-impact lightning talk, &lt;strong&gt;Andrew Block&lt;/strong&gt;, a Distinguished Architect at Red Hat, tackles one of the most pressing bottlenecks in modern infrastructure: the chaotic fragmentation of AI/ML integration within the cloud-native ecosystem. Block, a seasoned expert in helping organisations scale open-source solutions, will break down how the current wild west of competing formats and runtimes is actively stifling innovation. He will introduce ModelPack, a CNCF Sandbox project designed to act as the universal translator for AI/ML artifacts, allowing them to finally speak the same language as established tools like Kubernetes, ORAS, and Harbor.&lt;/p&gt;

&lt;p&gt;The security and DevOps communities are closely watching this session because it addresses the Day 2 operations nightmare of managing AI at scale. As AI moves from experimental notebooks to production-grade Kubernetes clusters, the lack of standardisation creates massive technical debt and security blind spots. Block’s exploration of &lt;strong&gt;&lt;a href="https://modelpack.org" rel="noopener noreferrer"&gt;ModelPack&lt;/a&gt;&lt;/strong&gt; and emerging artifact standards offers a glimpse into a future where AI models are managed with the same consistency and rigour as container images. For anyone struggling to bridge the gap between data science and cloud-native engineering, this talk provides the blueprint for a unified, scalable, and manageable AI supply chain.&lt;/p&gt;
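
&lt;p&gt;To get a feel for the direction ModelPack builds on, here is a minimal sketch of treating model weights as a plain OCI artifact (this assumes the ORAS CLI; the registry, file name, and media type below are purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Push model weights to an OCI registry as an artifact
oras push registry.example.com/models/demo-llm:v1 \
    model.safetensors:application/vnd.example.model.weights

# Pull them back anywhere that speaks OCI
oras pull registry.example.com/models/demo-llm:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;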

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2EFyi/project-lightning-talk-modelpack-standardizing-the-packaging-and-distribution-of-aiml-models-andrew-block-maintainer" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Kubernetes security at Shopify scale: Automating security across an infrastructure monorepo
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 25, 16:00 - 16:30 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Hall 8 | Room F&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this session, Senior Infrastructure Security Engineers &lt;strong&gt;Jie Wu&lt;/strong&gt; and &lt;strong&gt;Pulkit Garg&lt;/strong&gt; will pull back the curtain on how one of the world’s largest e-commerce platforms secures its massive infrastructure monorepo. Drawing from their deep backgrounds in cloud defence and 5G network security, Wu and Garg will detail the high-stakes challenge of managing thousands of services where a single misconfiguration could impact millions of merchants. They will demonstrate how Shopify moved beyond manual checkbox security by building an automated pipeline that combines Semgrep for static analysis and Open Policy Agent (OPA) for real-time policy enforcement.&lt;/p&gt;

&lt;p&gt;This talk is a must-attend for the security community because it provides a battle-tested blueprint for solving the velocity vs. security dilemma we’re all too familiar with. While many orgs struggle with friction between DevOps and Security teams, Shopify’s approach shows how to bake guardrails directly into the developer workflow without slowing down deployment. Attendees will gain rare insights into the so-called rough patches of scaling open-source security tools and leave with a practical framework for automating risk detection across complex, high-traffic Kubernetes environments.&lt;/p&gt;
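
&lt;p&gt;We won’t see Shopify’s exact pipeline until the talk, but a rough sketch of what such a CI stage can look like combines both tools (the policy paths and query below are placeholders, not Shopify’s):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Static analysis across the monorepo; exit non-zero on findings
semgrep --config auto --error .

# Evaluate a rendered manifest against an OPA policy bundle
opa eval --input deployment.json --data policies/ "data.kubernetes.deny"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;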

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW3b/kubernetes-security-at-shopify-scale-automating-security-across-an-infrastructure-monorepo-jie-wu-pulkit-garg-shopify" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. To upstream or not? Why becoming the maintainer of your dependencies matters
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 26, 11:00 - 11:30 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Hall 7 | Room B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Cloud Native Experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this session, &lt;strong&gt;Christos Markou&lt;/strong&gt;, a Principal Software Engineer at Elastic and an OpenTelemetry code owner, dives into the strategic heart of modern software engineering. Markou moves beyond the typical "&lt;em&gt;open source is good for the soul&lt;/em&gt;" narrative to present a pragmatic, business-first case for active maintenance. By sharing a high-stakes story of an OTel component saved from the brink of deprecation, he demonstrates how moving from a passive consumer to an active contributor transformed a potential technical debt nightmare into a rapid-response tool for solving critical customer issues.&lt;/p&gt;

&lt;p&gt;This talk is a vital wake-up call for the security and observability communities because it addresses the growing crisis of software supply chain sustainability. In an era where a single abandoned dependency can lead to massive security vulnerabilities or operational outages, Markou provides a concrete example to share with your manager of why upstreaming isn't just altruism; it can also be treated as sensible risk management. For teams building on CNCF projects, this session offers a masterclass in how staying present in the ecosystem pays dividends in architectural control, security posture, and long-term engineering velocity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW5U/to-upstream-or-not-why-becoming-the-maintainer-of-your-dependencies-matters-christos-markou-elastic" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. How to add a new language feature to OPA
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 23, 10:20 - 10:25 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Elicium 2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Open Policy Agent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adding features to a widely adopted language like Rego involves more than just a simple pull request. In this lightning session, Apple's &lt;strong&gt;Charlie Egan&lt;/strong&gt; explores the full-stack impact of introducing new syntax, from the initial parser modifications to updating the broader ecosystem of editor tools. Learn the specific engineering hurdles involved in upgrading a mature security project without disrupting the developer experience. At Cloudsmith, we’re huge fans of Open Policy Agent and Rego, so we can’t wait to hear what Charlie has in store for us.&lt;/p&gt;
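
&lt;p&gt;If you’d like some context beforehand, one way to poke at the first stage Charlie will cover (assuming you have the opa binary and any policy.rego to hand) is to dump the parser’s view of a policy – any new syntax has to survive this step before the rest of the toolchain even sees it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Print the parsed AST of a Rego policy as JSON
opa parse --format json policy.rego
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;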

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW5U/to-upstream-or-not-why-becoming-the-maintainer-of-your-dependencies-matters-christos-markou-elastic" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Platform Engineering 2.0: Just-enough Kubernetes and AI-native DevOps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 26, 15:15 - 15:45 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Hall 11&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Platform Engineering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this session, &lt;strong&gt;Shweta Vohra&lt;/strong&gt;, a Lead Architect at &lt;strong&gt;Booking.com&lt;/strong&gt; and a seasoned authority on platform patterns, tackles the industry's growing complexity bloat. Drawing from 20 years of experience and the scars of re-architecting internal platforms at one of the world’s largest travel sites, Vohra argues that the future of infrastructure isn't about building more; it’s about building just enough. She will break down how to transition from heavy, static systems to lean, right-sized architectures using k3s, Gateway API, and ambient mesh, while layering in AIOps via Kubeflow and Prometheus to transform raw automation into true system intelligence.&lt;/p&gt;

&lt;p&gt;This talk is a critical touchstone for the DevOps and platform communities because it offers a pragmatic exit ramp from the scale-at-all-costs mentality that leads to massive cloud waste and operational blindness. Vohra’s AI-native approach is a blueprint for creating self-optimising ecosystems that align infrastructure directly with business value. For engineering leaders and architects feeling the weight of over-engineered clusters, this session aims to be a masterclass in staying lean, staying smart, and evolving platforms to be adaptive rather than just additive.&lt;/p&gt;
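
&lt;p&gt;If “just enough” Kubernetes is new to you, k3s is famously quick to try before the session (this installs a single-node cluster on a Linux host, so use a disposable VM):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install a single-node k3s cluster
curl -sfL https://get.k3s.io | sh -

# Verify the node is ready
sudo k3s kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;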

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW80/platform-engineering-20-just-enough-kubernetes-and-ai-native-devops-shweta-vohra-bookingcom" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. AI Agents &amp;amp; Platform Engineering: Efficiency boost or new source of trouble?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 26, 11:45 - 12:15 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Auditorium&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; AI + ML&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blockbuster panel, an elite group of industry leaders from &lt;strong&gt;Cisco&lt;/strong&gt;, &lt;strong&gt;Red Hat&lt;/strong&gt;, &lt;strong&gt;AWS&lt;/strong&gt;, &lt;strong&gt;Solo.io&lt;/strong&gt;, and the &lt;strong&gt;United Nations&lt;/strong&gt; converges to debate the most disruptive shift in DevOps history. Featuring heavyweights from across the industry, this vendor-neutral discussion moves past the hype to ask the hard questions: Can non-deterministic AI agents actually keep pace with AI-accelerated developers, or are we just inviting unpredictable trouble into our production clusters? The panel will tackle the minimum viable platform foundation required for AI success, the shifting cost implications of agents in production, and how to build human trust in systems that don't always behave the same way twice.&lt;/p&gt;

&lt;p&gt;This session is arguably the must-see event for the modern platform engineer because it addresses the looming intelligence gap in infrastructure. As developers pump out code at record speeds, the platform team is traditionally treated as the bottleneck. This panel explores whether AI agents are the ultimate scaling solution or a new category of technical debt. By bringing together perspectives from global tech giants and international organisations, attendees will walk away with a high-level framework for defining golden metrics for AI effectiveness and a roadmap for navigating the transition from static automation to dynamic, agent-led collaboration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW6V/ai-agents-platform-engineering-efficiency-boost-or-new-source-of-trouble-hasith-kalpage-cisco-vincent-caldeira-red-hat-sara-qasmi-united-nations-idit-levine-soloio-carlos-santana-aws" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. In Falco’s nest: The evolution of cloud native runtime security
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 24, 12:00 - 12:30 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; G102-103&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this maintainer-track session, Falco contributors &lt;strong&gt;Iacopo Rozzo&lt;/strong&gt; (Sysdig) and &lt;strong&gt;Aldo Lacuku&lt;/strong&gt; (Kong) provide a deep-dive update on the CNCF’s de facto standard for runtime threat detection. The core of their presentation focuses on the highly anticipated Falco Operator, a game-changer designed to automate the deployment and management of security sensors across massive, distributed clusters, effectively lowering the barrier to entry for enterprise-scale security.&lt;/p&gt;

&lt;p&gt;For the security community, this talk is a vital look at the roadmap for high-throughput runtime defence. As cloud-native environments become more complex and data-heavy, traditional security tools often struggle with performance overhead; Rozzo and Lacuku will demonstrate the specific optimisations that allow Falco to maintain deep visibility without sacrificing system reliability. Beyond the code, this session also is a great opportunity to see how the Falco ecosystem is integrating with the broader CNCF landscape, offering attendees a first look at the features that will define Cloud Detection &amp;amp; Response (CDR) in the coming year.&lt;/p&gt;
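
&lt;p&gt;For hands-on context, today’s chart-based install makes a useful baseline to compare against the operator-driven workflow Rozzo and Lacuku will demo (assuming Helm and a test cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install Falco from the official chart into its own namespace
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco --namespace falco --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;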

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2EF6W/in-falcos-nest-the-evolution-of-cloud-native-runtime-security-iacopo-rozzo-sysdig-aldo-lacuku-kong-inc" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  9. What LLMs do, and don’t, know about securing Kubernetes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 24, 14:30 - 15:00 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Hall 11&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this session, &lt;strong&gt;Rory McCune&lt;/strong&gt;, a Senior Security Researcher at &lt;strong&gt;Datadog&lt;/strong&gt;, explores the practical (and often perilous) intersection of Generative AI and cluster orchestration. McCune, a veteran KubeCon speaker and a foundational voice in container security, will present the results of his deep-dive research into whether Large Language Models (LLMs) can actually be trusted with sensitive credentials. Rather than just discussing theory, Rory will aim to demonstrate how LLMs handle specific Kubernetes security tasks, revealing where they provide genuine architectural insight and where they tend to hallucinate dangerous misconfigurations that could leave an organisation exposed.&lt;/p&gt;

&lt;p&gt;This talk is a critical reality check for the security community as AI-assisted DevOps moves from a trend to a standard operating procedure. McCune will break down how advanced techniques like improved prompting and &lt;strong&gt;&lt;a href="https://www.promptingguide.ai/techniques/cot" rel="noopener noreferrer"&gt;chain-of-thought&lt;/a&gt;&lt;/strong&gt; reasoning can significantly shift the safety of an LLM's output, while also highlighting the so-called no-go zones where human expertise remains non-negotiable. For security architects and engineers, this session provides a vital framework for auditing AI-generated IaC templates, ensuring that the speed of AI doesn't come at the cost of a catastrophic security breach.&lt;/p&gt;
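
&lt;p&gt;Until then, a habit worth keeping is to never apply AI-generated manifests unchecked. A minimal sketch (the file name is illustrative, and kubesec is just one of several open-source scanners you could use here):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Server-side dry run catches schema and admission errors without applying
kubectl apply --dry-run=server -f ai-generated-deployment.yaml

# Score the manifest for risky security settings
kubesec scan ai-generated-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;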

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CVyo/what-llms-do-and-dont-know-about-securing-kubernetes-rory-mccune-datadog" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Audit-ready Kubernetes: How Chase UK leveraged policy-as-code for continuous compliance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 25, 14:15 - 14:45 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Hall 7 | Room C&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this session, &lt;strong&gt;Nischay Goyal&lt;/strong&gt;, VP of Cloud Platform Engineering at &lt;strong&gt;JP Morgan Chase&lt;/strong&gt;, and &lt;strong&gt;Jim Bugwadia&lt;/strong&gt;, CEO of &lt;strong&gt;Nirmata&lt;/strong&gt; and co-creator of Kyverno, provide a rare under-the-hood look at building a cloud platform within the high-stakes world of regulated retail banking. Goyal, who manages the end-to-end Kubernetes ecosystem for Chase UK, will detail how his team transformed a massive compliance undertaking into a streamlined, automated engine. By leveraging Kyverno, OpenReports, and Grafana, they successfully shifted security left, allowing their backend engineers to ship code at speed while maintaining a real-time policy-as-code safety net that satisfies stringent financial regulators.&lt;/p&gt;

&lt;p&gt;This talk is a beacon for the security and compliance communities because it addresses the ultimate white whale of enterprise DevOps: reducing audit times from weeks to minutes. Bugwadia’s deep perspective as a Kubernetes &lt;strong&gt;&lt;a href="https://kubernetes.io/blog/2025/10/18/wg-policy-spotlight-2025" rel="noopener noreferrer"&gt;Policy Working Group&lt;/a&gt;&lt;/strong&gt; co-chair, combined with Goyal’s real-world production experience at JP Morgan Chase, offers a definitive guide on how to empower security teams to write independent policies without bottlenecking development. For any platform team operating in a regulated sector (or simply running mission-critical workloads), this session provides a proven framework for turning compliance from a manual, fear-driven process into a transparent, scalable one.&lt;/p&gt;
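
&lt;p&gt;If you want to experiment with policy-as-code before the talk, the Kyverno CLI lets you test policies entirely offline (the file names below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Dry-run a Kyverno policy against a local manifest
kyverno apply require-labels.yaml --resource deployment.yaml

# Or evaluate the same policy against live cluster resources
kyverno apply require-labels.yaml --cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;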

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW2L/audit-ready-kubernetes-how-chase-uk-leveraged-policy-as-code-for-continuous-compliance-jim-bugwadia-nirmata-nischay-goyal-jp-morgan-chase" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Sandbox Operator: Enabling session-aware, efficient MCP tool execution in Kubernetes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 25, 11:00 - 11:30 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Auditorium&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; AI + ML&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Alibaba&lt;/strong&gt; Cloud engineers &lt;strong&gt;Mingshan Zhao&lt;/strong&gt; and &lt;strong&gt;Zhen Zhang&lt;/strong&gt; tackle the next massive infrastructure hurdle for AI: the Model Context Protocol (MCP). As maintainers of the OpenKruise project with experience managing Alibaba's million-container scheduling system, Zhao and Zhang are uniquely qualified to address the so-called “session explosion” problem. They will introduce the Sandbox Operator, a specialised Kubernetes controller designed to manage the lifecycle of AI tool executions without the massive resource waste or state loss typical of traditional Pod deployments.&lt;/p&gt;

&lt;p&gt;This talk is a game-changer for the cloud-native community because it solves the sparse invocation dilemma, where these AI tools often sit idle, eating up expensive cluster resources while waiting for a user's next prompt. By integrating cutting-edge Checkpoint/Snapshot mechanisms, the Sandbox Operator allows tools to be paused and resumed without losing the memory of the conversation. For platform engineers and AI architects, this session shows how developers can efficiently scale AI agents to hundreds of thousands of concurrent users while keeping infrastructure costs low and user context intact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW1c/sandbox-operator-enabling-session-aware-efficient-mcp-tool-execution-in-kubernetes-mingshan-zhao-zhen-zhang-alibaba" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  12. Saxo service blueprint: Bridging legacy and modern world with Kubernetes operators
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 25, 16:00 - 16:30 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Hall 11&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Platform Engineering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this session, &lt;strong&gt;Jinhong Brejnholt&lt;/strong&gt;, Chief Cloud Architect at &lt;strong&gt;Saxo Bank&lt;/strong&gt;, presents a sophisticated solution to the two-speed IT problem facing large enterprises. As a certified Kubestronaut and a leader in the Danish cloud-native community, Brejnholt details how Saxo Bank moved beyond the bottleneck of manual ticketing for DNS, certificates, and load balancing. She will showcase the Saxo Service Blueprint, a platform powered by Kubernetes Operators that extends the speed and reliability of GitOps to traditional VM-based applications, effectively unifying legacy infrastructure with modern cloud-native workflows.&lt;/p&gt;

&lt;p&gt;This talk is particularly compelling for the platform engineering community because it addresses the last mile of digital transformation: managing the heavy, non-containerised dependencies that still power most financial institutions. Brejnholt will share how this architecture has saved thousands of developer hours and significantly bolstered disaster recovery capabilities for both cloud-native and legacy stacks. For architects struggling to reconcile modern Kubernetes automation with entrenched enterprise systems, this session offers some valuable real-world experiences and a roadmap for other banks to accelerate delivery across the entire organisational footprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW4E/saxo-service-blueprint-bridging-legacy-and-modern-world-with-kubernetes-operators-jinhong-brejnholt-saxo-bank" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  13. Optimising error recovery for cost-efficient distributed AI model training with Kubernetes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Date/Time:&lt;/strong&gt; March 26, 14:30 - 15:00 CET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Elicium 2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; AI + ML&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For our final recommended talk, the academic precision of &lt;strong&gt;Radostin Stoyanov&lt;/strong&gt; (pursuing his PhD at the &lt;strong&gt;University of Oxford&lt;/strong&gt;) meets the enterprise-scale expertise of &lt;strong&gt;Andrey Velichkevich&lt;/strong&gt; (AI Engineer at &lt;strong&gt;Apple&lt;/strong&gt; &amp;amp; Kubeflow Steering Committee member). Together, they aim to address the GPU tax that plagues modern AI development: the massive costs incurred when long-running training jobs fail or when idle interactive workloads, like Jupyter notebooks, waste expensive compute resources. The duo will present a breakthrough approach using transparent GPU checkpointing integrated with Kubernetes-native APIs like Kueue, JobSet, and TrainJob to capture and restore the state of training jobs automatically, ensuring no progress is lost during failures or preemptions.&lt;/p&gt;

&lt;p&gt;This talk is really exciting for the ML and Platform communities because it unlocks the holy grail of AI infrastructure: the ability to run reliable, high-stakes model training on preemptible spot instances. By moving checkpointing from the application layer to the infrastructure layer, Stoyanov and Velichkevich show how organisations can slash their cloud bills by up to 90% without risking their training timelines. For anyone tasked with scaling AI workloads while maintaining strict cost-efficiency and cluster utilisation, this session offers a sophisticated technical roadmap for turning unstable, expensive hardware into a resilient, high-performance training engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kccnceu2026.sched.com/event/2CW7Z/optimizing-error-recovery-for-cost-efficient-distributed-ai-model-training-with-kubernetes-radostin-stoyanov-university-of-oxford-andrey-velichkevich-apple" rel="noopener noreferrer"&gt;View session details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secure software starts at Booth 570
&lt;/h2&gt;

&lt;p&gt;Cloudsmith will be at booth 570, and we are bringing plenty of action to keep you busy. If you are ready to put your security skills to the test, join our capture-the-flag workshop: Stop the AGI apocalypse. You will hunt hidden malware, lock down LLM weights, and earn a shot at some great prizes. We also have a lineup of fun events to help you unwind, from canal cruises and kickoff drinks to beach-themed parties. Check out the full schedule of Cloudsmith activities here:&lt;br&gt;
&lt;a href="https://cloudsmith.com/events/in-person-events/kceu26" rel="noopener noreferrer"&gt;https://cloudsmith.com/events/in-person-events/kceu26&lt;/a&gt; &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ai</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Track your Bandwidth &amp; Storage limits with our Quota API</title>
      <dc:creator>Kyle Harrison</dc:creator>
      <pubDate>Tue, 08 Dec 2020 14:14:26 +0000</pubDate>
      <link>https://forem.com/cloudsmith/track-your-bandwidth-storage-limits-with-our-quota-api-1011</link>
      <guid>https://forem.com/cloudsmith/track-your-bandwidth-storage-limits-with-our-quota-api-1011</guid>
      <description>&lt;p&gt;At Cloudsmith, helping fledgeling startups grow from a single person operation to enterprise-level organisations is a constant joy! In those early stages, startups need all the help they can get to survive, and even veteran organisations experience similar challenges when scaling up rapidly.&lt;/p&gt;

&lt;p&gt;Cloudsmith can help with our self-service approach to managing and defining storage and bandwidth limits to keep costs under control while allowing you to scale when needed. After all, no one wants to be caught out by an unexpected bill for overages or any interruption to their business, no matter how minor.&lt;/p&gt;

&lt;p&gt;The fantastic news is that now you have a new tool in your arsenal, as you can track your bandwidth and storage programmatically via the Quota API. It's easy to miss an email when you're busy, or a prompt within the Cloudsmith UI (if you haven't needed to log in recently); but you will always want to know if you're near/over a limit or even if you're over-provisioned.&lt;/p&gt;

&lt;p&gt;Now you can automate a solution that works for you using either our API or CLI to always stay on top of your storage and bandwidth limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started quickly
&lt;/h2&gt;

&lt;p&gt;Using the Cloudsmith CLI, you can quickly and easily view the usage, limits, and maximum allocation of bandwidth and storage for your plan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith quota limits ORG-NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwiq59gybuvm2l5mb8sy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwiq59gybuvm2l5mb8sy7.png" alt="Quota Limits CLI" width="585" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, if you want to view the entire history for your organisation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith quota &lt;span class="nb"&gt;history &lt;/span&gt;ORG-NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpv7wdkmvityqohdflg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpv7wdkmvityqohdflg1.png" alt="Quota history" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, to view the OSS usage for an organisation, you can add the &lt;code&gt;-oss&lt;/code&gt; flag to any command to view only OSS limits and history.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith quota limits ORG-NAME &lt;span class="nt"&gt;-oss&lt;/span&gt;
cloudsmith quota &lt;span class="nb"&gt;history &lt;/span&gt;ORG-NAME &lt;span class="nt"&gt;-oss&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you identify a limit that is over-provisioned or quickly approaching a threshold, then you can quickly and easily &lt;a href="https://help.cloudsmith.io/docs/organisations?#usage-limits" rel="noopener noreferrer"&gt;adjust your limits&lt;/a&gt; at any time within the Cloudsmith UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6mrgdfzo38nnhcnhetv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6mrgdfzo38nnhcnhetv9.png" alt="Organization Limits UI" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, if you wish to automate a solution, you can check out the Quota API or the Cloudsmith API Bindings (available in multiple languages) and connect them to your favourite CI/CD or monitoring service to alert internally within your organisation.&lt;/p&gt;
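
&lt;p&gt;As a minimal sketch of the API route (assuming your API key is stored in CLOUDSMITH_API_KEY; see the API reference for the exact response shape), a scheduled check could be as simple as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Fetch current usage and limits for your organisation (replace ORG-NAME)
curl -s -H "X-Api-Key: $CLOUDSMITH_API_KEY" \
    https://api.cloudsmith.io/v1/quota/ORG-NAME/ | jq .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;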

</description>
      <category>cloudsmith</category>
      <category>quotas</category>
    </item>
    <item>
      <title>Caching and Upstream Proxying for RedHat Packages</title>
      <dc:creator>Dan McKinney</dc:creator>
      <pubDate>Wed, 25 Nov 2020 13:54:14 +0000</pubDate>
      <link>https://forem.com/cloudsmith/caching-and-upstream-proxying-for-redhat-packages-4lbg</link>
      <guid>https://forem.com/cloudsmith/caching-and-upstream-proxying-for-redhat-packages-4lbg</guid>
      <description>&lt;p&gt;In keeping with our vision of offering a universal feature set across all the package formats we support, we are delighted to announce that we are now offering configurable upstream proxying and caching support for RedHat packages.&lt;/p&gt;

&lt;p&gt;As we touched upon when announcing the same for Debian and Maven packages, there are a lot of reasons why this is a really &lt;a href="https://cloudsmith.com/blog/caching-and-upstream-proxying-for-debian-packages/" rel="noopener noreferrer"&gt;good thing&lt;/a&gt;, so instead of going over those again, let’s jump straight into how you can set this up in your Cloudsmith repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;We don’t think that we could have made this much easier to configure (but, of course, we would love to hear &lt;a href="https://help.cloudsmith.io/docs/contact-us" rel="noopener noreferrer"&gt;your thoughts&lt;/a&gt; on this!)&lt;/p&gt;

&lt;p&gt;In your Cloudsmith repository, you’ll see a menu item called “Upstream Proxying”. This is where we will configure our upstreams. Simply click the “Create Upstreams” button and select “RedHat” to create a new RedHat upstream:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl3nj0n8tynmaz2t4zpc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl3nj0n8tynmaz2t4zpc3.png" alt="Create an Upstream" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will then see the “Create RedHat Upstream” form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe6n8mkp80npywar05e5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe6n8mkp80npywar05e5c.png" alt="Create Upstream Form" width="497" height="745"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You just need to add a Name for the upstream, the upstream URL, and a priority weighting – if you have multiple upstreams, this will determine the order in which they are used.&lt;/p&gt;

&lt;p&gt;We can choose to fetch and cache any requested package (instead of just fetching them), and to verify the SSL certificates provided by the upstream. Then, we select the individual distribution for this upstream.&lt;/p&gt;

&lt;p&gt;Optionally, we can also add a GPG key for package signing (if required), authentication headers for private upstreams, and arbitrary headers if you wish to send something custom along with your request.&lt;/p&gt;

&lt;p&gt;And that’s it – we have now added a new RedHat upstream to our Cloudsmith repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0blz7jfo3dbvzl9tu49c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0blz7jfo3dbvzl9tu49c.png" alt="Upstream Created" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So now, the next request for a package that isn’t present in this repository will be passed through to the upstream and the package fetched if it is available there. If you also enabled “fetch and cache”, the package will be cached in your Cloudsmith repo for any future requests.&lt;/p&gt;
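
&lt;p&gt;From a client host, the whole flow is transparent. Here is a minimal sketch, assuming a public repository and Cloudsmith’s standard repository setup script (replace OWNER/REPO and the package name with your own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Configure dnf/yum to use your Cloudsmith repository
curl -1sLf 'https://dl.cloudsmith.io/public/OWNER/REPO/setup.rpm.sh' | sudo -E bash

# A package not present in the repo is now fetched from the upstream
# (and cached, if enabled) on first request
sudo dnf install some-package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;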

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;By adding RedHat upstream proxying and caching, we are moving towards our goal of being the most universal package management solution for your development, deployment and distribution workflows. We would love you to join us on this journey, and we are very excited about the platform we have built, the service that we provide, and where we are going to next. Sign up for free today, and experience Cloudsmith for yourself!&lt;/p&gt;

</description>
      <category>cloudsmith</category>
      <category>proxying</category>
      <category>caching</category>
    </item>
    <item>
      <title>Did We Lose Something With The Adoption Of Containers? And Can We Get It Back?</title>
      <dc:creator>Dan McKinney</dc:creator>
      <pubDate>Tue, 13 Oct 2020 10:36:05 +0000</pubDate>
      <link>https://forem.com/cloudsmith/did-we-lose-something-with-the-adoption-of-containers-and-can-we-get-it-back-mhi</link>
      <guid>https://forem.com/cloudsmith/did-we-lose-something-with-the-adoption-of-containers-and-can-we-get-it-back-mhi</guid>
      <description>&lt;p&gt;The subject of containers probably doesn’t need much of an introduction. Since the launch of Docker in 2013 containers have become almost ubiquitous, with &lt;a href="https://www.aquasec.com/news/portworx-container-adoption-survey/" rel="noopener noreferrer"&gt;89% of IT Professionals&lt;/a&gt; confirming their organizations used containers in some way during 2019.&lt;/p&gt;

&lt;p&gt;That is no surprise. Docker itself calls the container “a standardized unit of software”, which is a nice way of saying that a typical container packages up all the relevant code, system tools, dependencies and libraries within an application, and effectively isolates that application from the rest of the environment and infrastructure.&lt;/p&gt;

&lt;p&gt;As a result, containers have one great merit: they will reliably run, and run in the same way, wherever they are deployed. At a stroke, they eliminate the “well it’s working on my machine” problem that has caused endless heartache for both development and operations teams over the decades.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That benefit should not be underestimated&lt;/strong&gt;. For developers, in particular, it is rather wonderful to be able to put everything into a container and throw it over the wall knowing that it is going to deploy, and in turn knowing that it isn’t going to come back over the same wall for round 2.&lt;/p&gt;

&lt;p&gt;It’s also - at least in theory - good for the operations team as well, for all of the same sorts of reasons.&lt;/p&gt;

&lt;p&gt;But it isn’t all good news.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is A “Unit” Of Software, Really?
&lt;/h2&gt;

&lt;p&gt;Prompted by the Docker definition of a container I mentioned above, it might be instructive to ask ourselves whether the container is either the only, or indeed the best, ‘unit’ of software.&lt;/p&gt;

&lt;p&gt;If we take a 10,000 ft view of the history of software development it probably looks something like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The entirety of the source code is the unit&lt;/li&gt;
&lt;li&gt;The package is the unit&lt;/li&gt;
&lt;li&gt;The container is the unit&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the old days of source code, you really had to be on top of things. The whole thing stood up or fell over as one, and things were pretty painful and expensive if (when) it did the latter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When the package became the unit, things improved&lt;/strong&gt;. Certainly, if we were following good package management and distribution best practices, then in most cases things fell over less and were easier to fix when they did. I’ll talk more about this later, but let’s also remember that packages also help speed up development full stop as they enable us to more easily re-use code and services.&lt;/p&gt;

&lt;p&gt;In the move to containerization, the third stage in our evolution, the ‘unit’ of software has got bigger. As it has done so, it has concealed complexity in order to enable that crucial isolation from the environment it runs in.&lt;/p&gt;

&lt;p&gt;Essentially, the container runs as a kind of black box. The operations team just need to know what it does. They don’t need more detailed metadata and they don’t need to know what is inside it. After all, it’s all one self-contained package. So the unit of software has not only got bigger, it has also become less transparent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem With Being Big And Opaque
&lt;/h2&gt;

&lt;p&gt;The problem with a large unit size is that within it are likely to be hundreds if not thousands of dependencies.&lt;/p&gt;

&lt;p&gt;And the problem with being opaque is that we don’t know what they are.&lt;/p&gt;

&lt;p&gt;What happens when something goes wrong? Let’s imagine (and this won’t take too much effort for veterans of &lt;a href="https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/" rel="noopener noreferrer"&gt;LeftPad&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Heartbleed" rel="noopener noreferrer"&gt;HeartBleed&lt;/a&gt;, &lt;a href="https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident" rel="noopener noreferrer"&gt;Event-Stream&lt;/a&gt; or one of any number of dependency related screw-ups) that a vulnerability is discovered within a certain package.&lt;/p&gt;

&lt;p&gt;Where is that package currently in use in our ecosystem? We don’t know. Can we quickly roll that package back to a known ‘safe’ version wherever necessary? No.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containers - as a ‘unit’ of software - live or die as a whole&lt;/strong&gt;. If something is wrong or might be wrong, we have no option other than to spin up a new version and redeploy: not necessarily a simple task. And as we don’t necessarily know which containers are affected, we find ourselves struggling to get on top of an emerging security crisis based on limited data. Not a nice place to be.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Lost Art Of Package Management
&lt;/h2&gt;

&lt;p&gt;My last point is this: containerization has led to a general decline in diligence and observance of best practices when it comes to handling packages and dependencies. After all, if I am going to throw them all into one container and check if it runs, does it really matter where I get packages from?&lt;/p&gt;

&lt;p&gt;The short answer is yes.&lt;/p&gt;

&lt;p&gt;The longer answer is yes because if we aren’t going to be sure what is in any given container further down the line, we should be as careful as we possibly can be that nothing we can’t stand over sneaks in now.&lt;/p&gt;

&lt;p&gt;Unfortunately, that doesn’t always happen. The path of least resistance is to get dependencies from pretty much anywhere and live with the consequences later. And as the developer cares about speed, and the ops team will be the ones dealing with those consequences, there is all the more reason to cut corners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As developers, we need to get back to being diligent - and consistently diligent - about questions of security, provenance, reliability and availability when it comes to packages and dependencies&lt;/strong&gt;. We need to be as sure as we can be that what we integrate into our projects (whether destined for containers or not) is what it says it is. And we need to know what we used where and when - so that we can fall back to a previous version quickly and easily if necessary.&lt;/p&gt;

&lt;p&gt;It’s the least our colleagues in operations, and the users of our end product, deserve.&lt;/p&gt;

</description>
      <category>cloudsmith</category>
      <category>containers</category>
    </item>
    <item>
      <title>Restore authority with Token Bandwidth Controls!</title>
      <dc:creator>Kyle Harrison</dc:creator>
      <pubDate>Tue, 06 Oct 2020 16:33:15 +0000</pubDate>
      <link>https://forem.com/cloudsmith/restore-authority-with-token-bandwidth-controls-29ld</link>
      <guid>https://forem.com/cloudsmith/restore-authority-with-token-bandwidth-controls-29ld</guid>
      <description>&lt;p&gt;Now you can go beyond &lt;a href="https://cloudsmith.com/blog/vendors-rejoice-analyse-your-bandwidth-usage/" rel="noopener noreferrer"&gt;measuring your bandwidth usage&lt;/a&gt; and regain control via Cloudsmith's new bandwidth controls for Entitlement tokens. You can craft tokens with individual usage limits using the UI, API, and CLI, allowing you to decide the exact level of usage for each token.&lt;/p&gt;

&lt;p&gt;By combining the new and existing limits for entitlement tokens, you can configure allowances that provide fine-grained control over any combination of properties: for example, the total amount of bandwidth, the number of unique clients using a token, or the maximum number of downloads, all on an individual token basis. You can also scope your tokens with restrictions for even more advanced control.&lt;/p&gt;

&lt;p&gt;If you are a vendor, you may want to have tiered levels of tokens for different users. Providing higher or even potentially unlimited allowances to your premium users is now possible, whilst maintaining control of your offering to free users with suitable limits. The best part is you can provide a lifetime limit for a token, or you can configure a refresh period that resets the limits once the period has elapsed. For example, a token with a 1 GB daily limit will allow up to 1 GB of usage and no more; once the day has passed, the token is reset, allowing another 1 GB of usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;Within the Cloudsmith UI for Entitlement tokens, you can edit individual tokens to provide visibility restrictions, usage limits, or even provide additional metadata on the token for your own internal requirements.&lt;/p&gt;

&lt;p&gt;To provide a bandwidth restriction with a refresh period, you will need to select a unit of bandwidth (e.g. GB) and enter the amount you would like to restrict by (e.g. 1 GB of bandwidth). A "Monthly" refresh is more than enough for most users, and can prevent misuse of tokens by users accidentally consuming many TBs of data per month.&lt;/p&gt;

&lt;p&gt;In this example, we will configure fine-grained controls for the amount of bandwidth a token is allowed to consume every month. This is accomplished by selecting a few preset values and entering a bandwidth amount within the Edit Entitlement Token form.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvqvi6qbn2bytth7paaxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvqvi6qbn2bytth7paaxo.png" alt="Token Restrictions Form" width="665" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To configure a 1GB bandwidth limit that resets monthly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Select "Monthly" for "Refresh Token"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;(Presets range from "Never Reset" to refresh periods between "Daily" and "Annual".)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Enter a number to Restrict by Bandwidth.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The number should typically be between 0 and 1000; you can go higher, but the unit of bandwidth exists to keep values readable (e.g. 1000000 bytes can instead be expressed as 1 Megabyte).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Select a Unit of Bandwidth.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Values range from a single Byte to an insanely large number of Petabytes.&lt;/p&gt;

&lt;p&gt;Finally, select Edit to save your changes to the token. These restrictions will take effect almost immediately.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faqyuamo6bx2p61hyu89z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faqyuamo6bx2p61hyu89z.png" alt="Token Restricted to 1GB" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you wish to set your limits programmatically, entitlement restrictions are configurable using the &lt;a href="https://help.cloudsmith.io/reference?#entitlements-1" rel="noopener noreferrer"&gt;Entitlements API&lt;/a&gt;.&lt;/p&gt;
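
&lt;p&gt;As an illustrative sketch only – the endpoint shape and field names below are assumptions that mirror the CLI flags shown next, so check the Entitlements API reference for the exact schema – an update call looks roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical sketch: set a 1GB monthly bandwidth limit on a token
curl -s -X PUT \
    -H "X-Api-Key: $CLOUDSMITH_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"limit_bandwidth": 1, "limit_bandwidth_unit": "Gigabyte", "refresh_token": "Monthly"}' \
    https://api.cloudsmith.io/v1/entitlements/OWNER/REPOSITORY/TOKEN_IDENTIFIER/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;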

&lt;p&gt;Finally, the easiest way to get started with making programmatic changes to entitlement tokens and applying restrictions is via the &lt;a href="https://help.cloudsmith.io/docs/cli" rel="noopener noreferrer"&gt;Cloudsmith CLI&lt;/a&gt;. Using the following command, you can set all visibility restrictions and usage limits for an entitlement token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith entitlements restrict OWNER/REPOSITORY/TOKEN_IDENTIFIER  &lt;span class="nt"&gt;--RESTRICTION&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith entitlements restrict demo/example-repo/GYwg00eEElKs &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-bandwidth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-bandwidth-unit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;gigabyte &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-num-clients&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-num-downloads&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1000 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-package-query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"package-darwin-amd64"&lt;/span&gt;  &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-path-query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:latest &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-date-range-from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2020-01-01T00:00:00Z &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--limit-date-range-to&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2077-01-01T00:00:00Z &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--refresh-token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;daily
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's that simple to get started! &lt;/p&gt;

</description>
      <category>cloudsmith</category>
    </item>
    <item>
      <title>Vendors Rejoice! Analyse your Bandwidth Usage</title>
      <dc:creator>Kyle Harrison</dc:creator>
      <pubDate>Tue, 29 Sep 2020 15:13:29 +0000</pubDate>
      <link>https://forem.com/cloudsmith/vendors-rejoice-analyse-your-bandwidth-usage-4i52</link>
      <guid>https://forem.com/cloudsmith/vendors-rejoice-analyse-your-bandwidth-usage-4i52</guid>
      <description>&lt;p&gt;As a vendor, understanding your bandwidth usage is an invaluable insight into how packages are distributed across your user base and how specific users have grown over a timeframe.&lt;/p&gt;

&lt;p&gt;As a user base grows through the 10s, 100s, and 1000s of users, and package downloads become many multiples of your total number of users, it's essential to understand the distribution of your bandwidth utilisation by user (or, more precisely, by entitlement token) to help identify the tokens that make up a large percentage of your overall traffic.&lt;/p&gt;

&lt;p&gt;When providing an entitlement token to a user under a specific license, you trust that the token will be used to download packages on the agreed terms. However, an increase in your total bandwidth may start slowly and grow continuously over several months until it suddenly becomes a massive share of your overall usage. Understanding this usage, and precisely where the growth originates, helps identify where and when change occurs and provides options to mitigate runaway bandwidth from specific entitlement tokens by changing how those users are managed.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;Whenever a user downloads a package via an entitlement token, the exact number of bytes transmitted from our servers to the user host is stored for that specific token as an individual log entry. At the end of each day, we calculate the total bandwidth for every user across all of our repositories containing one or more packages and store the result as a daily aggregated value representing bandwidth usage.&lt;/p&gt;

&lt;p&gt;The easiest way to get started exploring these metrics is to check out the metrics command within the Cloudsmith CLI. Alternatively, to get more fine-grained metrics, you can implement a programmatic solution using our API or one of the API binding libraries published in various languages.&lt;/p&gt;

&lt;p&gt;Getting started is quick! Using the Cloudsmith CLI, you can easily query the total bandwidth usage for your repository by running the following command with your organisation/repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith metrics tokens OWNER/REPOSITORY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith metrics tokens demo/examples-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following screenshot shows an example repository containing a number of active and inactive entitlement tokens alongside the total bandwidth used. The statistics table provides a simple insight into min/max/average token usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0d742entds0d1bu847h3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0d742entds0d1bu847h3.png" alt="cloudsmith metrics tokens" width="724" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also retrieve usage metrics for one or more specific tokens by providing a comma-separated list of Entitlement token identifiers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith metrics tokens cloudsmith/example &lt;span class="nt"&gt;--tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ZGCV58VqT8Sl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmnbkf8zn0gg4uvrb9hql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmnbkf8zn0gg4uvrb9hql.png" alt="metrics for specific token" width="585" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you wish to drill down further into a specific period of usage for a token, you can supply a date and time for the start and finish parameters to filter for usage within that period. In this example, a single token's usage is displayed for all of 2019:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith metrics tokens cloudsmith/example &lt;span class="nt"&gt;--tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ZGCV58VqT8Sl &lt;span class="nt"&gt;--start&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2019-01-01T00:00:00Z &lt;span class="nt"&gt;--finish&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2019-12-31T00:00:00Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6wcjp1f8ggzqh1d4tmif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6wcjp1f8ggzqh1d4tmif.png" alt="metrics for specific time period" width="566" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's that simple to get started! &lt;/p&gt;
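
&lt;p&gt;And if you'd rather take the programmatic route mentioned above, the same data is available in machine-readable form. As a rough sketch (this assumes the CLI's -F/--output-format option; the exact JSON shape may differ, so check the CLI help and the API docs), you can pipe the metrics into jq for further processing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hedged sketch: query token metrics as JSON and pretty-print with jq
cloudsmith metrics tokens demo/examples-repo -F json | jq .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;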

&lt;p&gt;You can find more information about Entitlement Tokens in our &lt;a href="https://help.cloudsmith.io/docs/entitlements" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloudsmith</category>
    </item>
    <item>
      <title>Universal Package Tagging</title>
      <dc:creator>Dan McKinney</dc:creator>
      <pubDate>Tue, 22 Sep 2020 10:55:53 +0000</pubDate>
      <link>https://forem.com/cloudsmith/universal-package-tagging-3412</link>
      <guid>https://forem.com/cloudsmith/universal-package-tagging-3412</guid>
      <description>&lt;p&gt;A key factor in good package management is organization. How you organize and structure your repositories will help you to unlock the efficiencies and promises of modern DevOps processes.&lt;/p&gt;

&lt;p&gt;There are, of course, a multitude of ways you can organize packages. You can group by version numbers, formats, architectures, filetype and more. At Cloudsmith we have supported this by extracting as much of this metadata as possible when you upload a package and making this metadata available for searching/filtering. &lt;/p&gt;

&lt;p&gt;Sometimes, however, the metadata just wasn’t granular enough – or it didn’t provide quite the nomenclature that you may have wanted. You wanted more.  And we are extremely pleased to say that we have now added a new feature that greatly expands upon this functionality. Presenting:&lt;/p&gt;

&lt;h2&gt;
  
  
  UNIVERSAL PACKAGE TAGGING
&lt;/h2&gt;

&lt;p&gt;So, what is new about this? And why does it matter?&lt;/p&gt;

&lt;p&gt;Well, in short, we now give you the ability to add ANY custom tags to ANY package or container, either during package upload or after the fact. And you can do this via the Cloudsmith CLI or the Cloudsmith API.&lt;/p&gt;

&lt;p&gt;Let’s say for example that you were using Cloudsmith to distribute your packages to end-users, and you have “Free” and “Premium” editions of your package. Well now you can simply add “Free” and “Premium” tags to the respective packages, and then create &lt;a href="https://help.cloudsmith.io/docs/entitlements" rel="noopener noreferrer"&gt;Entitlement Tokens&lt;/a&gt; to give your users access to just the packages they should be allowed – without resorting to adding the edition into the filename or creating two separate repositories. You can manage it all from one central location. And of course, as Cloudsmith repositories are fully multi-tenant for package formats, you can apply these tags across all the package formats you use, all in one repository.&lt;/p&gt;

&lt;p&gt;Or perhaps you would like to tag a package based on where it is deployed. You could tag a package as “rest-api”, or similar, to differentiate where in your production application the package is used.&lt;/p&gt;

&lt;p&gt;Let’s look at an example.&lt;/p&gt;

&lt;p&gt;Let’s start off by listing the packages in our demo repo. The CLI command for this is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith list packages OWNER/REPOSITORY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkf5lcc1z9mxuich2etu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkf5lcc1z9mxuich2etu2.png" alt="list packages" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, we can see that we have a collection of RPM packages in this repo, with various versions. Let’s check one of those packages to see if it has any custom tags attached. The CLI command for this is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith tags list OWNER/REPOSITORY/PACKAGE-IDENTIFIER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcbhfp7lvwl2cpj27ehne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcbhfp7lvwl2cpj27ehne.png" alt="list package by identifier" width="800" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OK, this package does not have any custom tags. Let’s now add one, and the CLI command for adding a tag is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloudsmith tags add OWNER/REPOSITORY/PACKAGE-IDENTIFIER tag1, tag2, tag3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffncm1q3pq5fil1se7921.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffncm1q3pq5fil1se7921.png" alt="adding a tag" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great, we have now added a custom tag “free” to this package. We can repeat this for the other packages in the repository, varying the tags to suit our purpose. In addition, we can specify that tags are “Immutable”, which means they can only be removed or altered by someone with Administrator permissions on the repository, or by the package owner. When we are finished adding tags, we can look at the repository on the Cloudsmith website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe3hylitqv69gytxgrx2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe3hylitqv69gytxgrx2t.png" alt="tags in repository" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the new custom tags listed for each package alongside the tags that were automatically created from the metadata when the packages were processed after upload. We can now use these custom tags in any searching/filtering we need to do, or add them as a restriction on any Entitlement Tokens that we create for the repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foc3n6eqc1osxqxiueokp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foc3n6eqc1osxqxiueokp.png" alt="search by tag" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The syntax for searching/filtering is the same syntax you use when creating a search-based restriction for an Entitlement Token, so it really is easy to create a set of access tokens that divide the repository up into subsets of packages.&lt;/p&gt;
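
&lt;p&gt;As an illustrative sketch (the tag name here is hypothetical; see the search documentation for the full query syntax), the same tag query also works from the CLI when filtering packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hedged example: list only the packages carrying the "free" tag
cloudsmith list packages demo/examples-repo -q "tag:free"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;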

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Visibility of the attributes of your packages is important: it will help you not only structure your repositories, but also gain insight into what package is where, or what packages a group of customers can access. And it’s not just the basic attributes of a package that you need visibility on; it’s anything about a package that matters to your organization and workflows. With universal package tagging, you now have the ability to add your own searchable attributes to your packages, so you can define what is of importance.&lt;/p&gt;

</description>
      <category>cloudsmith</category>
    </item>
    <item>
      <title>Caching and Upstream Proxying for Debian Packages</title>
      <dc:creator>Dan McKinney</dc:creator>
      <pubDate>Tue, 15 Sep 2020 19:57:32 +0000</pubDate>
      <link>https://forem.com/cloudsmith/caching-and-upstream-proxying-for-debian-packages-410g</link>
      <guid>https://forem.com/cloudsmith/caching-and-upstream-proxying-for-debian-packages-410g</guid>
      <description>&lt;p&gt;At Cloudsmith, we want to be your “one central source of truth” for your dependencies and package management needs. And in keeping with this ideal, we are extremely pleased to announce that we have added fully configurable transparent Proxying and Caching support for Debian packages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does this matter?
&lt;/h2&gt;

&lt;p&gt;Well, in short, it means that you can now use your private Cloudsmith repository for all of your Debian package needs – whether that is your own private packages or packages that you need from public upstream sources.  Your private Cloudsmith repository is all that you need to handle both. &lt;/p&gt;

&lt;p&gt;If you request a package from your Cloudsmith repository, and that package isn’t present in the repo, then Cloudsmith will automatically check any upstream repos that you have configured, fetch the package from the upstream, and optionally cache it for future requests.&lt;/p&gt;

&lt;p&gt;This brings you several important benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Easier setup&lt;/strong&gt;. Your Cloudsmith repository is the only repository you need to configure on your clients (see the sketch after this list). No more need to configure multiple repos, handle multiple authentication credentials, etc. Configure the upstreams once in Cloudsmith, and that’s it done.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt;. If you have cached the packages and dependencies that you require in your Cloudsmith repository, then even if the upstream repo goes down, is otherwise unavailable, or the packages are removed, you can still access your cached versions. No more broken build or deployment processes due to an unreliable upstream.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visibility&lt;/strong&gt;. You can view details on what specific packages were requested from the upstreams. Gain insights into what you have, and what’s missing – or who and what else you are currently relying on. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;. Cloudsmith repositories are backed by a performant, global CDN. This means that your own packages and those cached from an upstream are delivered with the same low latencies and speeds. Going further than this, with edge nodes in almost all geographic regions, your users will experience this performance wherever they are located. Distributed teams all benefit equally.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and Control&lt;/strong&gt;. All of your packages and dependencies in one place means it’s easier for you to implement the controls and security policies that you need. Multiple sources mean multiple management tasks. Keep everything in one place and keep a tighter hold on what you have.
&lt;/li&gt;
&lt;/ul&gt;
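
&lt;p&gt;To make the “single repository” point concrete, here is a hedged sketch of what a client’s apt configuration might look like. The OWNER/REPOSITORY values are placeholders, and this assumes a public repository on the standard dl.cloudsmith.io endpoint; private repositories use an entitlement token in the URL instead (see your repository’s setup instructions for the exact line):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# /etc/apt/sources.list.d/cloudsmith-OWNER-REPOSITORY.list
# One Cloudsmith repo serves both your own packages and cached upstream ones
deb https://dl.cloudsmith.io/public/OWNER/REPOSITORY/deb/debian buster main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;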

&lt;h2&gt;
  
  
  Sounds great! How do we set this up?
&lt;/h2&gt;

&lt;p&gt;Well, it’s easy. In your Cloudsmith repository, you’ll see a menu item called “Upstream Proxying”. This is where we will configure our upstreams. Simply click the “Create Upstreams” button and select “Debian” to create a new Debian upstream:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5l43df2e4eqz5tai2w6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5l43df2e4eqz5tai2w6o.png" alt="Create a Debian Upstream" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are then presented with the “Edit Debian Upstream” form.  This is where we enter the details of the Debian upstream we wish to use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2adept6endro0t7ex26u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2adept6endro0t7ex26u.png" alt="Debian Upstream Form" width="424" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We add a Name for the upstream, its URL, and a priority weighting – in the case of multiple upstreams, this determines the order in which they are checked for a package.&lt;/p&gt;

&lt;p&gt;We can then choose to fetch and cache any requested package (instead of just fetching it), and to verify the SSL certificates provided by the upstream. In addition, we can choose to enable this upstream for source packages too.&lt;/p&gt;

&lt;p&gt;Next, we select the distributions and architectures that we wish to use this upstream for, and finally, we can add optional authentication headers (for private repositories that require authentication) and also optional arbitrary headers, if you wish to send something custom along with your request.&lt;/p&gt;

&lt;p&gt;And that’s it, we have now added a new Debian upstream to our Cloudsmith repository.&lt;/p&gt;

&lt;p&gt;Behind the scenes, Cloudsmith will now start to index the packages available in the upstream repository.  The upstream will be ready for use as soon as the indexing is complete.&lt;/p&gt;

&lt;p&gt;So now, on the next request for a package that isn’t present in this repository, Cloudsmith will check the upstream, fetch the package if it is available there, and cache it in the Cloudsmith repo (if you enabled caching) for future requests. It’s that simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Debian upstream proxying and caching support is just another step on our path to providing you with the centralized controls, security, management and visibility that you need to enable a modern, high-velocity and DevOps-first workflow for your package management needs. It means fewer things to worry about, less exposure to change and most importantly.... &lt;/p&gt;

&lt;p&gt;Well, what's most important to you? &lt;a href="https://cloudsmith.com/company/contact-us/" rel="noopener noreferrer"&gt;Let us know&lt;/a&gt;! &lt;/p&gt;

</description>
      <category>cloudsmith</category>
      <category>debian</category>
    </item>
    <item>
      <title>Integrating a Cloudsmith Repository with a GitLab CI/CD Pipeline</title>
      <dc:creator>Dan McKinney</dc:creator>
      <pubDate>Thu, 10 Sep 2020 08:44:38 +0000</pubDate>
      <link>https://forem.com/cloudsmith/integrating-a-cloudsmith-repository-with-a-gitlab-ci-cd-pipeline-34mp</link>
      <guid>https://forem.com/cloudsmith/integrating-a-cloudsmith-repository-with-a-gitlab-ci-cd-pipeline-34mp</guid>
      <description>&lt;h2&gt;
  
  
  What is GitLab CI/CD?
&lt;/h2&gt;

&lt;p&gt;GitLab CI/CD is a tool that is built into GitLab. It allows you to create automated tasks that you can use to form a Continuous Integration and Continuous Delivery / Deployment process.&lt;/p&gt;

&lt;p&gt;You configure GitLab CI/CD by adding a YAML file (called &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;) to your source repository. This file creates a pipeline, which will then run when a code change is pushed to the repository. Pipelines are made up of a series of stages, and each stage can contain a number of jobs or scripts. The GitLab Runner agent will then run these jobs.&lt;/p&gt;

&lt;p&gt;For an on-premises instance of GitLab, you can install the GitLab Runner agent on your own instances across many different operating systems, thereby creating your own fleet of instances to run your pipelines. But to keep things simple in the following examples, we will use gitlab.com and the default hosted GitLab runner environments provided.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use Cloudsmith with GitLab CI/CD?
&lt;/h2&gt;

&lt;p&gt;So why would you want to use Cloudsmith with your GitLab CI/CD pipelines? Well, there are a few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Universality&lt;/strong&gt; – Cloudsmith supports over 20 package formats, so whatever artifacts your project produces, you can find a home for them in a private Cloudsmith repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control&lt;/strong&gt; – Cloudsmith private repositories provide extensive security and access controls that have been designed to accommodate workflows such as internal development, deployment, or even distribution to external customers. The fine-grained permissions system enables you to craft bespoke access control, and lock down or open up your repository as much or as little as you need.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation and Integration&lt;/strong&gt; – Thanks to the Cloudsmith CLI and also native support for format-specific package managers, a Cloudsmith private repository can fit in seamlessly with other tools in your development or distribution processes. It provides you with a single source of truth across your packages and dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance and Reliability&lt;/strong&gt; – As a cloud-native platform, Cloudsmith manages the availability and performance of your repositories. You don’t need to worry about managing a fleet of servers, containers or virtual machines. Global performance, backed by our ultra-fast CDN and multi-region infrastructure, ensures that your packages are delivered worldwide. Reliably, quickly and securely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s not all. With features like custom domain support, configurable upstream proxying and caching, configurable edge caching rules, download logs/statistics and more, Cloudsmith aims to provide the best solution for all your package management needs. It really is the ideal tool in a high-velocity CI/CD workflow – precisely the type of workflow that GitLab CI/CD is intended to enable you to create.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s work through an example.
&lt;/h2&gt;

&lt;p&gt;OK, so let’s get started with a worked example. The very first thing you will need is some source code in a GitLab repository that you want to build.  For this example, we will build Ruby source code into a Ruby Gem package. &lt;/p&gt;

&lt;p&gt;Our project on GitLab has the following structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6a991y8wwflb9p2vovra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6a991y8wwflb9p2vovra.png" alt="Project Structure" width="488" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The important thing here, as previously mentioned, is the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. This is where we define the GitLab CI/CD pipeline that will run when we push a change to the GitLab repo.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the .gitlab-ci.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ruby:2.5"&lt;/span&gt;

&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;RUBYGEMS_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://ruby.cloudsmith.io/demo/examples-repo"&lt;/span&gt;


&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;push&lt;/span&gt;

&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gem build cloudsmith-ruby-example.gemspec&lt;/span&gt;
  &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cloudsmith-ruby-example-*&lt;/span&gt;

&lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;push&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mkdir -p ~/.gem&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mv $CLOUDSMITH_API_KEY ~/.gem/credentials &amp;amp;&amp;amp; chmod 0600 ~/.gem/credentials&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gem push cloudsmith-ruby-example-1.0.1.gem&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing we have specified is the image that will be used when creating the Docker container for the GitLab runner that will execute this pipeline, in this case, Ruby 2.5. &lt;/p&gt;

&lt;p&gt;Next, we add an environment variable, &lt;code&gt;RUBYGEMS_HOST&lt;/code&gt;. This is where we define the URL of the Cloudsmith package repository that we will push the result of the build to.&lt;/p&gt;

&lt;p&gt;We then define two stages in this pipeline, the build stage and the push stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Build Stage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gem build cloudsmith-ruby-example.gemspec&lt;/span&gt;
  &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cloudsmith-ruby-example-*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This stage is pretty straightforward. We just run the &lt;code&gt;gem build&lt;/code&gt; command to build our Ruby source as defined in our cloudsmith-ruby-example.gemspec file. Following this, we have defined an &lt;code&gt;artifacts&lt;/code&gt; section, as we need to temporarily store the output of the build so that the push stage can use it next.&lt;/p&gt;

&lt;p&gt;This is because different stages in a pipeline will run on a new runner instance, so a subsequent stage wouldn’t have access to the package built in a previous stage. You could also perform all jobs required to build and push within the same stage, and therefore on the same runner, to avoid this; but for more complex pipelines you’ll likely have many stages.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Push Stage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;push&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mkdir -p ~/.gem&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mv $CLOUDSMITH_API_KEY ~/.gem/credentials &amp;amp;&amp;amp; chmod 0600 ~/.gem/credentials&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gem push cloudsmith-ruby-example-1.0.1.gem&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The push stage is a little bit more complex. This is because we are going to push our package to a private Cloudsmith repo, which requires authentication. The Ruby package manager, gem, allows you to store your authentication credentials (in our case, the Cloudsmith API Key) in a credentials file located at &lt;code&gt;~/.gem/credentials&lt;/code&gt; – but of course, we don’t want to check this credentials file into our GitLab repository along with our source!&lt;/p&gt;
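
&lt;p&gt;For reference, the credentials file that gem expects is a small YAML document. A minimal sketch, assuming the standard RubyGems credentials format (the key value below is a placeholder for your actual Cloudsmith API Key):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;---
:rubygems_api_key: YOUR-CLOUDSMITH-API-KEY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;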

&lt;p&gt;So we can make use of GitLab's ability to add variables to the source code repository.  We can create a file variable called &lt;code&gt;CLOUDSMITH_API_KEY&lt;/code&gt;, and then as part of the push step, we add a job to move this variable into the required location before we run the &lt;code&gt;gem push&lt;/code&gt; command. &lt;/p&gt;

&lt;p&gt;You add this file variable in your GitLab repository settings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7bdxe96t6nwz30s2ar6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7bdxe96t6nwz30s2ar6w.png" alt="File Variable" width="756" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, when our push step runs, the &lt;code&gt;mkdir&lt;/code&gt; and &lt;code&gt;mv&lt;/code&gt; jobs will create the required &lt;code&gt;~/.gem/credentials&lt;/code&gt; file (with our Cloudsmith API Key in it), all without exposing our API Key in any logs on the runner instance or checking it in with our source code.&lt;/p&gt;

&lt;p&gt;The final job in our push step simply runs &lt;code&gt;gem push&lt;/code&gt; to upload the package we have built to our private Cloudsmith repo, as Cloudsmith repositories offer full native support for the &lt;code&gt;gem&lt;/code&gt; package manager.&lt;/p&gt;

&lt;h3&gt;
  
  
  Triggering the pipeline
&lt;/h3&gt;

&lt;p&gt;All that is left to do is make a change to our source, and then commit and push the change to our GitLab repo. Let’s see what happens when we do that.&lt;/p&gt;

&lt;p&gt;We make a change, commit it and push:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fle8tfqbt2gww265aqzu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fle8tfqbt2gww265aqzu1.png" alt="Commit Change" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GitLab CI/CD pipeline starts, and the build stage executes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcl6drzaxc59ms1hbirtd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcl6drzaxc59ms1hbirtd.png" alt="Build Stage Executing" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The build stage runs &lt;code&gt;gem build&lt;/code&gt; to build the package and then stores it in GitLab’s temporary artifact storage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frdhf7e0b3lpt1xd30gcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frdhf7e0b3lpt1xd30gcx.png" alt="Build Stage Output" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Push stage then starts to execute once the Build stage completes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyti58upnrtn7nlwngxjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyti58upnrtn7nlwngxjy.png" alt="Push Stage Executing" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The push stage downloads the package from the temporary artifact storage, creates the required &lt;code&gt;~/.gem/credentials&lt;/code&gt; file, and runs &lt;code&gt;gem push&lt;/code&gt;, which uploads the package to our private Cloudsmith repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyti58upnrtn7nlwngxjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyti58upnrtn7nlwngxjy.png" alt="Push Stage Output" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s it, the pipeline is now complete and reports as "Passed":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkdmzn1g1na1o212vcmru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkdmzn1g1na1o212vcmru.png" alt="Pipeline Complete" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we now log in to Cloudsmith and check our "examples-repo" repository, we can see that the Ruby gem we just built is present:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc8phh4unr2xq3tdicva6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc8phh4unr2xq3tdicva6.png" alt="Package in Repo" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Integrating a private Cloudsmith repository with a GitLab CI/CD pipeline is easy, whether you are building a package that Cloudsmith provides native tooling support for (like Ruby), or you are using the Cloudsmith CLI to push a raw file or other binary artifacts. You can add your own private Cloudsmith repository with just a couple of lines of configuration, and it just works.&lt;/p&gt;

&lt;p&gt;Package management can be complex and difficult, and doubly so if you are trying to manage your own package management solution and infrastructure at the same time.  Try it for yourself, and see the productivity and efficiency gains that you can get from a centralized, hosted, secure package management service.&lt;/p&gt;

</description>
      <category>cloudsmith</category>
      <category>integrations</category>
      <category>gitlab</category>
    </item>
    <item>
      <title>Integrating a Cloudsmith Repository and a Buildkite pipeline</title>
      <dc:creator>Dan McKinney</dc:creator>
      <pubDate>Tue, 01 Sep 2020 11:45:02 +0000</pubDate>
      <link>https://forem.com/cloudsmith/integrating-a-cloudsmith-repository-and-a-buildkite-pipeline-58ke</link>
      <guid>https://forem.com/cloudsmith/integrating-a-cloudsmith-repository-and-a-buildkite-pipeline-58ke</guid>
      <description>&lt;h2&gt;
  
  
  Cloudsmith and Buildkite
&lt;/h2&gt;

&lt;p&gt;At Cloudsmith, you will often hear us refer to our mantra of “Automate Everything”. It's a quest that we never deviate from, and we believe that anything that can be automated, should be automated.&lt;/p&gt;

&lt;p&gt;With that in mind, we would like to show you how simple it is to integrate a Cloudsmith repository with your Buildkite pipeline, and automate the pushing of your build artifacts into your own private repository for further CI/CD steps or even as a source for your global distribution needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Buildkite?
&lt;/h2&gt;

&lt;p&gt;Buildkite is a platform for running fast, secure, and scalable continuous integration pipelines on your own infrastructure. That means you can use Buildkite to orchestrate and manage your own fleet of build hosts, and these can be anything from containers, to cloud instances, to bare metal servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why would I want to integrate Cloudsmith with Buildkite?
&lt;/h2&gt;

&lt;p&gt;Well, in short, a continuous integration pipeline is going to have an output, and you are going to need and want to put that output somewhere that is secure, controlled and integrates with all the other tooling that will form the next parts of your DevOps workflow - whether that is continuous deployment to production, distribution to an end-user or customer, or even consumption by another internal team as part of their development process.&lt;/p&gt;

&lt;p&gt;This is where Cloudsmith fits in, and this is another phrase you might hear us use quite a bit - Cloudsmith offers a central source of truth for your packages and build artifacts. We provide a global platform that gives you the performance, scalability, security and visibility that you need to control and manage your software assets.&lt;/p&gt;
&lt;h2&gt;
  
  
  Using Buildkite and Cloudsmith – An Example:
&lt;/h2&gt;

&lt;p&gt;So, let’s assume that you’re new to Buildkite. How do you get started?&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1 – Install the buildkite-agent
&lt;/h3&gt;

&lt;p&gt;The first thing that you need to do is install the buildkite-agent so that you have a machine (remember, this can be a container, a VM, or even a real server) that will act as your build host. Buildkite provides install instructions for all major platforms and operating systems:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ficmpmlukltt1gpbuaqvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ficmpmlukltt1gpbuaqvm.png" alt="Buildkite agent supported OSes" width="783" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have installed the agent, you will see a new build host in the Buildkite UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj4sr6zpkgjs8dtanflky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj4sr6zpkgjs8dtanflky.png" alt="Buildkite agent host" width="767" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this example, I have installed the buildkite-agent on a Debian instance.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2(a) – Create a pipeline
&lt;/h3&gt;

&lt;p&gt;The next thing you need to do is create your pipeline. A Buildkite pipeline is a series of steps that your build host(s) will execute in order to build your assets/artifacts.  For this example, we will create a pipeline that will compile a simple C source program, package it into a deb package and then push that deb package to a Cloudsmith repository.&lt;/p&gt;

&lt;p&gt;To get started, you give your pipeline a name, and then you specify a source repository for the pipeline. In this case, it is a GitHub repository (although Buildkite will also work with many other platforms as a source):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff7vb3q363h6ufc6siy0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff7vb3q363h6ufc6siy0b.png" alt="New Buildkite Pipeline" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If using a private GitHub repository, you also need to remember to add an SSH key to your GitHub profile so that the host you have installed the buildkite-agent on can access the source repository.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2(b) – Add your pipeline steps.
&lt;/h3&gt;

&lt;p&gt;Pipeline steps are where you specify the commands or scripts that you need to run in order to build your source / project / application. In Buildkite, you can define these steps via the Buildkite UI or as a &lt;code&gt;pipeline.yaml&lt;/code&gt; file. To keep things simple, we will define two steps via the Buildkite UI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PreBuild Step&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this step, we install the tools we need to build our source and package it into a deb package. We also install the Cloudsmith CLI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fijs7hfe5ierellt5w84l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fijs7hfe5ierellt5w84l.png" alt="PreBuild Step" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BuildAndPushPackage Step&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this step, we use &lt;code&gt;make&lt;/code&gt; to compile our source and then we use &lt;code&gt;fpm&lt;/code&gt; to build the deb package. Finally, we use the &lt;code&gt;cloudsmith push&lt;/code&gt; command to push the package to our Cloudsmith repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsxw2na9z4mrg3d1rzqt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsxw2na9z4mrg3d1rzqt3.png" alt="BuildAndPushPackage Step" width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The equivalent &lt;code&gt;pipeline.yaml&lt;/code&gt; file for these steps would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PreBuild"&lt;/span&gt;
  &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sudo apt update&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sudo apt-get install ruby ruby-dev rubygems build-essential python-pip -y&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sudo gem install --no-document fpm&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pip install cloudsmith-cli&lt;/span&gt; 

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BuildAndPushPackage"&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;make&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;fpm -f -s dir -t deb -v 1.0.1 -n cloudsmith-buildkite-test .&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cloudsmith push deb demo/buildkite-demo/debian/buster cloudsmith-buildkite-test_1.0.1_amd64.deb&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more complex build pipelines, you’ll likely have a lot more steps, and the advantage of using a &lt;code&gt;pipeline.yaml&lt;/code&gt; file is that you can version it and check it in right alongside your source.&lt;/p&gt;

&lt;p&gt;One thing to note is that pipeline steps in Buildkite are stateless. As a result, if you have a fleet of agents then each step is not guaranteed to run on the same agent. This means that if a subsequent step needs or relies on the output from a previous step, that output will need to be stored and then retrieved. Buildkite provides temporary artifact storage that you can use for this purpose (see &lt;a href="https://buildkite.com/docs/pipelines/artifacts" rel="noopener noreferrer"&gt;here&lt;/a&gt; for more details), but to keep things simple in this example we performed the build and push in a single step.&lt;/p&gt;
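
&lt;p&gt;If you did split the build and push into separate steps, a minimal sketch of the hand-off might look like the following (the file name is the one produced by the fpm command above; &lt;code&gt;buildkite-agent artifact&lt;/code&gt; is Buildkite’s documented mechanism for moving files through its artifact storage):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# In the build step: upload the built package to artifact storage
buildkite-agent artifact upload cloudsmith-buildkite-test_1.0.1_amd64.deb

# In a later push step: download it again before pushing to Cloudsmith
buildkite-agent artifact download cloudsmith-buildkite-test_1.0.1_amd64.deb .
cloudsmith push deb demo/buildkite-demo/debian/buster cloudsmith-buildkite-test_1.0.1_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;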

&lt;h3&gt;
  
  
  Step 2(c) – Add a GitHub Webhook.
&lt;/h3&gt;

&lt;p&gt;We want this pipeline to start when we push a new commit of our source to our GitHub repository, and for that, we can configure a GitHub Webhook. Conveniently, Buildkite provides a webhook URL that we just need to copy and paste into our GitHub webhook configuration, and then select the event type we wish to fire this webhook on (in this case, all pushes):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkmdnbibs411yngq1i4tr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkmdnbibs411yngq1i4tr.png" alt="GitHub Webhook" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it, our pipeline is now built and will be ready to run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 – Environment hooks
&lt;/h3&gt;

&lt;p&gt;Buildkite supports several types of hooks that can run on a build host during a pipeline run, and the final piece of configuration we need is an environment hook to set up our environment.&lt;/p&gt;

&lt;p&gt;As we are using the Cloudsmith CLI to push the package to our Cloudsmith repository, we need to set up our Cloudsmith API Key as an environment variable. We do this via a hook because we don’t want to store a sensitive secret like an API Key in our pipeline source, or within a build step as a plain-text environment variable where it could be exposed in logs. There are alternative methods of managing secrets in Buildkite; see &lt;a href="https://buildkite.com/docs/pipelines/secrets" rel="noopener noreferrer"&gt;here&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Our environment hook is pretty simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;set -euo pipefail&lt;/span&gt;

&lt;span class="s"&gt;if [[ "$BUILDKITE_PIPELINE_SLUG" == "cloudsmith-buildkite-demo" ]]; then&lt;/span&gt;
    &lt;span class="s"&gt;export CLOUDSMITH_API_KEY="abcdefghijklmnop1234567890"&lt;/span&gt;
    &lt;span class="s"&gt;export PATH="$HOME/.local/bin:$PATH"&lt;/span&gt;

&lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It does two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sets the &lt;code&gt;CLOUDSMITH_API_KEY&lt;/code&gt; environment variable&lt;/li&gt;
&lt;li&gt;Adds the location where the Cloudsmith CLI is installed to our &lt;code&gt;PATH&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, it only runs when executed by the pipeline with the slug “cloudsmith-buildkite-demo”.&lt;/p&gt;

&lt;p&gt;OK, at this stage our pipeline is built and ready, and the environment on our build host will be set up when the pipeline executes. It’s time to test it out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4 – Push a change to our source repository.
&lt;/h3&gt;

&lt;p&gt;Now, if we make a change to our source, commit it and then push the change to our GitHub repository, our build will start:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh5b63k89qwrj2n8wjjsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh5b63k89qwrj2n8wjjsk.png" alt="PreBuild Step Running" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And once the BuildAndPushPackage step has completed, we can view the output in the logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftzhuhutvfiym6axaujrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftzhuhutvfiym6axaujrv.png" alt="BuildAndPushPackage Step Complete" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that our deb package was successfully built and pushed to our Cloudsmith repository.&lt;/p&gt;

&lt;p&gt;If we now go to the repository on the Cloudsmith website, we can see the built package:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F16skjtg8ss1j4gfdb8n0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F16skjtg8ss1j4gfdb8n0.png" alt="Built Package in Cloudsmith repo" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  To sum up
&lt;/h2&gt;

&lt;p&gt;Buildkite is a very powerful and flexible CI management platform, and integrating your Cloudsmith repositories with your Buildkite pipelines is as simple as installing the Cloudsmith CLI and then using the Cloudsmith push command in your build steps. We would really encourage you to give it a go yourself, our trial is fully-featured and we are here to help you get started. No question is too big or too small. &lt;/p&gt;

&lt;p&gt;Happy Build(kite)ing! &lt;/p&gt;

</description>
      <category>cloudsmith</category>
      <category>integrations</category>
      <category>buildkite</category>
    </item>
    <item>
      <title>Cloudsmith Now Supports Conan!</title>
      <dc:creator>Kyle Harrison</dc:creator>
      <pubDate>Wed, 26 Aug 2020 10:40:14 +0000</pubDate>
      <link>https://forem.com/cloudsmith/cloudsmith-now-supports-conan-5bb4</link>
      <guid>https://forem.com/cloudsmith/cloudsmith-now-supports-conan-5bb4</guid>
      <description>&lt;p&gt;We’re delighted to announce that Cloudsmith now supports Conan! &lt;/p&gt;

&lt;p&gt;As most of you know, Cloudsmith is universal. &lt;strong&gt;It is our aim to support all the languages and package formats our customers and prospective customers use&lt;/strong&gt;. We think any organization benefits from being able to store, secure, manage and distribute ALL of their software assets in a single consistent manner.&lt;/p&gt;

&lt;p&gt;That doesn’t necessarily mean multi-format repositories, but rather every member of the team knowing where to find the packages they need and being able to integrate them into build and deployment processes in the same way - no matter what format.&lt;/p&gt;

&lt;p&gt;Of course, there are a lot of formats and languages out there. So we never stop working to ensure that we cover as many as possible. We listen and respond to our customers, all with the intention of building the only truly universal cloud-native package management platform. &lt;/p&gt;

&lt;p&gt;Hence our support for Conan. Now on with the detail...&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing Conan
&lt;/h2&gt;

&lt;p&gt;Conan is an open-source package manager for C/C++, covering everything from the client to the server implementation, and even the documentation.&lt;/p&gt;

&lt;p&gt;It is actively developed on GitHub by an awesome community of contributors and a team of engineers working full time on the project. C++ and C continue to hold steady spots at 9th and 11th place in the "most popular programming, scripting and markup languages" category of the &lt;a href="https://insights.stackoverflow.com/survey/2019#most-popular-technologies" rel="noopener noreferrer"&gt;2019 Stack Overflow developer survey&lt;/a&gt;. Additionally, they hold 6th and 9th place in the &lt;a href="https://www.businessinsider.com/most-popular-programming-languages-github-2019-11?r=US&amp;amp;IR=T" rel="noopener noreferrer"&gt;most popular programming languages on GitHub for 2019&lt;/a&gt;, demonstrating the C/C++ community's longevity.&lt;/p&gt;

&lt;p&gt;Conan is an excellent choice as a package manager. It provides the flexibility developers crave in a developer tool. It uses Python-based package recipes for extensibility, customization and integration with other systems. &lt;/p&gt;

&lt;p&gt;It also works on a multitude of systems, including Windows, Linux (Ubuntu, Debian, RedHat, ArchLinux, Raspbian), OSX, FreeBSD, and SunOS. It can target any existing platform, from bare metal to desktop, mobile, embedded and servers, supports cross-building, and works with a range of build systems (Visual Studio MSBuild, CMake, Makefiles, SCons, etc.), with the extensibility to use any build system. When combined, these aspects of Conan make it an excellent choice as a multi-platform package manager.&lt;/p&gt;

&lt;p&gt;Using Conan with Cloudsmith allows development teams to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Develop packages internally and share them privately with other teams.&lt;/li&gt;
&lt;li&gt;Distribute and deploy your packages in a pipeline at your organization.&lt;/li&gt;
&lt;li&gt;Distribute packages as commercial software.&lt;/li&gt;
&lt;li&gt;Make modifications to public packages, choosing how you wish to republish (open-source, public, private).&lt;/li&gt;
&lt;li&gt;Capture the exact state of your dependencies at a particular version, release, user, and channel.&lt;/li&gt;
&lt;li&gt;Control (allow list/deny list) at an organization, repository, and package level.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, all the benefits of using Cloudsmith that are already enjoyed by development teams all over the world today are now available for Conan.&lt;/p&gt;

&lt;p&gt;See also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.conan.io/en/latest/introduction.html" rel="noopener noreferrer"&gt;Conan documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Getting started with Cloudsmith and Conan couldn't be simpler. First, you'll need a Cloudsmith account and a repository to which you can upload your packages. If you need to install Conan you can find &lt;a href="https://docs.conan.io/en/latest/installation.html#" rel="noopener noreferrer"&gt;instructions on the Conan website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Cloudsmith should work with all supported versions of Conan, but we recommend Version 1.25.2 or later for the best experience. You can check your local version like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ conan --version

Conan version 1.25.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating a Conan Package
&lt;/h2&gt;

&lt;p&gt;For the purpose of this demonstration, we will create a Conan package containing a single function that prints "Hello World" using the official example. Running the following example will create a new package called "hello" at version "0.0.1" without the optional user/channel. &lt;/p&gt;

&lt;p&gt;The Conan create command is equivalent to running export, install, and test.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir mypkg &amp;amp;&amp;amp; cd mypkg

$ conan new hello/0.0.1 -t

$ conan create .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The conanfile.py generated as part of the above command will be used by Conan to build packages; however, it will also be used by Cloudsmith to retrieve metadata related to a package, such as the package name, version, license, etc., which can be used for advanced filtering via the UI and the Cloudsmith CLI.&lt;/p&gt;

&lt;p&gt;If you wish to learn more about how Conan creates the &lt;a href="https://docs.conan.io/en/latest/creating_packages/getting_started.html" rel="noopener noreferrer"&gt;Package Recipe and Test Packages&lt;/a&gt;, the official documentation provides a detailed breakdown for each command. You're now ready to upload your package to Cloudsmith.&lt;/p&gt;

&lt;h2&gt;
  
  
  Uploading your Conan Package
&lt;/h2&gt;

&lt;p&gt;First, you need to add a remote for a specific namespace/repository to the list of Conan remotes. The example below uses &lt;code&gt;cloudsmith&lt;/code&gt; as the namespace, but this could be your own namespace or that of an organization in which you are a member:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ conan remote add cloudsmith-testing-public https://conan.cloudsmith.io/cloudsmith/testing-public/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
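
&lt;p&gt;You can confirm the remote was registered with &lt;code&gt;conan remote list&lt;/code&gt;, which prints each configured remote alongside its URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ conan remote list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;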



&lt;p&gt;Once the remote has been added, a user can then be configured, using your Cloudsmith username and password in place of the placeholder values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ conan user -p PASSWORD -r cloudsmith-testing-public USERNAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
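
&lt;p&gt;As a side note for CI environments: rather than putting credentials on the command line, Conan 1.x can read its standard &lt;code&gt;CONAN_LOGIN_USERNAME&lt;/code&gt; and &lt;code&gt;CONAN_PASSWORD&lt;/code&gt; environment variables whenever authentication is required. A sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export CONAN_LOGIN_USERNAME=USERNAME
$ export CONAN_PASSWORD=PASSWORD

# Later commands that need to authenticate (such as conan upload)
# can now do so non-interactively.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;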



&lt;p&gt;Once your remote and user have been configured within Conan, your token will be cached in the client until it expires or becomes invalid. You're now ready to upload your package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ conan upload hello/0.0.1 --all -r cloudsmith-testing-public
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once uploaded, you can view your package in Cloudsmith.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl08ia8wox907acyuhyic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl08ia8wox907acyuhyic.png" alt="Conan Package in Cloudsmith repo" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;
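
&lt;p&gt;To round the trip off, any machine with the same remote configured can now pull the package down. A minimal sketch; the trailing &lt;code&gt;@&lt;/code&gt; tells recent Conan 1.x clients that the reference has no user/channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ conan install hello/0.0.1@ -r cloudsmith-testing-public
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;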

&lt;p&gt;It's that simple to get started with Conan on Cloudsmith.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Conclusion
&lt;/h2&gt;

&lt;p&gt;Cloudsmith provides fully featured Conan package repositories on all plans, flexible enough for use whether you’re hosting public packages for a public or open-source project, or private packages for your company’s internal needs. We're extremely proud to be able to support the C/C++ ecosystem with this tooling.&lt;/p&gt;

&lt;p&gt;You can find further context-specific information, including detailed setup and integration instructions, inside each Cloudsmith repository.&lt;/p&gt;

&lt;p&gt;Why wait? Get your public and private Conan package repository hosting at Cloudsmith now.&lt;/p&gt;

</description>
      <category>cloudsmith</category>
      <category>formats</category>
      <category>conan</category>
    </item>
    <item>
      <title>Deploy packages from a Cloudsmith repository with Ansible</title>
      <dc:creator>Dan McKinney</dc:creator>
      <pubDate>Tue, 18 Aug 2020 15:31:58 +0000</pubDate>
      <link>https://forem.com/cloudsmith/deploy-packages-from-a-cloudsmith-repository-with-ansible-3lhg</link>
      <guid>https://forem.com/cloudsmith/deploy-packages-from-a-cloudsmith-repository-with-ansible-3lhg</guid>
      <description>&lt;h2&gt;
  
  
  What is Ansible?
&lt;/h2&gt;

&lt;p&gt;Ansible is an open-source continuous configuration automation (CCA) tool. You can use it to automate the management of host system configuration: for example, installing and configuring applications, services, and security policies, or performing a wide variety of other administration and configuration tasks.&lt;/p&gt;
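
&lt;p&gt;To give a flavour of the model, the classic first command is an ad-hoc ping of every host in an inventory (assuming you already have an inventory file; we cover one below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible all -i inventory -m ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;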

&lt;p&gt;You can also use Ansible with a provisioning tool (such as the excellent Terraform, from the awesome folks over at HashiCorp) to automate the entire build and deployment of your infrastructure, taking steps towards true Infrastructure-as-Code DevOps practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ansible And Cloudsmith
&lt;/h2&gt;

&lt;p&gt;Ansible is also a perfect partner for the artifacts and assets stored in your private Cloudsmith repositories. As your Cloudsmith repositories can be your single source of truth, with all the access and permission control that they provide, they add another layer of security to your infrastructure build and management. By controlling the sources of the packages used in your Ansible configuration automation, you insulate yourself further from any upstream changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: The Ansible Playbook
&lt;/h2&gt;

&lt;p&gt;An Ansible playbook is where you define the series of tasks that you wish to perform to ensure the configuration of your hosts is in the desired state. As an example, we have created an Ansible playbook that will install and configure a Cloudsmith repository for Debian packages, and then install a package from that repository.&lt;/p&gt;

&lt;p&gt;Here is our example Ansible playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: DebianHosts
  remote_user: dmckinney

  tasks:
    - name: add Cloudsmith Repository GPG key
      apt_key:
        url: https://dl.cloudsmith.io/QOH0JBRgKQx5lE7S/demo/examples-repo/cfg/gpg/gpg.7D4D4CE49534374A.key
        state: present
      become: yes

    - name: add Cloudsmith Repository
      apt_repository:
        repo: 'deb https://dl.cloudsmith.io/QOH0JBRgKQx5lE7S/demo/examples-repo/deb/debian buster main'
        state: present
        update_cache: yes
      become: yes

    - name: install Cloudsmith Example package
      apt:
        name: cloudsmith-debian-example
        state: present
        update_cache: yes
      become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down what this playbook will do.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hosts:
&lt;/h3&gt;

&lt;p&gt;This is where we specify which of our hosts these tasks will run against. The hosts and their addresses are defined in a separate Ansible inventory file; for this example, our inventory contains just a single host.&lt;/p&gt;

&lt;p&gt;These hosts can be bare-metal servers, cloud instances, virtual machines, and so on. You can define different groups of hosts within a playbook for different tasks, and in this way a single playbook can manage one machine or several hundred, even thousands, of hosts.&lt;/p&gt;
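
&lt;p&gt;For illustration, a minimal INI-style inventory defining the &lt;code&gt;DebianHosts&lt;/code&gt; group might look like this (the hostname and address are placeholders, not the values we actually used):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cloudsmith-demo-inventory
[DebianHosts]
debian-demo ansible_host=192.0.2.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;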

&lt;h3&gt;
  
  
  Remote User:
&lt;/h3&gt;

&lt;p&gt;This is where you specify the user on the host machine that the tasks will run as.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tasks:
&lt;/h3&gt;

&lt;p&gt;This is where we specify the tasks themselves.&lt;/p&gt;

&lt;p&gt;The first task is the “add Cloudsmith Repository GPG key” task. This task uses the &lt;code&gt;apt_key&lt;/code&gt; Ansible module to retrieve the GPG key from our Cloudsmith repository and install it on our host system. We specify the URL for the GPG key and, as this is a private Cloudsmith repository, the URL contains an embedded entitlement token to authenticate for read-only access:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: add Cloudsmith Repository GPG key
      apt_key:
        url: https://dl.cloudsmith.io/QOH0JBRgKQx5lE7S/demo/examples-repo/cfg/gpg/gpg.7D4D4CE49534374A.key
        state: present
      become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also add &lt;code&gt;state: present&lt;/code&gt;, which means Ansible will install the key if it is not already installed, and do nothing if it is. This is important, as it makes our task idempotent: no matter how many times we run this task, the outcome, and therefore the configuration, will always be the same. We will see &lt;code&gt;state: present&lt;/code&gt; in all of the tasks in this playbook. Finally, as all apt operations on our host system require sudo permissions, we add &lt;code&gt;become: yes&lt;/code&gt;, which tells Ansible to run the task with elevated permissions. Again, we will see this on all tasks in this playbook.&lt;/p&gt;

&lt;p&gt;The second task is the “add Cloudsmith Repository” task. This task uses the &lt;code&gt;apt_repository&lt;/code&gt; Ansible module to install our Cloudsmith repository on our host system. We specify the URL for the repository, again containing our embedded entitlement token for authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: add Cloudsmith Repository
      apt_repository:
        repo: 'deb https://dl.cloudsmith.io/QOH0JBRgKQx5lE7S/demo/examples-repo/deb/debian buster main'
        state: present
        update_cache: yes
      become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final task is the “install Cloudsmith Example package” task. This task uses the &lt;code&gt;apt&lt;/code&gt; Ansible module to install a deb package; we just need to specify the name of the package we wish to install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: install Cloudsmith Example package
      apt:
        name: cloudsmith-debian-example
        state: present
        update_cache: yes
      become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it, that’s all we need in this playbook to get this repository set up and our package installed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the Playbook
&lt;/h2&gt;

&lt;p&gt;To run a playbook, we use the &lt;code&gt;ansible-playbook&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-playbook -i cloudsmith-demo-inventory cloudsmith-demo-playbook -K
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-K&lt;/code&gt; flag tells Ansible to ask for our sudo password. You can also store secrets and passwords encrypted using Ansible Vault, which means you could check the encrypted vault file into your version control system alongside the playbooks themselves. To keep things simple for this example, though, we will just have Ansible ask us for the password.&lt;/p&gt;
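
&lt;p&gt;For completeness, the Vault route would look something like this (the file name &lt;code&gt;secrets.yml&lt;/code&gt; is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an encrypted secrets file
$ ansible-vault create secrets.yml

# Run the playbook, prompting for the vault password instead
$ ansible-playbook -i cloudsmith-demo-inventory cloudsmith-demo-playbook --ask-vault-pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;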

&lt;p&gt;When we ran this playbook, the output was as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fixv9ru0aaphxwq65kjpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fixv9ru0aaphxwq65kjpw.png" alt="Ansible Playbook Run" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that the three tasks ran successfully and report that they made a total of three changes. If we now check which packages are installed on our Debian host, we can see our cloudsmith-debian-example package:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa8iwd0izdgticb94b4b1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa8iwd0izdgticb94b4b1.png" alt="Package Installed" width="693" height="90"&gt;&lt;/a&gt;&lt;/p&gt;
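
&lt;p&gt;If you want to reproduce that check yourself, standard Debian tooling such as &lt;code&gt;dpkg&lt;/code&gt; is one way to do it (the exact command used in the screenshot may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dpkg -l | grep cloudsmith-debian-example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;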

&lt;p&gt;And finally, if we were to run this playbook a second time (or any subsequent number of times), it reports that all tasks are OK, no changes need to be made:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk0iaq0k69c9gq8y8umtq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk0iaq0k69c9gq8y8umtq.png" alt="Ansible Package Rerun" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  In Closing
&lt;/h2&gt;

&lt;p&gt;While this is a very simple example, we hope that it demonstrates the possibilities and the power of using a configuration automation tool like Ansible together with a private Cloudsmith repository. It gives you the ability to automate not only common system configuration tasks, but also the deployment of your own packages, your own custom configurations, and even the software build of your entire infrastructure. You can version these builds in your version control system, track changes, and keep an audit trail of truth all the way back to your packages, stored securely in your own private repositories.&lt;/p&gt;

&lt;p&gt;We hope you have very happy continuous configuration automation!&lt;/p&gt;

</description>
      <category>cloudsmith</category>
      <category>integrations</category>
      <category>ansible</category>
    </item>
  </channel>
</rss>
