<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Michael Bogan</title>
    <description>The latest articles on Forem by Michael Bogan (@mbogan).</description>
    <link>https://forem.com/mbogan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F179748%2F3068c723-39a7-4439-96b5-9cc6a58ccd96.jpg</url>
      <title>Forem: Michael Bogan</title>
      <link>https://forem.com/mbogan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mbogan"/>
    <language>en</language>
    <item>
      <title>When AI Agents Get It Wrong: The Accountability Crisis in Multi-Agent Systems</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 18 Mar 2026 14:57:43 +0000</pubDate>
      <link>https://forem.com/mbogan/when-ai-agents-get-it-wrong-the-accountability-crisis-in-multi-agent-systems-2l02</link>
      <guid>https://forem.com/mbogan/when-ai-agents-get-it-wrong-the-accountability-crisis-in-multi-agent-systems-2l02</guid>
      <description>&lt;p&gt;In the world of security and DevOps, AI agents are being pushed from demos into production quickly. They triage security alerts, coordinate incident response, provision infrastructure, and decide which remediation playbooks to run. &lt;/p&gt;

&lt;p&gt;When it all works, everyone is happy. It’s a force multiplier.  But when an AI agent fails … who do you blame? &lt;/p&gt;

&lt;p&gt;It can be hard to tell who made the call, why it happened, or what evidence exists to explain it. This is even more difficult in multi-agent systems, where responsibility is distributed across models, tools, orchestrators, and human operators. This distribution is powerful, but it also creates an accountability gap that most teams aren’t prepared for. &lt;/p&gt;

&lt;p&gt;This is not a theoretical issue. Regulators and standards bodies are converging on real expectations for governance, traceability, and auditability. The &lt;a href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10" rel="noopener noreferrer"&gt;NIST AI Risk Management&lt;/a&gt; Framework's GOVERN and MAP functions explicitly call for documented roles, risk ownership, and decision provenance for AI systems. The EU &lt;a href="https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/" rel="noopener noreferrer"&gt;AI Act&lt;/a&gt; goes further: systems that affect safety or critical infrastructure are classified as high-risk, triggering mandatory requirements for logging, human oversight, and traceability.&lt;/p&gt;

&lt;p&gt;These signals mean one thing: accountability can no longer be optional or implicit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accountability must be designed into how agentic systems make decisions and how those decisions are recorded and governed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The point of this post is practical, not academic. If you’re building multi-agent systems in security, DevOps, or observability, you need a clear answer to three questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What can go wrong?&lt;/li&gt;
&lt;li&gt;How will you know?&lt;/li&gt;
&lt;li&gt;Who is accountable when it does?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The good news is that you can make these systems trustworthy without slowing them down. The key is to treat accountability as a product feature, not a compliance afterthought. And you probably already have the tools (hint: it’s your analytics platform) to do just that.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Accountability Gap
&lt;/h2&gt;

&lt;p&gt;Single-agent systems are already complex, but they at least have a central decision point. Multi-agent systems take it a step further, distributing decisions across multiple components. &lt;/p&gt;

&lt;p&gt;One agent classifies the alert, another correlates with telemetry, a third chooses the response. &lt;/p&gt;

&lt;p&gt;If the output is wrong, there may be no obvious root cause. Was it the classification prompt? The tool that provided stale data? The orchestrator that weighted the wrong agent? Or a handoff that silently dropped a warning? When your system looks like a team, it can also fail like one, with responsibility spread across roles.&lt;/p&gt;

&lt;p&gt;This is where the accountability gap shows up in real teams. Ask a group of engineers, security analysts, and platform owners who is responsible when an agent misses an incident. You’ll get a mix of answers: the model team, the product team, the on-call team, or the vendor. &lt;/p&gt;

&lt;p&gt;At the end of the day, you must be able to assign accountability in order to fix problems and improve the system. This means moving from "the algorithm did it" to named responsibility and documented evidence. &lt;/p&gt;

&lt;p&gt;In other words, if an agent makes a call that leads to a bad outcome, &lt;strong&gt;there must be an identifiable person, team, or role that can explain the decision, the data that informed it, and the controls that were in place&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;This shift is already happening inside compliance programs and audit expectations, and it’s coming to product and engineering teams next. &lt;/p&gt;

&lt;p&gt;One solution is using your &lt;em&gt;observability&lt;/em&gt; and &lt;em&gt;analytics&lt;/em&gt; program as an &lt;em&gt;accountability&lt;/em&gt; program. When agent decisions are logged with the same rigor as infrastructure events, you can connect outcomes to evidence. You know what was decided and why. That makes accountability real rather than rhetorical.&lt;/p&gt;

&lt;p&gt;Of course, that raises the question: how do disagreements in multi-agent systems work? Let's look at that next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Agent Conflict Resolution
&lt;/h2&gt;

&lt;p&gt;When multiple agents evaluate the same input (which is common for high-stakes decisions), disagreements are inevitable. You are running multiple models, each with partial context and different heuristics. The important questions are: how does the system handle these disagreements, and does it make the conflicts visible to your analytics?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Voting&lt;/strong&gt;&lt;br&gt;
The simplest pattern is &lt;strong&gt;voting&lt;/strong&gt;: each agent returns a decision, and the majority wins. This is fast, but it can be brittle. A correlated error across two agents can drown out the one that is correct. Voting also makes it easy to hide the disagreement entirely, which is the worst possible outcome from an accountability perspective. &lt;/p&gt;

&lt;p&gt;The disagreement itself is a risk signal. You want it recorded and reviewable later.&lt;/p&gt;

&lt;p&gt;Here's what a recorded voting disagreement might look like in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-06-14T03:22:19.007Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"trace_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"abc-7f3a-..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"event"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"agent_vote_conflict"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"alert_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SEC-90471"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"votes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"classifier-v2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"decision"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"suppress"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.72&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"correlation-agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"decision"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"suppress"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.65&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"anomaly-detector"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"decision"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"escalate"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.88&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"resolution"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"majority_vote"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"outcome"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"suppress"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dissenting_agents"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"anomaly-detector"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dissent_confidence_gap"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.19&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the &lt;code&gt;dissent_confidence_gap&lt;/code&gt;. The dissenting agent was actually the most confident. &lt;/p&gt;

&lt;p&gt;Now it’s time to take advantage of our analytics platform. For this (and other examples in this article), we’ll use &lt;a href="https://www.sumologic.com/" rel="noopener noreferrer"&gt;Sumo Logic&lt;/a&gt;, a cloud analytics platform. In Sumo Logic we set up a scheduled search to alert when a high-confidence agent is outvoted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_sourceCategory=agents | json "event", "dissent_confidence_gap" | where event = "agent_vote_conflict" and
  dissent_confidence_gap &amp;gt; 0.15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the silent disagreement is a flag you can review before it becomes an incident.&lt;/p&gt;
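&lt;p&gt;As a rough sketch, the voting pattern and the conflict event above can be expressed in a few lines of Python. The function name and the exact gap formula here are illustrative assumptions (one plausible definition: the strongest dissenter's confidence minus the majority's average), not a specific framework's API:&lt;/p&gt;

```python
from collections import Counter
from statistics import mean

def resolve_by_vote(votes):
    """Majority vote that records dissent instead of hiding it.

    `votes` is a list of dicts with "agent", "decision", and "confidence",
    mirroring the event schema shown above. Field names are illustrative.
    """
    tally = Counter(v["decision"] for v in votes)
    outcome = tally.most_common(1)[0][0]
    majority = [v for v in votes if v["decision"] == outcome]
    dissenters = [v for v in votes if v["decision"] != outcome]
    # One plausible gap metric: strongest dissenter vs. the majority's
    # average confidence. Positive means the losing side was more confident.
    gap = 0.0
    if dissenters:
        gap = max(v["confidence"] for v in dissenters) - mean(
            v["confidence"] for v in majority
        )
    return {
        "event": "agent_vote_conflict" if dissenters else "agent_vote",
        "resolution": "majority_vote",
        "outcome": outcome,
        "dissenting_agents": [v["agent"] for v in dissenters],
        "dissent_confidence_gap": gap,
    }
```

&lt;p&gt;The point is that the resolver returns the disagreement as data, so the conflict event can be logged alongside the outcome instead of being discarded.&lt;/p&gt;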

&lt;p&gt;&lt;strong&gt;Negotiation&lt;/strong&gt;&lt;br&gt;
Negotiation-based systems are more flexible. One agent can propose a remediation, another bids to handle it, and the orchestrator chooses. &lt;/p&gt;

&lt;p&gt;This approach is based on early &lt;a href="https://pdfs.semanticscholar.org/2ece/73e3d50a5996503d261f816b3f1885f75afb.pdf" rel="noopener noreferrer"&gt;multi-agent research&lt;/a&gt;, but in production it should be grounded in clear criteria and recorded choices. If a lower-cost or lower-confidence agent is chosen as the winner, that decision needs to be visible later in an incident review.&lt;/p&gt;
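&lt;p&gt;A negotiation round can be sketched the same way. Here a hypothetical orchestrator scores bids on declared criteria and records why the winner was chosen; the bid fields and scoring weights are assumptions for illustration, not a standard protocol:&lt;/p&gt;

```python
def award_task(task_id, bids, cost_weight=0.3, confidence_weight=0.7):
    """Pick a winning bid and record the rationale, not just the winner.

    Each bid is a dict with "agent", "confidence", and "cost" (0..1,
    lower is cheaper). Fields and weights are illustrative.
    """
    def score(bid):
        return confidence_weight * bid["confidence"] - cost_weight * bid["cost"]

    ranked = sorted(bids, key=score, reverse=True)
    winner, runners_up = ranked[0], ranked[1:]
    return {
        "event": "task_awarded",
        "task_id": task_id,
        "winner": winner["agent"],
        "winning_score": round(score(winner), 3),
        "criteria": {"confidence_weight": confidence_weight,
                     "cost_weight": cost_weight},
        # Losing bids stay in the record so an incident review can ask
        # whether a higher-confidence (but costlier) agent was passed over.
        "losing_bids": [
            {"agent": b["agent"], "score": round(score(b), 3)}
            for b in runners_up
        ],
    }
```

&lt;p&gt;Because the criteria and the losing bids are part of the record, "why did the cheaper agent win?" becomes a query, not an argument.&lt;/p&gt;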

&lt;p&gt;&lt;strong&gt;Mediator&lt;/strong&gt;&lt;br&gt;
Mediator or arbitrator agents, usually built on a more capable model, resolve conflicts when other agents disagree. This can work well, but it changes the accountability picture: the mediator becomes a critical decision point, and its reasoning must be traceable. &lt;/p&gt;

&lt;p&gt;Importantly, you need to know why the mediator made the decision it made. If you can’t explain why the arbitrator overruled a security warning, you haven't actually improved safety. You’ve just moved the black box.&lt;/p&gt;
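&lt;p&gt;A minimal way to keep the mediator out of black-box territory is to require a structured overrule record whenever it rejects an agent's position. The shape below is an illustrative sketch, not a prescribed schema:&lt;/p&gt;

```python
import json

def record_mediator_ruling(trace_id, positions, ruling, rationale):
    """Emit a reviewable record whenever a mediator resolves a conflict.

    `positions` is the list of conflicting agent decisions; `rationale`
    is the mediator's stated reason, which must never be empty.
    """
    if not rationale:
        raise ValueError("mediator rulings require an explicit rationale")
    overruled = [p["agent"] for p in positions if p["decision"] != ruling]
    record = {
        "event": "mediator_ruling",
        "trace_id": trace_id,
        "ruling": ruling,
        "overruled_agents": overruled,
        "rationale": rationale,
    }
    # One JSON object per line, ready for log ingestion.
    print(json.dumps(record))
    return record
```

&lt;p&gt;Rejecting an empty rationale at write time is a small design choice with a big payoff: it makes "the arbitrator just decided" an impossible state to log.&lt;/p&gt;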

&lt;p&gt;In practice, you don’t need academic consensus protocols to get this right. You need simple rules: define how disagreement is detected, set thresholds for escalation, and make the disagreement and resolution visible in logs. &lt;/p&gt;

&lt;p&gt;That last part is crucial. Without it, you are left with a clean output but no evidence or audit trail.&lt;/p&gt;

&lt;p&gt;Here is a diagram illustrating all three resolution methods:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dprdrh561orf2ggfbbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dprdrh561orf2ggfbbd.png" alt=" " width="624" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's consider how to turn these ideas into product-level controls.&lt;/p&gt;
&lt;h2&gt;
  
  
  Process Frameworks for Production
&lt;/h2&gt;

&lt;p&gt;The most important move you can make is to &lt;strong&gt;treat agent workflows like production systems&lt;/strong&gt;, not experiments. That means clear ownership, controlled changes, and reliable telemetry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance&lt;/strong&gt;&lt;br&gt;
Start with governance. An AI Quality Control function does not have to be a new department. It can be a lightweight set of responsibilities: who approves changes to prompts and thresholds, who reviews the impact of those changes, and who owns the system-level outcomes. If the system is making high-stakes decisions, those roles need to be explicit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision Record&lt;/strong&gt;&lt;br&gt;
Next, define a decision record. For each agent action, capture the inputs, tool calls, outputs, confidence, and any thresholds or policies applied. A readable summary is useful for humans, but it’s not enough. You need the raw evidence. This is where analytics platforms are extremely useful. &lt;/p&gt;

&lt;p&gt;Here's what a structured decision record looks like when ingested into Sumo Logic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-06-14T03:22:18.441Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"trace_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"abc-7f3a-..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"agent_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"triage-classifier-v2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"classify_alert"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"inputs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"alert_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SEC-90471"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"waf-east-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"raw_severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tool_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"tool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"lookup_ioc_feed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"no_match"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"latency_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;340&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"tool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"get_recent_alerts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3 similar in 24h"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"latency_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;122&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"classification"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"low"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.72&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"threshold_applied"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"suppress_below_0.80"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"escalated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-5.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"prompt_version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"classifier-v2.4.1"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When every agent decision emits a record like this, you can correlate agent actions with infrastructure events. A query as simple as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_sourceCategory=agents | json "output.confidence" as confidence, "agent_id" | where confidence &amp;lt; 0.80 | count by agent_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;surfaces every low-confidence decision across your fleet, creating a searchable evidence trail.&lt;/p&gt;
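&lt;p&gt;Producing records like the one above doesn't require special tooling; an emitter is just structured logging with a guaranteed set of fields. A minimal sketch follows, where the required-field list is an assumption based on the example record:&lt;/p&gt;

```python
import json
import sys
from datetime import datetime, timezone

# Fields every decision record must carry; illustrative, based on the
# example record shown above.
REQUIRED_FIELDS = ("trace_id", "agent_id", "action", "inputs", "output")

def emit_decision_record(record, stream=sys.stdout):
    """Validate and write one decision record as a single JSON line."""
    missing = [f for f in REQUIRED_FIELDS if f not in record]
    if missing:
        raise ValueError(f"decision record missing fields: {missing}")
    # Stamp the record if the caller hasn't already.
    record.setdefault(
        "timestamp",
        datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
    )
    stream.write(json.dumps(record, sort_keys=True) + "\n")
```

&lt;p&gt;One JSON object per line is exactly the shape log collectors expect, so these records flow into the same pipeline as your infrastructure logs.&lt;/p&gt;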

&lt;p&gt;&lt;strong&gt;Instrument Everything&lt;/strong&gt;&lt;br&gt;
Finally, instrument everything. Observability is the &lt;a href="https://www.sumologic.com/blog/why-aws-agentcore-logs-matter" rel="noopener noreferrer"&gt;bridge between "we think" and "we know."&lt;/a&gt; If your agents call tools, read from data stores, and write outputs, those actions should be traced end to end. &lt;a href="https://opentelemetry.io/docs/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; is a practical, vendor-neutral way to &lt;a href="https://opentelemetry.io/blog/2025/ai-agent-observability/" rel="noopener noreferrer"&gt;make that happen&lt;/a&gt; across services and tools. &lt;/p&gt;

&lt;p&gt;In practice, wrapping an agent decision in an OpenTelemetry span takes very little code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;

  &lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent.triage&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;classify_alert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
      &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;classify_alert&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
          &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent.id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;triage-classifier-v2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent.prompt_version&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;classifier-v2.4.1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;alert.id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

          &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;run_classification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

          &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent.confidence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;confidence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
          &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent.decision&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;classification&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
          &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent.escalated&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;confidence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.80&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each agent in the pipeline emits its own span under the same trace, so you get a full causal chain from alert ingestion to final decision. Then, using Sumo Logic's OpenTelemetry integration, we ingest these traces directly, letting you query across agent spans, tool calls, and infrastructure events in one place.&lt;/p&gt;

&lt;p&gt;Next let's look at a hypothetical, but very plausible, failure of process that happens when observability is weak.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Realistic Failure Story (and How It Happens)
&lt;/h2&gt;

&lt;p&gt;Imagine a security operations team running a multi-agent triage system. One agent classifies incoming alerts, a second agent correlates with recent logs, and a third decides whether to open a ticket. &lt;/p&gt;

&lt;p&gt;A genuine intrusion alert arrives. The classification agent labels it as low priority. The correlation agent flags a weak anomaly but sees no matching indicator. The decision agent chooses to suppress the alert. Hours later, a breach is discovered.&lt;/p&gt;

&lt;p&gt;When the incident review begins, the team tries to answer a simple question: &lt;em&gt;why was the alert suppressed&lt;/em&gt;? &lt;/p&gt;

&lt;p&gt;The logs show the final decision but not the intermediate reasoning. It turns out the correlation tool was operating on stale data due to a delayed pipeline. The classification prompt had been tuned the prior week to reduce noise. The decision agent gave extra weight to the classification agent because it was historically more accurate. The system made a rational choice given its inputs. The problem is that no one can reproduce those inputs or see the disagreement that occurred.&lt;/p&gt;

&lt;p&gt;This is the core accountability gap. The organization does not just lack a fix. It lacks a coherent explanation. And without an explanation, it can neither learn nor prove that the system is safe enough to keep in production. That is why analytics and evidence are not nice-to-haves. They are the difference between a system you can trust and one you cannot.&lt;/p&gt;

&lt;p&gt;Now imagine the same scenario, but instrumented. The team opens Sumo Logic and runs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_sourceCategory=agents "abc-7f3a"
    | json "agent_id", "action", "tool_calls", "output.confidence", "inputs", "trace_id"
    | where trace_id matches "abc-7f3a-*"
    | sort by _messageTime asc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They immediately see the full decision chain: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The classifier's 0.72-confidence "low" label&lt;/li&gt;
&lt;li&gt;The correlation agent's &lt;code&gt;lookup_ioc_feed&lt;/code&gt; call, which returned &lt;code&gt;no_match&lt;/code&gt; against data that was six hours stale&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;prompt_version&lt;/code&gt; field, showing the classifier prompt had been updated two days earlier&lt;/li&gt;
&lt;li&gt;The decision agent's suppression, chosen despite the anomaly detector's 0.88-confidence dissent, because the low-priority label and the &lt;code&gt;no_match&lt;/code&gt; result both pointed toward suppression&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, they can set a rule so it never happens the same way again: alert when &lt;code&gt;tool_calls&lt;/code&gt; reference a data source with freshness older than a threshold, or when a high-confidence dissent is overridden on a security-tagged alert. &lt;/p&gt;
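&lt;p&gt;Those two review rules can be sketched as a post-decision check. Field names like &lt;code&gt;data_age_hours&lt;/code&gt; and &lt;code&gt;tags&lt;/code&gt; are assumptions for illustration; the thresholds would come from your own policy:&lt;/p&gt;

```python
def review_flags(record, max_data_age_hours=1.0, dissent_threshold=0.15):
    """Return reasons a decision should be reviewed.

    `record` combines the decision record and vote-conflict event shown
    earlier; `data_age_hours` on each tool call and the `tags` list are
    assumed fields, not part of a standard schema.
    """
    flags = []
    # Rule 1: a tool call read from a data source older than the threshold.
    for call in record.get("tool_calls", []):
        age = call.get("data_age_hours", 0)
        if age > max_data_age_hours:
            flags.append(f"stale data from {call['tool']} ({age}h old)")
    # Rule 2: a high-confidence dissent was overridden on a security alert.
    if (
        record.get("outcome") == "suppress"
        and "security" in record.get("tags", [])
        and record.get("dissent_confidence_gap", 0) > dissent_threshold
    ):
        flags.append("high-confidence dissent overridden on a security alert")
    return flags
```

&lt;p&gt;Run against the breach scenario above, this check would have raised both flags before the suppression became an incident.&lt;/p&gt;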

&lt;p&gt;The hours-long investigation now just takes minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Does This Matter?
&lt;/h2&gt;

&lt;p&gt;Business stakeholders, developers, and operators care about operational outcomes: fewer false positives, faster triage, and better reliability. Multi-agent systems can deliver those outcomes, but only if the team can trust them. &lt;/p&gt;

&lt;p&gt;And trust is not a feeling…or at least it shouldn't be. It’s a property of the system. It comes from being able to answer questions like: What did the agent see? What tools did it call? What did it ignore? Why was that alert suppressed? Who changed the thresholds last week?&lt;/p&gt;

&lt;p&gt;This is exactly where observability helps. Your observability and analytics platform probably already collects and correlates logs and metrics at scale. The opportunity is to extend that same rigor to agentic workflows: treat agent decisions as first-class telemetry, and connect them to the infrastructure and security signals they depend on. When you do that, you can move from a black-box system to a transparent one, without sacrificing speed.&lt;/p&gt;
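
&lt;p&gt;Concretely, treating agent decisions as first-class telemetry can be as simple as emitting one structured record per decision. The Python sketch below uses a hypothetical schema (none of these field names are a standard); what matters is that everything an investigator would later need, including inputs, tool calls, confidence, prompt version, and a trace ID, is captured at decision time.&lt;/p&gt;

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agents")

# Hypothetical schema for agent-decision telemetry. The fields mirror
# the questions investigators ask after an incident; they are not a
# standard -- rename to fit your own pipeline.
def log_agent_decision(agent_id, action, inputs, tool_calls,
                       confidence, prompt_version, trace_id=None):
    record = {
        "timestamp": time.time(),
        "trace_id": trace_id or str(uuid.uuid4()),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,                   # what the agent saw
        "tool_calls": tool_calls,           # what it consulted
        "output": {"confidence": confidence},
        "prompt_version": prompt_version,   # ties behavior to prompt changes
    }
    logger.info(json.dumps(record))         # one JSON line per decision
    return record
```

&lt;p&gt;Because each record is a single JSON line keyed by &lt;code&gt;trace_id&lt;/code&gt;, queries like the one shown earlier can reassemble the full decision chain with no extra work.&lt;/p&gt;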

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Multi-agent systems will become a standard part of modern operations. The teams that win with them will be the ones that treat accountability as a feature, not a burden. They will know who owns what, they will be able to trace decisions, and they will have the evidence to explain outcomes when things go wrong, including the complete trajectory of every agent and every cross-agent communication. That is what trust looks like, and it is what regulators, customers, and internal stakeholders are looking for.&lt;/p&gt;

&lt;p&gt;If you are already investing in deep observability, you have most of the building blocks. The next step is to apply them to agentic systems. When AI agents get it wrong, the most important thing is not that they were wrong. It is whether you can prove what happened, learn from it, and show that your system is accountable. This also opens the door for quick improvement, so the system doesn't repeat past mistakes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>logging</category>
      <category>observability</category>
    </item>
    <item>
      <title>Yes! I Can Finally Run My .NET Application on Heroku!</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 19 Feb 2025 19:18:36 +0000</pubDate>
      <link>https://forem.com/heroku/yes-i-can-finally-run-my-net-application-on-heroku-2nak</link>
      <guid>https://forem.com/heroku/yes-i-can-finally-run-my-net-application-on-heroku-2nak</guid>
      <description>&lt;p&gt;Heroku now officially supports .NET!&lt;/p&gt;

&lt;p&gt;.NET developers now have access to the &lt;a href="https://github.com/heroku/heroku-buildpack-dotnet" rel="noopener noreferrer"&gt;officially supported buildpack for .NET&lt;/a&gt;, which means you can now deploy your .NET apps onto Heroku with just one command: &lt;code&gt;git push heroku main&lt;/code&gt;. 🤯 Gone are the days of searching for Dockerfiles or community buildpacks. With official support, .NET developers can now run any .NET application (version 8.0 and higher) on the Heroku platform.&lt;/p&gt;

&lt;p&gt;Being on the platform means you also get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple, low friction deployment&lt;/li&gt;
&lt;li&gt;Scaling and service management&lt;/li&gt;
&lt;li&gt;Access to the add-on ecosystem&lt;/li&gt;
&lt;li&gt;Security and governance features for enterprise use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Intrigued? Let’s talk about what this means for .NET developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters for .NET Developers
&lt;/h2&gt;

&lt;p&gt;In my experience, running an app on Heroku is pretty easy. But deploying .NET apps was an exception. You could deploy on Heroku, but there wasn’t official support. One option was to wrap your app in a Docker container. This meant creating a Dockerfile and dealing with all the maintenance that comes along with that approach. Alternatively, you could find a third-party buildpack, but that introduced another dependency into your deployment process, and you’d lose time trying to figure out which community buildpack was the right one for you.&lt;/p&gt;

&lt;p&gt;Needing to use these workarounds was unfortunate, as Heroku’s seamless deployment is supposed to make it easy to create and prototype new apps. Now, with official buildpack support, the deployment experience for .NET developers is smoother and more reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits of .NET on Heroku
&lt;/h2&gt;

&lt;p&gt;The benefits of the new update center around simplicity and scalability. It all begins with &lt;strong&gt;simple deployment&lt;/strong&gt;. Just one &lt;code&gt;git&lt;/code&gt; command… and your deployment begins. No need to start another workflow or log into another site every time; just push your code from the command line, and Heroku takes care of the rest.&lt;/p&gt;

&lt;p&gt;Heroku’s &lt;a href="https://devcenter.heroku.com/articles/dotnet-heroku-support-reference" rel="noopener noreferrer"&gt;official .NET support&lt;/a&gt; currently includes C#, Visual Basic, and F# projects for .NET and ASP.NET Core frameworks (version 8.0 and higher). This means that a wide variety of .NET projects are now officially supported. Want to deploy a Blazor app alongside your ASP.NET REST API? You can do that now.&lt;/p&gt;

&lt;p&gt;Coming into the platform also means you can &lt;strong&gt;scale&lt;/strong&gt; as your app grows. If you need to add another service using a different language, you can deploy that service just as easily as your original app. Or you can easily scale your dynos to match peak load requirements. This scaling extends to Heroku’s ecosystem of &lt;strong&gt;add-ons&lt;/strong&gt;, making it easy for you to add value to your application with supporting services while keeping you and your team focused on your core application logic.&lt;/p&gt;

&lt;p&gt;In addition to simple application deployment, the platform also supports more advanced &lt;strong&gt;CI/CD and DevOps&lt;/strong&gt; needs. With &lt;a href="https://devcenter.heroku.com/articles/pipelines" rel="noopener noreferrer"&gt;Heroku Pipelines&lt;/a&gt;, you have multiple deployment environment support options and can set up &lt;a href="https://devcenter.heroku.com/articles/github-integration-review-apps" rel="noopener noreferrer"&gt;review apps&lt;/a&gt; so code reviewers can access a live version of your app for each pull request. And all of this integrates tightly with GitHub, giving you automatic deployment triggers to streamline your dev flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Let’s do a quick walk-through on how to get started. In addition to your application and Git, you will also need the &lt;a href="https://devcenter.heroku.com/articles/heroku-cli" rel="noopener noreferrer"&gt;Heroku CLI&lt;/a&gt; installed on your local machine. Initialize the CLI with the &lt;code&gt;heroku login&lt;/code&gt; command. This will take you to a browser to log into your Heroku account.&lt;/p&gt;

&lt;p&gt;Once you’re logged in, navigate to your .NET application folder. In that folder, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ heroku create
~/project$ heroku buildpacks:add heroku/dotnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you’re ready to push your app! You just need one command to go live:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ git push heroku main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! For simpler .NET applications, this is all you need. Your application is now live at the app URL provided in the response to your &lt;code&gt;heroku create&lt;/code&gt; command. To see it again, you can always use &lt;code&gt;heroku info&lt;/code&gt;. Or, you can run &lt;code&gt;heroku open&lt;/code&gt; to launch your browser at your app URL.&lt;/p&gt;

&lt;p&gt;If you can’t find the URL, log in to the &lt;a href="https://dashboard.heroku.com" rel="noopener noreferrer"&gt;Heroku Dashboard&lt;/a&gt;. Find your app and click on &lt;strong&gt;Open app&lt;/strong&gt;. You’ll be redirected to your app URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwi3asshwgujj9tfve6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwi3asshwgujj9tfve6y.png" alt="Image description" width="273" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have a more complex application or one with multiple parts, you will need to define a Procfile, which will tell Heroku how to start up your application. Don’t be intimidated! Many Procfiles are just a couple lines. For more in-depth information, check out the &lt;a href="https://devcenter.heroku.com/articles/getting-started-with-dotnet#define-a-procfile" rel="noopener noreferrer"&gt;Getting Started on Heroku with .NET guide&lt;/a&gt;.&lt;/p&gt;
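
&lt;p&gt;As a concrete (and hypothetical) example, a Procfile for a single-service ASP.NET Core app could be as short as the one line below. The publish path and &lt;code&gt;MyApp.dll&lt;/code&gt; are placeholders; check your own build output and the guide linked above for the real values.&lt;/p&gt;

```
web: dotnet ./bin/publish/MyApp.dll --urls http://0.0.0.0:$PORT
```

&lt;p&gt;The key detail is binding to the &lt;code&gt;PORT&lt;/code&gt; environment variable, which Heroku assigns at dyno startup.&lt;/p&gt;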

&lt;p&gt;Now we’ve got another question to tackle…&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Care?
&lt;/h2&gt;

&lt;p&gt;The arrival of .NET on Heroku is relevant to anyone who wants to deploy scalable .NET services and applications seamlessly.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;solo devs and startups&lt;/strong&gt;, the platform’s low friction and scaling take away the burden of deployment and hosting. This allows small teams to focus on building out their core application logic. These teams are also not restricted by their app’s architecture, as Heroku supports both large single-service applications as well as distributed microservice apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise teams&lt;/strong&gt; are poised to benefit from this as well. .NET has historically found much of its adoption in the enterprise, and the addition of official support for .NET to Heroku means that these teams can now combine their .NET experience with the ease of deploying to the Heroku platform. Heroku’s low friction enables rapid prototyping of new applications, and &lt;a href="https://devcenter.heroku.com/articles/scaling" rel="noopener noreferrer"&gt;Dyno Formations&lt;/a&gt; make it easier to manage and scale a microservice architecture. Additionally, you can get governance through &lt;a href="https://www.heroku.com/enterprise" rel="noopener noreferrer"&gt;Heroku Enterprise&lt;/a&gt;, enabling the security and controls that larger enterprises require.&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;.NET enthusiasts&lt;/strong&gt; from all backgrounds and skill levels can now benefit from this new platform addition. By going with a modern PaaS, you can play around with apps and projects of all sizes, hassle-free.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;That’s a brief introduction to official .NET support on Heroku! It’s now easier than ever to deploy .NET applications of all sizes to Heroku. What are you going to build and deploy first? Let me know in the comments!&lt;/p&gt;

</description>
      <category>heroku</category>
      <category>webdev</category>
      <category>dotnet</category>
      <category>programming</category>
    </item>
    <item>
      <title>8 Ways AI Can Maximize the Value of Logs</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 17 Jul 2024 12:21:47 +0000</pubDate>
      <link>https://forem.com/mbogan/8-ways-ai-can-maximize-the-value-of-logs-1o7c</link>
      <guid>https://forem.com/mbogan/8-ways-ai-can-maximize-the-value-of-logs-1o7c</guid>
      <description>&lt;p&gt;Logging is essential for successful DevSecOps teams. Logs are &lt;em&gt;filled&lt;/em&gt; with the information needed to monitor and understand systems. Tracking down a defect? Trying to understand a sudden burst in questionable logins from a new region? Need to figure out why an app is crawling? Logs are that single source of truth for understanding what’s really happening.&lt;/p&gt;

&lt;p&gt;But there’s a problem that comes along with logs: the sheer amount of data. The information logged by services and applications just keeps on growing. And growing. It doesn’t take long for it to become more—much more—than can be managed. The data becomes overwhelming. Alert fatigue sets in.&lt;/p&gt;

&lt;p&gt;Data keeps growing. Human resources can’t.&lt;/p&gt;

&lt;p&gt;But there’s hope on the horizon. Innovations in AI have revolutionized the process of &lt;em&gt;continuous log monitoring&lt;/em&gt;. AI algorithms can analyze and detect patterns within vast datasets, translate raw logs into actionable insights, and proactively alert teams to problems—all at a scale and precision that human teams alone can’t sustain. &lt;/p&gt;

&lt;p&gt;Let’s look at 8 ways that AI can maximize the value of logs.&lt;/p&gt;

&lt;h2&gt;
  
  
  1: Handling massive amounts of data
&lt;/h2&gt;

&lt;p&gt;First, the most obvious. Cloud-native environments, with their dozens (or hundreds) of distributed components, emit a massive volume of log data. In turn, sifting through and analyzing all of this data requires high levels of expertise. &lt;/p&gt;

&lt;p&gt;But most organizations already face a shortage of people with the skills needed to tease out the insights from this data. Companies could train more people—but training is slow. And the growing complexity of the data tends to outpace any new skills. &lt;/p&gt;

&lt;p&gt;Here is where AI shines: It’s a scalable way to handle log data, no matter the volume and no matter the time. Humans need breaks; they clock out for the day, get sick, take PTO, and &lt;a href="https://marvelcinematicuniverse.fandom.com/wiki/Galaga_Guy" rel="noopener noreferrer"&gt;play Galaga when bosses aren’t looking&lt;/a&gt;. But log data still arrives in massive volumes, regardless of the time of the day or day of the week. An AI-based system with analysis, detection, and alerting is always on.&lt;/p&gt;

&lt;h2&gt;
  
  
  2: Automated security and access control
&lt;/h2&gt;

&lt;p&gt;When dealing with logs, it’s important to protect any sensitive user information and ensure that the data is only accessible to authorized team members. SecOps managers often implement fine-grained access control to ensure that approved team members can access the data and metrics intended for them, and nothing more. &lt;/p&gt;

&lt;p&gt;AI systems can automatically identify and redact sensitive information—such as personal identifiers, financial details, or confidential business information—from log data before it's accessed by humans. Or, as part of automated preprocessing, AI can de-identify or mask sensitive parts of data.&lt;/p&gt;
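
&lt;p&gt;The detection models behind this are far more sophisticated than fixed rules, but the shape of the preprocessing step can be sketched in a few lines. Here is a minimal, rules-based stand-in in Python; the patterns are illustrative and nowhere near exhaustive.&lt;/p&gt;

```python
import re

# Illustrative redaction rules -- a real system would use trained
# detectors and far more exhaustive patterns than these three.
# Each entry: (label, compiled pattern).
REDACTION_RULES = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),
    ("CARD", re.compile(r"\b\d(?:[ -]?\d){12,15}\b")),   # 13-16 digit card numbers
    ("SSN", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
]

def redact(line):
    """Replace sensitive substrings with [LABEL] placeholders."""
    for label, pattern in REDACTION_RULES:
        line = pattern.sub(f"[{label}]", line)
    return line
```

&lt;p&gt;Running every log line through a step like this before it reaches human eyes is what "automated preprocessing" looks like in its simplest form.&lt;/p&gt;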

&lt;h2&gt;
  
  
  3: Collating data from disparate sources
&lt;/h2&gt;

&lt;p&gt;Merging data from various sources is a complex task. But it’s essential for effective security and operations. When logs are properly aggregated and correlated, the resulting data and metrics can give the context needed for better visibility and better troubleshooting.&lt;/p&gt;

&lt;p&gt;But this is a menial and time-consuming task … which makes it perfect for AI. AI can automatically gather information from various sources and identify patterns within the data, making its analysis easier.&lt;/p&gt;

&lt;p&gt;AI correlates data from different sources far more efficiently than a human. Modern &lt;a href="https://www.sumologic.com/solutions/log-analytics/" rel="noopener noreferrer"&gt;log analytics tools&lt;/a&gt; leverage AI to gather log data from cloud services and on-premises environments. Log analysis and issue resolution become proactive, preventing negative impacts on the health of applications and systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  4: Transforming raw log data
&lt;/h2&gt;

&lt;p&gt;Organizations depend on skilled professionals to handle and analyze log data, yet they often face overwhelming resource constraints. This is where AI can contribute significantly, by automating repetitive tasks and enhancing human capabilities.&lt;/p&gt;

&lt;p&gt;Before analysis, log data often requires cleaning and preprocessing to remove errors, duplicates, or irrelevant information. AI can automate this process, ensuring the accuracy and standardization of all the data. AI can also organize log data into clusters based on similarities or classify them into predefined categories. This helps manage data more efficiently, making it easier for humans to understand and act upon the insights derived.&lt;/p&gt;

&lt;h2&gt;
  
  
  5: Analyzing log data
&lt;/h2&gt;

&lt;p&gt;A clear use case for AI within a DevSecOps strategy is the automation of repetitive and time-consuming tasks—such as data cleaning, feature selection, and model training. With AI taking on these tasks, developers can focus on higher-value work.&lt;/p&gt;

&lt;p&gt;AI can sift through a mountain of data to spot &lt;a href="https://en.wikipedia.org/wiki/Longest_common_substring" rel="noopener noreferrer"&gt;duplication&lt;/a&gt; and anomalies—like subtle signs of a cyberattack or unusual traffic patterns—that might easily slip past human scrutiny. This yields enhanced security and operational insights.&lt;/p&gt;

&lt;p&gt;It’s more than just about handling the volume; AI is adept at detecting patterns that are too complex or too faint for the human eye to catch. For example, let’s consider logging and monitoring for a network to catch signs of data exfiltration. This kind of anomaly might manifest as an unusually high volume of data being sent to an unfamiliar external IP address during off-hours; that’s a pattern that might not immediately raise flags for a human amid thousands of legitimate data transfers happening every day.&lt;/p&gt;

&lt;p&gt;On the other hand, an AI-based system that’s trained on vast datasets of normal and malicious network behavior can identify this subtle pattern by correlating different indicators:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The timing of the data transfer&lt;/li&gt;
&lt;li&gt;The volume of data&lt;/li&gt;
&lt;li&gt;The destination IP address&lt;/li&gt;
&lt;li&gt;The type of data being transferred&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a human, recognizing such a complex pattern requires painstaking analysis and might be missed entirely due to the sheer volume of log data. But an AI system can continuously monitor for these patterns across the entire network, detecting potential threats with precision and speed that far surpasses human capability.&lt;/p&gt;
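
&lt;p&gt;To make the correlation concrete, here is a toy Python scoring function. A production system would learn its baselines from historical data; the thresholds, networks, and field names below are invented purely for illustration.&lt;/p&gt;

```python
from ipaddress import ip_address, ip_network

# Illustrative correlation of weak exfiltration indicators. In a real
# system these baselines are learned, not hard-coded.
INTERNAL_NETS = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]
OFF_HOURS = set(range(0, 6))          # 00:00-05:59 local time
VOLUME_BASELINE_MB = 50               # typical per-transfer volume
SENSITIVE_TYPES = {"database_dump", "archive"}

def exfiltration_score(transfer):
    """Sum weak indicators; none alone is alarming, together they are."""
    score = 0
    dest = ip_address(transfer["dest_ip"])
    if not any(dest in net for net in INTERNAL_NETS):
        score += 1                                    # unfamiliar external destination
    if transfer["hour"] in OFF_HOURS:
        score += 1                                    # off-hours timing
    if transfer["volume_mb"] > 10 * VOLUME_BASELINE_MB:
        score += 1                                    # unusually high volume
    if transfer["data_type"] in SENSITIVE_TYPES:
        score += 1                                    # sensitive content
    return score

def is_suspicious(transfer, threshold=3):
    return exfiltration_score(transfer) >= threshold
```

&lt;p&gt;Each check on its own would drown in false positives; it is the combined score across indicators that separates a routine transfer from a likely exfiltration.&lt;/p&gt;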

&lt;h2&gt;
  
  
  6: Reducing alert fatigue
&lt;/h2&gt;

&lt;p&gt;Traditional infrastructure and service monitoring solutions are notoriously noisy, often generating alerts for events that don’t signify a genuine threat. Excessive unactionable alerts lead to alert fatigue.&lt;/p&gt;

&lt;p&gt;AI-based alerting intelligently filters alerts and reduces the noise, ensuring that the alerts generated are relevant and actionable. For example, traditional monitoring can’t adjust for seasonality—so a crossed threshold in the middle of a high-season weekday afternoon gets as much attention as one in the middle of the night during what should be a slow week. AI-based alerting uses historical data to continually train its models, factoring seasonality into its baselines. The result is fewer false positives and far less alert fatigue.&lt;/p&gt;
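
&lt;p&gt;The core idea of a seasonal baseline fits in a few lines: compare each new reading against the history for the same hour-of-week slot rather than against one global threshold. Here is a toy Python sketch; real systems use learned models rather than a plain z-score.&lt;/p&gt;

```python
from statistics import mean, stdev

def is_anomalous(history, slot, value, z_threshold=3.0):
    """Compare a reading to past readings for the same seasonal slot.

    history maps a slot key (e.g. "mon-14" for Mondays at 2pm) to a
    list of past readings observed in that slot.
    """
    past = history[slot]
    mu = mean(past)
    sigma = stdev(past)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold
```

&lt;p&gt;With per-slot baselines, the same traffic volume can be normal at 2pm and a red flag at 3am, which is exactly the distinction a single global threshold cannot make.&lt;/p&gt;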

&lt;p&gt;Of course, organizations using AI-driven alerting need to rigorously test results in order to tune specificity and sensitivity. This ensures that critical events are captured effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  7: Proactive monitoring
&lt;/h2&gt;

&lt;p&gt;Given the large amounts of data an organization generates, teams often struggle to monitor all the organization's resources proactively. &lt;/p&gt;

&lt;p&gt;AI is well-equipped to address this issue at scale. By continuously monitoring and aggregating logs from across entire environments, an AI-based tool can identify anomalies before they become widespread, allowing teams to detect potential threats in their initial stages. &lt;/p&gt;

&lt;p&gt;For example, the &lt;a href="https://www.sumologic.com/solutions/threat-detection-investigation/" rel="noopener noreferrer"&gt;threat detection and investigation&lt;/a&gt; capabilities from Sumo Logic provide the visibility to address advanced threats before they affect operations. AI features such as these enable real-time monitoring, alerting, and data analysis across security tools, cloud infrastructures, and SaaS applications, so a DevSecOps team can investigate and respond to cyber threats swiftly.&lt;/p&gt;

&lt;h2&gt;
  
  
  8: Efficient incident response
&lt;/h2&gt;

&lt;p&gt;AI-driven alerting improves incident response by facilitating automatic resource allocation and gathering contextual information about an incident. This helps identify potential security threats faster, which in turn helps organizations respond more quickly. &lt;/p&gt;

&lt;p&gt;When AI-powered logging and observability platforms provide automated remediation features, teams can connect the dots: from continuously monitored logs to incident detection to remediation playbooks. Automated playbook execution means near-immediate response to an incident, whether that’s eliminating the root cause or alerting an engineer to begin an investigation.&lt;/p&gt;

&lt;p&gt;Remember, with every delay in responding to a security incident, the window for impact widens. Minimizing that delay with AI directly minimizes the impact of an incident.&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy concerns related to AI adoption
&lt;/h2&gt;

&lt;p&gt;One last note: as we’ve seen, AI shows great promise as a tool for improving logging. But remember that it’s still an emerging technology. &lt;/p&gt;

&lt;p&gt;As a &lt;a href="https://about.gitlab.com/press/releases/2023-09-05-devsecops-report-state-of-ai-in-software-development/" rel="noopener noreferrer"&gt;recent GitLab report&lt;/a&gt; makes clear, there are serious privacy concerns around AI adoption. While 83% of the teams surveyed said implementing AI in their development process was essential, 79% said they were highly concerned about privacy and IP when dealing with AI. But in many business contexts, AI tools need access to that private data for analysis. &lt;/p&gt;

&lt;p&gt;So take advantage of AI, but stay aware of these privacy risks and guard against them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Drowning in log data? AI is here to help.
&lt;/h2&gt;

&lt;p&gt;Logs are crucial. They’re a rich source of data for monitoring applications and infrastructure. But the multiple data sources and volume of log data—along with the sensitivity of some of that data in the logs—lead to some big challenges in managing log data, security, and privacy.&lt;/p&gt;

&lt;p&gt;AI is here to help. Modern log management and SIEM solutions are leveraging AI to automate analysis, enhance monitoring, and improve incident response. AI is making DevSecOps more efficient. And as AI solutions evolve, their role in log analysis will only grow, offering smarter, faster insights into logs.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>devsecops</category>
      <category>logging</category>
    </item>
    <item>
      <title>5 Simple Steps to Get Your Test Suite Running in Heroku CI</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 26 Jun 2024 13:27:49 +0000</pubDate>
      <link>https://forem.com/heroku/5-simple-steps-to-get-your-test-suite-running-in-heroku-ci-5326</link>
      <guid>https://forem.com/heroku/5-simple-steps-to-get-your-test-suite-running-in-heroku-ci-5326</guid>
      <description>&lt;p&gt;So, I’ve always thought about Heroku as just a place to run my code. They have a CLI. I can connect it to my GitHub repo, push my code to a Heroku remote, and bam… it’s deployed. No fuss. No mess.&lt;/p&gt;

&lt;p&gt;But I had always run my test suite… somewhere else: locally, or with CircleCI, or in GitHub Actions. How did I not know that Heroku has CI capabilities? You mean I can run my tests there? Where have I been for the last few years?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53fvyc6bnbyt482votrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53fvyc6bnbyt482votrq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So that’s why I didn’t know about Heroku CI…&lt;/p&gt;

&lt;p&gt;CI is pretty awesome. You can build, test, and integrate new code changes. You get fast feedback on those code changes so that you can identify and fix issues early. Ultimately, you deliver higher-quality software.&lt;/p&gt;

&lt;p&gt;By doing it in Heroku, I get my test suite running in an environment much closer to my staging and production deployments. And if I piece together a &lt;a href="https://devcenter.heroku.com/articles/pipelines" rel="noopener noreferrer"&gt;pipeline&lt;/a&gt;, I can automate the progression from passing tests to a staging deployment and then promote that staged build to production.&lt;/p&gt;

&lt;p&gt;So, how do we get our application test suite up and running in Heroku CI? It will take you 5 steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write your tests&lt;/li&gt;
&lt;li&gt;Deploy your Heroku app&lt;/li&gt;
&lt;li&gt;Push your code to Heroku&lt;/li&gt;
&lt;li&gt;Create a Heroku Pipeline to use Heroku CI&lt;/li&gt;
&lt;li&gt;Run your tests with Heroku CI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’ll walk through these steps by testing a simple Python application. If you want to follow along, you &lt;a href="https://github.com/capnMB/heroku-ci-demo" rel="noopener noreferrer"&gt;can clone my GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our python app: is it prime?
&lt;/h2&gt;

&lt;p&gt;We’ve built an API in Python that listens for GET requests on a single endpoint: &lt;code&gt;/prime/{number}&lt;/code&gt;. It expects a number as a path parameter and then returns true or false based on whether that number is a prime number. Pretty simple.&lt;/p&gt;

&lt;p&gt;We have a modularized function in &lt;code&gt;is_prime.py&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def is_prime(num):
    if num &amp;lt;= 1:
        return False
    if num &amp;lt;= 3:
        return True
    if num % 2 == 0 or num % 3 == 0:
        return False
    i = 5
    while i * i &amp;lt;= num:
        if num % i == 0 or num % (i + 2) == 0:
            return False
        i += 6
    return True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, our main.py file looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI, HTTPException
from is_prime import is_prime

app = FastAPI()

# Route to check if a number is a prime number
@app.get("/prime/{number}")
def check_if_prime(number: int):
    if number &amp;lt; 1:
        raise HTTPException(status_code=400, detail="Input invalid")
    return is_prime(number)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That’s all there is to it. We can start our API locally (python main.py) and send some requests to try it out:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~$ curl http://localhost:8000/prime/91
false

~$ curl http://localhost:8000/prime/97
true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That looks pretty good. But we’d feel better with a unit test for the is_prime function. Let’s get to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step #1: Write your tests
&lt;/h2&gt;

&lt;p&gt;With pytest added to our Python dependencies, we’ll write a file called test_is_prime.py and put it in a subfolder called tests. We have a set of numbers that we’ll test to make sure our function determines correctly if they are prime or not. Here’s our test file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from is_prime import is_prime

def test_1_is_not_prime():
    assert not is_prime(1)

def test_2_is_prime():
    assert is_prime(2)

def test_3_is_prime():
    assert is_prime(3)

def test_4_is_not_prime():
    assert not is_prime(4)

def test_5_is_prime():
    assert is_prime(5)

def test_991_is_prime():
    assert is_prime(991)

def test_993_is_not_prime():
    assert not is_prime(993)

def test_7873_is_prime():
    assert is_prime(7873)

def test_7802143_is_not_prime():
    assert not is_prime(7802143)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When we run pytest from the command line, here’s what we see:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ pytest
=========================== test session starts ===========================
platform linux -- Python 3.8.10, pytest-8.0.2, pluggy-1.4.0
rootdir: /home/michael/project/tests
plugins: anyio-4.3.0
collected 9 items                                                                                                                                                                                            

test_is_prime.py .........                                                                                                                                                                             [100%]

============================ 9 passed in 0.02s ============================
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Our tests pass! It looks like is_prime is doing what it’s supposed to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step #2: Deploy your Heroku app
&lt;/h2&gt;

&lt;p&gt;It’s time to wire up Heroku. Assuming you have a Heroku account and you’ve installed the Heroku CLI, creating your Heroku app is going to go pretty quickly.&lt;/p&gt;

&lt;p&gt;Heroku will look in our project root folder for a file called requirements.txt, listing the Python dependencies our project has. This is what the file should look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fastapi==0.110.1
pydantic==2.7.0
uvicorn==0.29.0
pytest==8.0.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, Heroku will look for a file called Procfile to determine how to start our Python application. Procfile should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;web: uvicorn main:app --host=0.0.0.0 --port=${PORT}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With those files in place, let’s create our app.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ heroku login

~/project$ heroku apps:create is-it-prime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That was it? Yeah. That was it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step #3: Push your code to Heroku
&lt;/h2&gt;

&lt;p&gt;Next, we push our project code to the git remote that the Heroku CLI set up when we created our app.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ git push heroku main
…
remote: -----&amp;gt; Launching...
remote:        Released v3
remote:        https://is-it-prime-2f2e4fe7adc1.herokuapp.com/ deployed to Heroku

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So, that’s done. Let’s check our API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl https://is-it-prime-2f2e4fe7adc1.herokuapp.com/prime/91
false

$ curl https://is-it-prime-2f2e4fe7adc1.herokuapp.com/prime/7873
true

$ curl https://is-it-prime-2f2e4fe7adc1.herokuapp.com/prime/7802143
false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It works!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step #4: Create a Heroku Pipeline to use Heroku CI
&lt;/h2&gt;

&lt;p&gt;Now, we want to create a Heroku Pipeline with Heroku CI enabled so that we can run our tests.&lt;/p&gt;

&lt;p&gt;We create the pipeline (called is-it-prime-pipeline), adding the app we created above to the staging phase of the pipeline.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku pipelines:create \
  --app=is-it-prime \
  --stage=staging \
  is-it-prime-pipeline

Creating is-it-prime-pipeline pipeline... done
Adding ⬢ is-it-prime to is-it-prime-pipeline pipeline as staging... done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With our pipeline created, we want to connect it to a GitHub repo so that our actions on the repo (such as new pull requests or merges) can trigger events in our pipeline (like automatically running the test suite).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku pipelines:connect is-it-prime-pipeline -r capnMB/heroku-ci-demo

Linking to repo... done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, I’m connecting my pipeline to my GitHub repo. When something like a pull request or a merge occurs in my repo, it will trigger the Heroku CI to run the test suite.&lt;/p&gt;

&lt;p&gt;Next, we need to configure our test environment in an app.json manifest. Our file contents should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "environments": {
    "test": {
      "formation": {
        "test": {
          "quantity": 1,
          "size": "standard-1x"
        }
      },
      "scripts": {
        "test": "pytest"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This manifest specifies the script used to run our test suite, along with the dyno size (standard-1x) we want for our test environment. We commit this file to our repo.&lt;/p&gt;

&lt;p&gt;Finally, in the web UI for Heroku, we navigate to the Tests page of our pipeline, and we click the Enable Heroku CI button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ubtn0a07uain6p04hpo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ubtn0a07uain6p04hpo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After enabling Heroku CI, here’s what we see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wg7b1avo5r2kp0fvdqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wg7b1avo5r2kp0fvdqz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step #5: Run your tests with Heroku CI
&lt;/h2&gt;

&lt;p&gt;Just to demonstrate it, we can manually trigger a run of our test suite using the Heroku CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku ci:run --pipeline is-it-prime-pipeline
…
-----&amp;gt; Running test command `pytest`...
========================= test session starts ============================
platform linux -- Python 3.12.3, pytest-8.0.2, pluggy-1.4.0
rootdir: /app
plugins: anyio-4.3.0
collected 9 items

tests/test_is_prime.py .........                                         [100%]

============================ 9 passed in 0.03s ============================
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;How does the test run look in our browser? We navigate to our pipeline and click Tests. There, we see our first test run in the left-side nav.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd05h4x3fbcedwlitgbk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd05h4x3fbcedwlitgbk6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A closer inspection of our tests shows this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh89ty7kf37j2yqz45wk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh89ty7kf37j2yqz45wk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome. Now, let’s push some new code to a branch in our repo and watch the tests run!&lt;/p&gt;

&lt;p&gt;We create a new branch (called new-test), adding another test case to test_is_prime.py. As soon as we push our branch to GitHub, here’s what we see at Heroku:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3g53zgzvd4nypxsi8qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3g53zgzvd4nypxsi8qd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Heroku CI detects the pushed code and automates a new run of the test suite. Not too long after, we see the successful results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gkm2t7fj57yxr84btte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gkm2t7fj57yxr84btte.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Heroku CI for the win
&lt;/h2&gt;

&lt;p&gt;If you’re using Heroku for your production environment—and you’re ready to go all in with DevOps—then using &lt;a href="https://devcenter.heroku.com/articles/pipelines" rel="noopener noreferrer"&gt;pipelines&lt;/a&gt; and &lt;a href="https://devcenter.heroku.com/articles/heroku-ci" rel="noopener noreferrer"&gt;Heroku CI&lt;/a&gt; may be the way to go.&lt;/p&gt;

&lt;p&gt;Rather than using different tools and platforms for building, testing, reviewing, staging, and releasing to production… I can consolidate all these pieces in a single Heroku Pipeline. And with Heroku CI, I get automated testing with every push to my repo.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>pipeline</category>
      <category>heroku</category>
      <category>testing</category>
    </item>
    <item>
      <title>How To Build a Simple GitHub Action To Deploy a Django Application to the Cloud</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Mon, 10 Jun 2024 14:25:13 +0000</pubDate>
      <link>https://forem.com/heroku/how-to-build-a-simple-github-action-to-deploy-a-django-application-to-the-cloud-4395</link>
      <guid>https://forem.com/heroku/how-to-build-a-simple-github-action-to-deploy-a-django-application-to-the-cloud-4395</guid>
      <description>&lt;p&gt;Continuous integration and continuous delivery (CI/CD) capabilities are basic expectations for modern development teams who want fast feedback on their changes and rapid deployment to the cloud. In recent years, we’ve seen the growing adoption of GitHub Actions, a feature-rich CI/CD system that dovetails nicely with cloud hosting platforms such as Heroku. In this article, we’ll demonstrate the power of these tools used in combination—specifically how GitHub Actions can be used to quickly deploy a Django application to the cloud. &lt;/p&gt;

&lt;h2&gt;
  
  
  A Quick Introduction to Django
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.djangoproject.com/" rel="noopener noreferrer"&gt;Django&lt;/a&gt; is a Python web application framework that’s been around since the early 2000s. It follows a model-view-controller (MVC) architecture and is known as the “batteries-included” web framework for Python. That’s because it has lots of capabilities, including a strong object-relational mapping (ORM) for abstracting database operations and models. It also has a rich templating system with many object-oriented design features.&lt;/p&gt;

&lt;p&gt;Instagram, Nextdoor, and Bitbucket are examples of applications built using Django. Clearly, if Django is behind Instagram, then we know that it can scale well. (Instagram consistently ranks among the most visited sites in the world!)&lt;/p&gt;

&lt;p&gt;Security is another built-in feature; authentication, cross-site scripting protection, and CSRF features all come out of the box and are easy to configure. Django is over 20 years old, which means it has a large dev community and documentation base—both helpful when you’re trying to figure out why something has gone awry.&lt;/p&gt;

&lt;p&gt;Downsides to Django? Yes, there are a few, with the biggest one being a steeper learning curve than other web application frameworks. You need to know parts of everything in the system to get it to work. For example, to get a minimal “hello world” page up in your browser, you need to set up the ORM, templates, views, routes, and a few other things. Contrast that with a framework like Flask (which is, admittedly, less feature-rich), where less than 20 lines of code can get your content displayed on a web page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Our Simple Django Application
&lt;/h2&gt;

&lt;p&gt;If you’re not familiar with Django, &lt;a href="https://docs.djangoproject.com/en/5.0/intro/tutorial01/" rel="noopener noreferrer"&gt;their tutorial&lt;/a&gt; is a good place to start learning how to get a base system configured and running. For this article, I’ve created a similar system using a PostgreSQL database and a few simple models and views. But we won’t spend time describing how to set up a complete Django application. That’s what the Django tutorial is for.&lt;/p&gt;

&lt;p&gt;My application here is different from the tutorial in that I use PostgreSQL—instead of the default SQLite—as the database engine. The trouble with SQLite (besides poor performance in a web application setting) is that it is file-based, and the file resides on the same server as the web application that uses it. Most cloud platforms assume a stateless deployment, meaning the container that holds the application is wiped clean and refreshed every deployment. So, your database should run on a separate server from the web application. PostgreSQL will provide that for us.&lt;/p&gt;

&lt;p&gt;The source code for this mini-demo project is available in &lt;a href="https://github.com/CapnMB/django-heroku-github-actions" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  Install Python dependencies
&lt;/h3&gt;

&lt;p&gt;After you have cloned the repository, start up a virtual environment and install the Python dependencies for this project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

(venv) ~/project$ pip install -r requirements.txt


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Set up Django to use PostgreSQL
&lt;/h3&gt;

&lt;p&gt;To use PostgreSQL with Django, we use the following packages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/psycopg2/" rel="noopener noreferrer"&gt;psycopg2&lt;/a&gt; provides the engine drivers for Postgres.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/dj-database-url/" rel="noopener noreferrer"&gt;dj-database-url&lt;/a&gt; helps us set up the database connection string from an environment variable (useful for local testing and cloud deployments).&lt;/li&gt;
&lt;/ul&gt;
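Conceptually, dj-database-url just splits the DATABASE_URL string into the individual fields Django's DATABASES setting expects. A rough stdlib sketch of the idea (not the package's actual code; parse_db_url is a hypothetical helper for illustration):

```python
from urllib.parse import urlparse

def parse_db_url(url):
    """Sketch of what dj-database-url does: split a DATABASE_URL
    string into the fields Django's DATABASES setting expects."""
    p = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": p.path.lstrip("/"),   # database name comes from the URL path
        "USER": p.username,
        "PASSWORD": p.password,
        "HOST": p.hostname,
        "PORT": p.port,
    }

cfg = parse_db_url("postgres://dbuser:password@localhost:5432/django_test_db")
```

In practice, dj_database_url.config() reads the URL from the DATABASE_URL environment variable and adds extras such as connection pooling options.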

&lt;p&gt;In our Django app, we navigate to mysite/mysite/ and modify settings.py (around line 78) to use PostgreSQL.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

DATABASES = {"default": dj_database_url.config(conn_max_age=600, ssl_require=True)}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’ll start by testing out our application locally. So, on your local PostgreSQL instance, create a new database.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

postgres=# create database django_test_db;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Assuming our PostgreSQL username is dbuser and the password is password, then our DATABASE_URL will look something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

postgres://dbuser:password@localhost:5432/django_test_db


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From here, we need to run our database migrations to set up our tables.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

(venv) ~/project$ \
  DATABASE_URL=postgres://dbuser:password@localhost:5432/django_test_db\
  python mysite/manage.py migrate

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, movie_journal, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying auth.0012_alter_user_first_name_max_length... OK
  Applying movie_journal.0001_initial... OK
  Applying sessions.0001_initial... OK


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Test application locally
&lt;/h3&gt;

&lt;p&gt;Now that we have set up our database, we can spin up our application and test it in the browser.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

(venv) ~/project$ \
  DATABASE_URL=postgres://dbuser:password@localhost:5432/django_test_db\
  python mysite/manage.py runserver

…
Django version 4.2.11, using settings 'mysite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In our browser, we visit &lt;a href="http://localhost:8000/movie-journal" rel="noopener noreferrer"&gt;http://localhost:8000/movie-journal&lt;/a&gt;. This is what we see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr0vbhpdxcehg8enfzzr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr0vbhpdxcehg8enfzzr.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’re up and running! We can go through the flow of creating a new journal entry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g0zzrbeis1ega868p8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g0zzrbeis1ega868p8p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking in our database, we see the record for our new entry.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

django_test_db=# select * from movie_journal_moviejournalentry;
-[ RECORD 1 ]+-------------------------------------------------------------
id           | 1
title        | Best of the Best
imdb_link    | https://www.imdb.com/title/tt0096913/
is_positive  | t
review       | Had some great fight scenes. The plot was amazing.
release_year | 1989
created_at   | 2024-03-29 09:36:59.24143-07
updated_at   | 2024-03-29 09:36:59.241442-07


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Our application is working. We’re ready to deploy. Let’s walk through how to deploy using GitHub Actions directly from our repository on commit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Power of GitHub Actions
&lt;/h2&gt;

&lt;p&gt;Over the years, GitHub Actions has built up a large library of jobs/workflows, providing lots of reusable code and conveniences for developers.&lt;/p&gt;

&lt;p&gt;With CI/CD, a development team can get fast feedback as soon as code changes are committed and pushed. Typical jobs found in a CI pipeline include style checkers, static analysis tools, and unit test runners. All of these help enforce good coding practices and adherence to team standards. Yes, all these tools existed before. But now, developers don’t need to worry about manually running them or waiting for them to finish.&lt;/p&gt;

&lt;p&gt;Push your changes to the remote branch, and the job starts automatically. Go on to focus on your next coding task as GitHub runs the current jobs and displays their results as they come in. That’s the power of automation and the cloud, baby!&lt;/p&gt;

&lt;h3&gt;
  
  
  Plug-and-play GitHub Action workflows
&lt;/h3&gt;

&lt;p&gt;You can even have GitHub create your job configuration file for you. Within your repository on GitHub, click Actions. You’ll see an entire library of templates, giving you pre-built workflows that could potentially fit your needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwvmx2p6ytcvalttw8wi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwvmx2p6ytcvalttw8wi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s click on the Configure button for the Pylint workflow. It looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: Pylint

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10"]
    steps:
    - uses: actions/checkout@v3
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v3
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pylint
    - name: Analysing the code with pylint
      run: |
        pylint $(git ls-files '*.py')



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This configuration directs GitHub Actions to create a new workflow in your repository named Pylint. It is triggered by a push to any branch. It has one job, build, which runs on the latest Ubuntu image. Then, it runs all the steps for each of the three Python versions specified.&lt;/p&gt;

&lt;p&gt;The steps are where the nitty-gritty work is defined. In this example, the job checks out your code, sets up the Python version, installs dependencies, and then runs the linter over your code. &lt;/p&gt;

&lt;p&gt;Let’s create our own GitHub Action workflow to deploy our application directly to Heroku.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying to Heroku via a GitHub Action
&lt;/h2&gt;

&lt;p&gt;Here’s the good news: it’s easy. First, &lt;a href="https://signup.heroku.com/" rel="noopener noreferrer"&gt;sign up for a Heroku account&lt;/a&gt; and &lt;a href="https://devcenter.heroku.com/articles/heroku-cli" rel="noopener noreferrer"&gt;install the Heroku CLI&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Login, create app, and PostgreSQL add-on
&lt;/h3&gt;

&lt;p&gt;With the Heroku CLI, we run the following commands to create our app and the PostgreSQL add-on:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ heroku login

$ heroku apps:create django-github
Creating ⬢ django-github... done
https://django-github-6cbf23e36b5b.herokuapp.com/ | https://git.heroku.com/django-github.git

$ heroku addons:create heroku-postgresql:mini --app django-github
Creating heroku-postgresql:mini on ⬢ django-github... ~$0.007/hour (max $5/month)
Database has been created and is available
 ! This database is empty. If upgrading, you can transfer
 ! data from another database with pg:copy


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Add Heroku app host to allowed hosts list in Django
&lt;/h3&gt;

&lt;p&gt;In our Django application settings, we need to update the list of &lt;a href="https://docs.djangoproject.com/en/5.0/ref/settings/#allowed-hosts" rel="noopener noreferrer"&gt;ALLOWED_HOSTS&lt;/a&gt;, which represent the host/domain names that your Django site can serve. We need to add the host from our newly created Heroku app. Edit mysite/mysite/settings.py, at around line 31, to add your Heroku app host. It will look similar to this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

ALLOWED_HOSTS = ["localhost", "django-github-6cbf23e36b5b.herokuapp.com"]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Don’t forget to commit this file to your repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Procfile and requirements.txt
&lt;/h3&gt;

&lt;p&gt;Next, we need to add a Heroku-specific file called Procfile. This goes into the root folder of our repository. This file tells Heroku how to start up our app and run migrations. It should have the following contents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

web: gunicorn --pythonpath mysite mysite.wsgi:application
release: cd mysite &amp;amp;&amp;amp; ./manage.py migrate --no-input



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Heroku will also need your requirements.txt file so it knows which Python dependencies to install.&lt;/p&gt;
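For a project like this, requirements.txt might look something like the following (Django 4.2.11 matches the version shown earlier; the other entries and their versions are illustrative assumptions, not taken from the article):

```text
Django==4.2.11
gunicorn
psycopg2
dj-database-url
```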

&lt;h3&gt;
  
  
  Get your Heroku API key
&lt;/h3&gt;

&lt;p&gt;We will need our Heroku account API key. We’ll store this at GitHub so that our GitHub Action has authorization to deploy code to our Heroku app.&lt;/p&gt;

&lt;p&gt;In your Heroku account settings, find the auto-generated API key and copy the value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb6cffsiu1jg6jb8dnq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb6cffsiu1jg6jb8dnq9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, in your GitHub repository settings, navigate to Secrets and variables &amp;gt; Actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cwsoho9yg807p89kjmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cwsoho9yg807p89kjmr.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On that page, click New repository secret. Supply a name for your repository secret, paste in your Heroku API key, and click Add secret.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrbqajg0luxgjca2gtuf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrbqajg0luxgjca2gtuf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your list of GitHub repository secrets should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsupreljvjoai37ih5r2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsupreljvjoai37ih5r2t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the job configuration file
&lt;/h2&gt;

&lt;p&gt;Let’s create our GitHub Action workflow. Typically, we configure CI/CD jobs with a YAML file. With GitHub Actions, this is no different.&lt;/p&gt;

&lt;p&gt;To add an action to your repository, create a .github subfolder in your project, and then create a workflows subfolder within that one. In .github/workflows/, we’ll create a file called django.yml. Your project tree should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

.
├── .git
│   └── …
├── .github
│   └── workflows
│       └── django.yml

├── mysite
│   ├── manage.py
│   ├── mysite
│   │   ├── …
│   │   └── settings.py
│   └── …
├── Procfile
└── requirements.txt


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Our django.yml file has the following contents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: Django CI

on:
  push:
    branches: [ "main" ]

jobs:
  release:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
       - uses: akhileshns/heroku-deploy@v3.13.15
         with:
           heroku_api_key: ${{ secrets.HEROKU_API_KEY }}
           heroku_app_name: "&amp;lt;your-heroku-app-name&amp;gt;"
           heroku_email: "&amp;lt;your-heroku-email&amp;gt;"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This workflow builds off of the &lt;a href="https://github.com/marketplace/actions/deploy-to-heroku" rel="noopener noreferrer"&gt;Deploy to Heroku Action&lt;/a&gt; in the GitHub Actions library. In fact, using that pre-built action makes our Heroku deployment simple. The only things you need to configure in this file are your Heroku app name and account email.&lt;/p&gt;

&lt;p&gt;When we commit this file to our repo and push our main branch to GitHub, this kicks off our GitHub Action job for deploying to Heroku. In GitHub, we click the Actions tab and see the newly triggered workflow. When we click the release job in the workflow, this is what we see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvilmmholm7h67ehanyb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvilmmholm7h67ehanyb5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Near the bottom of the output of the deploy step, we see results from the Heroku deploy:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfnxp61t78ww8pzrk7bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfnxp61t78ww8pzrk7bm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we look in our Heroku app logs, we also see the successful deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4qppz58bblcke32mbmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4qppz58bblcke32mbmh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally, when we test our Heroku-deployed app in our browser, we see that it’s up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5im1l1afiq0p8biwz3il.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5im1l1afiq0p8biwz3il.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congrats! You’ve successfully deployed your Django application to Heroku via a GitHub Action!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we set up a simple Django application with a PostgreSQL database. Then, we walked through how to use GitHub Actions to deploy the application directly to Heroku on every commit.&lt;/p&gt;

&lt;p&gt;Django is a feature-rich web application framework for Python. Although it can take some time to configure things correctly on some cloud platforms, that’s not the case when you’re deploying to Heroku with GitHub Actions. Convenient off-the-shelf tools are available in both GitHub and Heroku, and they make deploying your Django application a breeze. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Working with Heroku Logplex for Comprehensive Application Logging</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Tue, 28 May 2024 15:13:48 +0000</pubDate>
      <link>https://forem.com/heroku/working-with-heroku-logplex-for-comprehensive-application-logging-89m</link>
      <guid>https://forem.com/heroku/working-with-heroku-logplex-for-comprehensive-application-logging-89m</guid>
      <description>&lt;p&gt;With the complexity of modern software applications, one of the biggest challenges for developers is simply understanding how their applications behave. Understanding the behavior of your app is key to maintaining its stability, performance, and security.&lt;/p&gt;

&lt;p&gt;This is a big reason why we do application logging: to capture and record events through an application’s lifecycle, so that we can gain valuable insights into our application. What kinds of insights? Application activity (user interactions, system events, and so on), errors and exceptions, resource usage, potential security threats, and more.&lt;/p&gt;

&lt;p&gt;When developers can capture and analyze these logs effectively, this improves application stability and security, which, in turn, improves the user experience. It’s a win-win for everybody.&lt;/p&gt;

&lt;p&gt;Application logging is easy—if you have the right tools. In this post, we’ll walk through using Heroku Logplex as a centralized logging solution. We’ll start by deploying a simple Python application to Heroku. Then, we’ll explore the different ways to use Logplex to view and filter our logs. Finally, we’ll show how to use Logplex to send your logs to an external service for further analysis.&lt;/p&gt;

&lt;p&gt;Ready to dive in? Let’s start with a brief introduction to Heroku Logplex.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing Heroku Logplex
&lt;/h2&gt;

&lt;p&gt;Heroku Logplex is a central hub that collects, aggregates, and routes log messages from various sources across your Heroku applications. Those sources include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dyno logs&lt;/strong&gt;: generated by your application running on Heroku dynos.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Heroku logs&lt;/strong&gt;: generated by Heroku itself, such as platform events and deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom sources&lt;/strong&gt;: generated by external sources, such as databases or third-party services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By consolidating logs in a single, central place, Logplex simplifies log management and analysis. You can find all your logs in one place for simplified monitoring and troubleshooting. You can perform powerful filtering and searching on your logs. And you can even route logs to different destinations for further processing and analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core components
&lt;/h2&gt;

&lt;p&gt;At its heart, Heroku Logplex consists of three crucial components that work together to streamline application logging:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Log sources&lt;/strong&gt; are the starting points where log messages originate within your Heroku environment. They are the dyno logs, Heroku logs, and custom sources we mentioned above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log drains&lt;/strong&gt; are the designated destinations for your log messages. Logplex allows you to configure drains to route your logs to various endpoints for further processing. Popular options for log drains include:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;External logging services&lt;/strong&gt; with advanced log management features, dashboards, and alerting capabilities. Examples include Datadog, Papertrail, and Sumo Logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notification systems&lt;/strong&gt; that send alerts or notifications based on specific log entries, enabling real-time monitoring and troubleshooting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom destinations&lt;/strong&gt; such as your own Syslog or web server.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log filters&lt;/strong&gt; act as checkpoints, allowing you to refine log messages before they reach their final destinations. Logplex lets you filter logs based on source, log level, and even message content. By using filters, you can significantly reduce the volume of data sent to each drain, focusing only on the log entries relevant to that destination.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Routing and processing
&lt;/h2&gt;

&lt;p&gt;As Logplex collects log messages from all your defined sources, it passes these messages through your configured filters, potentially discarding entries that don't match the criteria. Finally, filtered messages are routed to their designated log drains for further processing or storage.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Alright, enough talk. Show me how, already!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Logplex with Your Application
&lt;/h2&gt;

&lt;p&gt;Let’s walk through how to use Logplex for a simple Python application. To get started, make sure you have a Heroku account. Then, &lt;a href="https://devcenter.heroku.com/articles/heroku-cli" rel="noopener noreferrer"&gt;download and install the Heroku CLI&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo application
&lt;/h3&gt;

&lt;p&gt;You can find our very simple Python script (main.py) in the &lt;a href="https://github.com/CapnMB/heroku-logplex-python" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; for this demo. Our script runs an endless integer counter, starting from zero. With each iteration, it emits a log message (cycling through log levels INFO, DEBUG, ERROR, and WARN). Whenever it detects a prime number, it emits an additional CRITICAL log event to let us know. We use &lt;a href="https://docs.sympy.org/latest/modules/ntheory.html#sympy.ntheory.primetest.isprime" rel="noopener noreferrer"&gt;isprime from the sympy library&lt;/a&gt; to help us determine if a number is prime.&lt;/p&gt;
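&lt;p&gt;As a rough idea of what main.py does, here is a simplified, dependency-free sketch (not the repo’s actual code — in particular, the stand-in is_prime below replaces sympy’s isprime):&lt;/p&gt;

```python
import itertools
import json
import logging

# Stand-in primality test so this sketch has no external dependencies;
# the actual script uses sympy.ntheory.isprime instead.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# The script cycles through these levels, one level per number.
LEVELS = [logging.INFO, logging.DEBUG, logging.ERROR, logging.WARNING]

def log_numbers(limit):
    """Count upward, logging each number and flagging primes as CRITICAL."""
    logging.basicConfig(level=logging.DEBUG, format="%(message)s")
    for i in itertools.islice(itertools.count(), limit):
        level = LEVELS[i % len(LEVELS)]
        logging.log(level, json.dumps(
            {"level": logging.getLevelName(level),
             "message": "New number", "Number": i}))
        if is_prime(i):
            logging.critical(json.dumps(
                {"message": "Prime found!", "Prime Number": i}))
```

&lt;p&gt;The real script loops forever with a short sleep between numbers; the limit parameter here just keeps the sketch finite.&lt;/p&gt;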

&lt;p&gt;To run this Python application on your local machine, first clone the repository. Then, install the dependencies:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(venv) ~/project$ pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, start up the Python application. We use gunicorn to spin up a server that binds to a port, while our prime number logging continues to run in the background. (We do this because a Heroku web process is expected to bind to a port, so that’s how we’ve written our application, even though our focus here is logging.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(venv) ~/project$ gunicorn -w 1 --bind localhost:8000 main:app
[2024-03-25 23:18:59 -0700] [785441] [INFO] Starting gunicorn 21.2.0
[2024-03-25 23:18:59 -0700] [785441] [INFO] Listening at: http://127.0.0.1:8000 (785441)
[2024-03-25 23:18:59 -0700] [785441] [INFO] Using worker: sync
[2024-03-25 23:18:59 -0700] [785443] [INFO] Booting worker with pid: 785443
{"timestamp": "2024-03-25T23:18:59.507828Z", "level": "INFO", "name": "root", "message": "New number", "Number": 0}
{"timestamp": "2024-03-25T23:19:00.509182Z", "level": "DEBUG", "name": "root", "message": "New number", "Number": 1}
{"timestamp": "2024-03-25T23:19:01.510634Z", "level": "ERROR", "name": "root", "message": "New number", "Number": 2}
{"timestamp": "2024-03-25T23:19:02.512100Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "Prime Number": 2}
{"timestamp": "2024-03-25T23:19:05.515133Z", "level": "WARNING", "name": "root", "message": "New number", "Number": 3}
{"timestamp": "2024-03-25T23:19:06.516567Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "Prime Number": 3}
{"timestamp": "2024-03-25T23:19:09.519082Z", "level": "INFO", "name": "root", "message": "New number", "Number": 4}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Simple enough. Now, let’s get ready to deploy it and work with logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the app
&lt;/h3&gt;

&lt;p&gt;We start by logging into Heroku through the CLI.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, we create a new Heroku app. I’ve named my app logging-primes-in-python, but you can name yours whatever you’d like.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku apps:create logging-primes-in-python
Creating ⬢ logging-primes-in-python... done
https://logging-primes-in-python-6140bfd3c044.herokuapp.com/ | https://git.heroku.com/logging-primes-in-python.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we &lt;a href="https://devcenter.heroku.com/articles/git#create-a-heroku-remote" rel="noopener noreferrer"&gt;create a Heroku remote&lt;/a&gt; for our GitHub repo with this Python application.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku git:remote -a logging-primes-in-python
set git remote heroku to https://git.heroku.com/logging-primes-in-python.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  A note on requirements.txt and Procfile
&lt;/h3&gt;

&lt;p&gt;We need to let Heroku know what dependencies our Python application needs, and also how it should start up our application. To do this, our repository has two files: requirements.txt and Procfile.&lt;/p&gt;

&lt;p&gt;The first file, requirements.txt, looks like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python-json-logger==2.0.4
pytest==8.0.2
sympy==1.12
gunicorn==21.2.0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And Procfile looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;web: gunicorn -w 1 --bind 0.0.0.0:${PORT} main:app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That’s it. Our entire repository has these files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ tree
.
├── main.py
├── Procfile
└── requirements.txt

0 directories, 3 files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Deploy the code
&lt;/h3&gt;

&lt;p&gt;Now, we’re ready to deploy our code. We run this command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git push heroku main
…
remote: Building source:
remote: 
remote: -----&amp;gt; Building on the Heroku-22 stack
remote: -----&amp;gt; Determining which buildpack to use for this app
remote: -----&amp;gt; Python app detected
…
remote: -----&amp;gt; Installing requirements with pip
…
remote: -----&amp;gt; Launching...
remote:        Released v3
remote:        https://logging-primes-in-python-6140bfd3c044.herokuapp.com/ deployed to Heroku
remote: 
remote: Verifying deploy... done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Verify the app is running
&lt;/h3&gt;

&lt;p&gt;To verify that everything works as expected, we can dive into Logplex right away. Logplex is enabled by default for all Heroku applications.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku logs --tail -a logging-primes-in-python
…
2024-03-22T04:34:15.540260+00:00 heroku[web.1]: Starting process with command `gunicorn -w 1 --bind 0.0.0.0:${PORT} main:app`
…
2024-03-22T04:34:16.425619+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:16.425552Z", "level": "INFO", "name": "root", "message": "New number", "taskName": null, "Number": 0}
2024-03-22T04:34:17.425987+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:17.425837Z", "level": "DEBUG", "name": "root", "message": "New number", "taskName": null, "Number": 1}
2024-03-22T04:34:18.000000+00:00 app[api]: Build succeeded
2024-03-22T04:34:18.426354+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:18.426205Z", "level": "ERROR", "name": "root", "message": "New number", "taskName": null, "Number": 2}
2024-03-22T04:34:19.426700+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:19.426534Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "taskName": null, "Prime Number": 2}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can see that logs are already being written. Heroku’s log format follows this scheme:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;timestamp source[dyno]: message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Timestamp&lt;/strong&gt;: The date and time recorded at the time the dyno or component produced the log line. The timestamp is in the format specified by RFC5424 and includes microsecond precision.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source&lt;/strong&gt;: All of your app’s dynos (web dynos, background workers, cron) have the source app. Meanwhile, all of Heroku’s system components (HTTP router, dyno manager) have the source heroku.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dyno&lt;/strong&gt;: The name of the dyno or component that wrote the log line. For example, web dyno #1 appears as web.1, and the Heroku HTTP router appears as router.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message&lt;/strong&gt;: The content of the log line. Logplex splits any dyno-generated lines that exceed 10,000 bytes into 10,000-byte chunks without extra trailing newlines, and each chunk is submitted as a separate log line.&lt;/li&gt;
&lt;/ul&gt;
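
&lt;p&gt;If you ever need to post-process these lines yourself, the scheme above is straightforward to parse. Here’s a rough sketch — the regex approximates the format shown and is not an official grammar:&lt;/p&gt;

```python
import re

# Rough parser for Heroku's `timestamp source[dyno]: message` log lines.
LINE_RE = re.compile(
    r"^(?P<timestamp>\S+)\s+"
    r"(?P<source>\w+)\[(?P<dyno>[^\]]+)\]:\s"
    r"(?P<message>.*)$"
)

def parse_log_line(line: str):
    """Return the line's fields as a dict, or None if it doesn't match."""
    match = LINE_RE.match(line)
    return match.groupdict() if match else None
```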

&lt;h3&gt;
  
  
  View and filter logs
&lt;/h3&gt;

&lt;p&gt;We’ve seen the first option for examining our logs: the Heroku CLI. You can use command-line arguments, such as --source and --dyno, to filter which logs to view.&lt;/p&gt;

&lt;p&gt;To specify the number of (most recent) log entries to view, do this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku logs --num 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To filter down logs to a specific dyno or source, do this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku logs --dyno web.1
$ heroku logs --source app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Of course, you can combine these filters, too:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku logs --source app --dyno web.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Heroku Dashboard is another place where you can look at your logs. On your app page, click More -&amp;gt; View logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oagc40pogeaprx2blto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oagc40pogeaprx2blto.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is what we see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib4jbeoyqc9jo1mwy8sb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib4jbeoyqc9jo1mwy8sb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you look closely, you’ll see different sources: heroku and app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Log Drains
&lt;/h2&gt;

&lt;p&gt;Let’s demonstrate how to use a log drain. For this, we’ll use &lt;a href="https://betterstack.com/logs" rel="noopener noreferrer"&gt;BetterStack&lt;/a&gt; (formerly Logtail). We create a free account. After logging in, we navigate to the Source page and click Connect source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2d03s20p081xdxcjb8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2d03s20p081xdxcjb8z.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We enter a name for our source and select Heroku as the source platform. Then, we click Create source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxayw8et6ojnxf52uoyg2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxayw8et6ojnxf52uoyg2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating our source, BetterStack provides the Heroku CLI command we would use to add a log drain for sending logs to BetterStack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgid8sqzj397k79nseqss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgid8sqzj397k79nseqss.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Technically, this command adds an HTTPS drain that points to an HTTPS endpoint from BetterStack. We run the command in our terminal, and then we restart our application:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku drains:add \
"https://in.logs.betterstack.com:6515/events?source_token=YKGWLN7****************" \
-a logging-primes-in-python


Successfully added drain https://in.logs.betterstack.com:6515/events?source_token=YKGWLN7*****************

$ heroku restart -a logging-primes-in-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Almost instantly, we begin to see our Heroku logs appear on the Live tail page at BetterStack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pbxmefp85emktmcsesw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pbxmefp85emktmcsesw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By using a log drain to send our logs from Heroku Logplex to an external service, we can take advantage of the features from BetterStack to work with our Heroku logs. For example, we can create visualization charts and configure alerts on certain log events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom drains
&lt;/h2&gt;

&lt;p&gt;In our example above, we created a custom HTTPS log drain that happened to point to an endpoint from BetterStack. However, we can send our logs to any endpoint we want. We could even send our logs to another Heroku app! 🤯 Imagine building a web service on Heroku that only Heroku Logplex can make POST requests to.&lt;/p&gt;
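
&lt;p&gt;As a rough idea of what such a receiver could look like, here is a hypothetical sketch using only the Python standard library. Logplex delivers batches of log lines via POST requests (with the application/logplex-1 content type); this handler simply splits the body into lines and prints them:&lt;/p&gt;

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DrainHandler(BaseHTTPRequestHandler):
    """Accepts Logplex drain POSTs and prints each log line."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        for line in body.splitlines():
            print("drain received:", line)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence the default per-request stderr logging

def run(port=8000):
    """Serve forever on the given port (call this from your entry point)."""
    HTTPServer(("", port), DrainHandler).serve_forever()
```

&lt;p&gt;In a real deployment you’d also want to authenticate requests (for example, with a token in the drain URL), since anyone who knows the endpoint could POST to it.&lt;/p&gt;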

&lt;h2&gt;
  
  
  Logging best practices
&lt;/h2&gt;

&lt;p&gt;Before we conclude our walkthrough, let’s briefly touch on some logging best practices.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Focus on relevant events&lt;/strong&gt;: Log only the information that’s necessary to understand and troubleshoot your application's behavior. Prioritize logging application errors, user actions, data changes, and other crucial activities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enrich logs with context&lt;/strong&gt;: Include details that provide helpful context to logged events. Your future troubleshooting self will thank you. So, instead of just logging "User logged in," capture details like user ID, device information, and relevant data associated with the login event. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embrace structured logging&lt;/strong&gt;: Use a standardized format like JSON to make your logs machine-readable. This allows easier parsing and analysis by logging tools, saving you time in analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protect sensitive data&lt;/strong&gt;: Never log anything that could compromise user privacy or violate data regulations. This includes passwords, credit card information, or other confidential data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Take advantage of log levels&lt;/strong&gt;: Use different log levels (like DEBUG, INFO, WARNING, and ERROR) to categorize log events based on their severity. This helps with issue prioritization, allowing you to focus on critical events requiring immediate attention.&lt;/li&gt;
&lt;/ol&gt;
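
&lt;p&gt;To make practice #3 concrete, here’s a minimal structured-logging sketch using only the standard library (the demo app gets the same effect from the python-json-logger package in its requirements.txt):&lt;/p&gt;

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line, including any context
    passed through logging's `extra=` argument."""

    # Attribute names present on every LogRecord, so we can spot extras.
    STANDARD_ATTRS = set(vars(logging.makeLogRecord({}))) | {"message"}

    def format(self, record):
        payload = {
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        }
        # Carry over context fields supplied via `extra=...`.
        payload.update({k: v for k, v in vars(record).items()
                        if k not in self.STANDARD_ATTRS})
        return json.dumps(payload)

def make_logger(name="app"):
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

&lt;p&gt;Now make_logger().info("User logged in", extra={"user_id": 42}) emits one machine-readable JSON line instead of free-form text.&lt;/p&gt;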

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Heroku Logplex empowers developers and operations teams with a centralized and efficient solution for application logging within the Heroku environment. While our goal in this article was to provide a basic foundation for understanding Heroku Logplex, remember that the platform offers a vast array of advanced features to explore and customize your logging based on your specific needs.&lt;/p&gt;

&lt;p&gt;As you dig deeper into Heroku’s documentation, you’ll come across advanced functionalities like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customizable log processing&lt;/strong&gt;: Leverage plugins and filters to tailor log processing workflows for specific use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time alerting&lt;/strong&gt;: Configure alerts based on log patterns or events to proactively address potential issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced log analysis tools&lt;/strong&gt;: Integrate with external log management services for comprehensive log analysis, visualization, and anomaly detection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By understanding the core functionalities and exploring the potential of advanced features, you can leverage Heroku Logplex to create a robust and efficient logging strategy. Ultimately, good logging will go a long way toward enhancing the reliability, performance, and security of your Heroku applications.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>logging</category>
      <category>heroku</category>
      <category>programming</category>
    </item>
    <item>
      <title>Caching RESTful API requests with Heroku’s Redis Add-on</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 17 Apr 2024 13:39:30 +0000</pubDate>
      <link>https://forem.com/heroku/caching-restful-api-requests-with-herokus-redis-add-on-3bg8</link>
      <guid>https://forem.com/heroku/caching-restful-api-requests-with-herokus-redis-add-on-3bg8</guid>
      <description>&lt;p&gt;Most software developers encounter two main problems: naming things, caching, and off-by-one errors. 🤦🏻‍♂️ &lt;/p&gt;

&lt;p&gt;In this tutorial, we’ll deal with caching. We’ll walk through how to implement RESTful request caching with an open source-licensed version of Redis. We’ll also set up and deploy this system easily with Heroku.&lt;/p&gt;

&lt;p&gt;For this demo, we’ll build a Node.js application with the Fastify framework, and we’ll integrate caching with Redis to reduce certain types of latency.&lt;/p&gt;

&lt;p&gt;Ready to dive in? Let’s go!&lt;/p&gt;

&lt;h2&gt;
  
  
  Node.js + Fastify + long-running tasks
&lt;/h2&gt;

&lt;p&gt;As I’m sure our readers know, Node.js is a very popular platform for building web applications. With its support for JavaScript (or TypeScript, or both at the same time!), Node.js allows you to use the same language for both the frontend and the backend of your application. It also has a rich event loop that makes asynchronous request handling more intuitive.&lt;/p&gt;

&lt;p&gt;The concurrency model in Node.js is very performant, able to &lt;a href="https://www.pixel506.com/insights/how-much-traffic-can-nodejs-handle"&gt;handle upwards of 15,000 requests per second&lt;/a&gt;. But even then, you might still run into situations where the request latency is unacceptably high. We’ll show this with our application.&lt;/p&gt;

&lt;p&gt;As you follow along, you can always browse the codebase for this mini demo at my &lt;a href="https://github.com/CapnMB/fastify-caching-redis"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initialize the basic application
&lt;/h2&gt;

&lt;p&gt;By using &lt;a href="https://fastify.dev/"&gt;Fastify&lt;/a&gt;, you can quickly get a Node.js application up and running to handle requests. Assuming you have Node.js installed, you’ll start by initializing a new project. We’ll use &lt;a href="https://npmjs.com/"&gt;npm&lt;/a&gt; as our package manager.&lt;/p&gt;

&lt;p&gt;After initializing a new project, we will install our Fastify-related dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ npm i fastify fastify-cli fastify-plugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we update our package.json file to add two scripts and turn on ES module syntax. We make sure to have the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "type": "module",
  "main": "app.js",
  "scripts": {
    "start": "fastify start -a 0.0.0.0 -l info app.js",
    "dev": "fastify start -p 8000 -w -l info -P app.js"
  },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From there, we create our first file (routes.js) with an initial route:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// routes.js

export default async function (fastify, _opts) {
  fastify.get("/api/health", async (_, reply) =&amp;gt; {
    return reply.send({ status: "ok" });
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we create our app.js file that prepares a Fastify instance and registers the routes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app.js
import routes from "./routes.js";

export default async (fastify, opts) =&amp;gt; {
  fastify.register(routes);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These two simple files—our application and our route definitions—are all we need to get up and running with a small Fastify service that exposes one endpoint: /api/health. Our dev script in package.json is set to run the fastify-cli to start our server on localhost port 8000, which is good enough for now. We start up our server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, in another terminal window, we use &lt;a href="https://curl.se/"&gt;curl&lt;/a&gt; to hit the endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~$ curl http://localhost:8000/api/health 
{"status":"ok"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add a simulated long-running process
&lt;/h2&gt;

&lt;p&gt;We’re off to a good start. Next, let’s add another route to simulate a long-running process. This will help us gather some latency data. In routes.js, we add another route handler within our exported default async function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; fastify.get("/api/user-data", async (_, reply) =&amp;gt; {
    await sleep(5000);
    const userData = readData();
    return reply.send({ data: userData });
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This exposes another endpoint: /api/user-data. Here, we have a method to simulate reading a lot of data from a database (readData) and a long-running process (sleep). We define those methods in routes.js as well. They look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import fs from "fs";

function readData() {
  try {
    const data = fs.readFileSync("data.txt", "utf8");
    return data;
  } catch (err) {
    console.error(err);
  }
}

function sleep(ms) {
  return new Promise((resolve) =&amp;gt; {
    setTimeout(resolve, ms);
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With our new route in place, we restart our server (npm run dev).&lt;/p&gt;

&lt;h2&gt;
  
  
  Measure latency with curl
&lt;/h2&gt;

&lt;p&gt;How do we measure latency? The simplest way is to use curl. Curl captures various time profiling metrics when it makes requests. We just need to format curl’s output so that we can easily see the various latency values available. To do this, we define the output we want to see with a text file (curl-format.txt):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    time_namelookup:  %{time_namelookup}s\n
        time_connect:  %{time_connect}s\n
     time_appconnect:  %{time_appconnect}s\n
    time_pretransfer:  %{time_pretransfer}s\n
       time_redirect:  %{time_redirect}s\n
  time_starttransfer:  %{time_starttransfer}s\n
  -------------------  ----------\n
          time_total:  %{time_total}s\n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With our output format defined, we can use it with our next curl call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -w "@curl-format.txt" \
     -o /dev/null -s \
     "http://localhost:8000/api/user-data"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response we receive looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  time_namelookup:  0.000028s
        time_connect:  0.000692s
     time_appconnect:  0.000000s
    time_pretransfer:  0.000772s
       time_redirect:  0.000000s
  time_starttransfer:  5.055683s
                       ----------
          time_total:  5.058479s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Well, that’s not good. Over five seconds is way too long for a transfer time (the time it takes the server to actually handle the request). Imagine if this endpoint was being hit hundreds or thousands of times per second! Your users would be frustrated, and your server may crash under the weight of continually re-doing this work. &lt;/p&gt;

&lt;h2&gt;
  
  
  Redis to the rescue!
&lt;/h2&gt;

&lt;p&gt;Caching your responses is the first line of defense to reduce your transfer time (assuming you’ve addressed any of the poor programming practices that might be causing the latency!). So, let’s assume we’ve done everything we can do to reduce latency, but our application still needs five seconds to put this complex data together and return it to the user.&lt;/p&gt;

&lt;p&gt;In our scenario, because &lt;u&gt;the data is the same every time&lt;/u&gt; for every request to /api/user-data, we have a perfect candidate for caching. With caching, we’ll perform the necessary computation once, cache the result, and return the cached value for all subsequent requests.&lt;/p&gt;
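&lt;p&gt;The pattern just described is often called cache-aside, and it can be sketched independently of Redis or Fastify. Below is a minimal illustration (the cacheAside helper and the Map stand-in are ours, for demonstration only): look the key up, compute on a miss, store the result, and return it.&lt;/p&gt;

```javascript
// Minimal cache-aside helper: check the cache, compute on a miss,
// store the result, then return it. The cache only needs get/set,
// so a Map works for illustration and a Redis client in production.
async function cacheAside(cache, key, compute) {
  const cached = await cache.get(key);
  if (cached != null) {
    return cached; // cache hit: skip the expensive work
  }
  const value = await compute(); // cache miss: do the work once
  await cache.set(key, value);
  return value;
}

// Demo with a Map standing in for Redis:
(async () => {
  const cache = new Map();
  let computations = 0;
  const expensive = async () => {
    computations += 1;
    return "user-data-payload";
  };
  await cacheAside(cache, "user-data", expensive); // computes
  await cacheAside(cache, "user-data", expensive); // served from cache
  console.log(computations); // 1
})();
```

&lt;p&gt;However many times the key is requested afterward, the expensive computation runs only once.&lt;/p&gt;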

&lt;p&gt;&lt;a href="https://redis.com/"&gt;Redis&lt;/a&gt; is a performant, in-memory key/value store, and it’s a common tool used for caching. To leverage it, we first install Redis on our local machine. Then, we need to add &lt;a href="https://www.npmjs.com/package/@fastify/redis"&gt;Fastify’s Redis plugin&lt;/a&gt; to our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ npm i @fastify/redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Register the Redis plugin with Fastify
&lt;/h2&gt;

&lt;p&gt;We create a file, redis.js, which configures our Redis plugin and registers it with Fastify. Our file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// redis.js

const REDIS_URL = process.env.REDIS_URL || "redis://127.0.0.1:6379";

import fp from "fastify-plugin";
import redis from "@fastify/redis";

const parseRedisUrl = (redisUrl) =&amp;gt; {
  const url = new URL(redisUrl);
  const password = url.password;
  return {
    host: url.hostname,
    port: url.port,
    password,
  };
};

export default fp(async (fastify) =&amp;gt; {
  fastify.register(redis, parseRedisUrl(REDIS_URL));
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the lines in this file are dedicated to parsing a REDIS_URL value into a host, port, and password. If we have REDIS_URL set properly at runtime as an environment variable, then registering Redis with Fastify is simple. After configuring our plugin, we just need to modify app.js to use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app.js

import redis from "./redis.js";
import routes from "./routes.js";

export default async (fastify, opts) =&amp;gt; {
  fastify.register(redis);
  fastify.register(routes);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have access to our Redis instance by referencing fastify.redis anywhere within our app. &lt;/p&gt;
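&lt;p&gt;As a quick sanity check, the URL-parsing logic in redis.js can be exercised on its own. The connection string below uses made-up credentials, for illustration only:&lt;/p&gt;

```javascript
// parseRedisUrl, as defined in redis.js, splits a Redis connection
// string into the host, port, and password fields the plugin expects.
const parseRedisUrl = (redisUrl) => {
  const url = new URL(redisUrl);
  return {
    host: url.hostname,
    port: url.port,
    password: url.password,
  };
};

// Hypothetical credentials:
console.log(parseRedisUrl("redis://:s3cret@example.com:6379"));
// { host: 'example.com', port: '6379', password: 's3cret' }
```

&lt;p&gt;Note that url.port comes back as a string; ioredis accepts either form.&lt;/p&gt;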

&lt;h2&gt;
  
  
  Modify our endpoint to use caching
&lt;/h2&gt;

&lt;p&gt;With Redis in the mix, let’s change our /api/user-data endpoint to use caching:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  fastify.get("/api/user-data", async (_, reply) =&amp;gt; {
    const { redis } = fastify;

    // check if data is in cache
    const data = await redis.get("user-data", (err, val) =&amp;gt; {
      if (val) {
        return { data: val };
      }
      return null;
    });

    if (data) {
      return reply.send(data);
    }

    // simulate a long-running task
    await sleep(5000);
    const userData = readData();


    // add data to the cache
    redis.set("user-data", userData);


    return reply.send({ data: userData });
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, you can see that we’ve hardcoded a single Redis key, user-data, and stored our data under it. Of course, the key could be a user ID or some other value that identifies a particular type of request or state. We could also &lt;a href="https://github.com/redis/ioredis?tab=readme-ov-file#expiration"&gt;set a timeout value&lt;/a&gt; to expire the key, in case we expect the data to change after a certain window of time.&lt;/p&gt;

&lt;p&gt;If there is data in the cache, then we return it and skip all the time-consuming work. Otherwise, we do the long-running computation, add the result to the cache, and then return it to the user.&lt;/p&gt;

&lt;p&gt;What do our transfer times look like after hitting this endpoint two more times (the first one to add the data into the cache, and the second one to retrieve it)?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   time_namelookup:  0.000023s
        time_connect:  0.000560s
     time_appconnect:  0.000000s
    time_pretransfer:  0.000729s
       time_redirect:  0.000000s
  time_starttransfer:  0.044512s
                       ----------
          time_total:  0.047479s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Much better! We’ve reduced our request times from several seconds to milliseconds. That’s a huge improvement in performance! &lt;/p&gt;

&lt;p&gt;Redis offers many more features that may be useful here, including expiring key/value pairs after a set amount of time; that’s the more common scenario in production environments.&lt;/p&gt;
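&lt;p&gt;With ioredis (which the @fastify/redis plugin wraps), an expiring key is a one-liner, for example redis.set("user-data", value, "EX", 60) to expire after 60 seconds. The semantics can be sketched without a Redis server; the TtlCache class below is purely illustrative and not part of the app:&lt;/p&gt;

```javascript
// A tiny TTL cache illustrating expiry semantics. With ioredis you
// would instead call redis.set(key, value, "EX", seconds) and let
// Redis evict the key for you.
class TtlCache {
  constructor() {
    this.entries = new Map();
  }
  set(key, value, ttlMs) {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
  get(key) {
    const entry = this.entries.get(key);
    if (entry == null) return null;
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(key); // stale: evict and report a miss
      return null;
    }
    return entry.value;
  }
}

const demoCache = new TtlCache();
demoCache.set("user-data", "payload", 50); // expire after 50 ms
console.log(demoCache.get("user-data")); // "payload"
setTimeout(() => {
  console.log(demoCache.get("user-data")); // null: the entry expired
}, 100);
```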

&lt;h2&gt;
  
  
  Using Redis in your Heroku deployment
&lt;/h2&gt;

&lt;p&gt;Up to this point, we’ve only shown how this works in a local environment. Now, let’s go one step further and deploy it all to the cloud. Fortunately, Heroku provides many options for deploying web applications and working with Redis. Let’s walk through how to get set up there.&lt;/p&gt;

&lt;p&gt;After &lt;a href="https://signup.heroku.com/"&gt;signing up for a Heroku account&lt;/a&gt; and installing their &lt;a href="https://devcenter.heroku.com/articles/heroku-command-line"&gt;CLI tool&lt;/a&gt;, we’re ready to create a new app. In our case, we’ll call our app fastify-with-caching. Here are our steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Login to Heroku
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/projects$ heroku login
...
Logging in... done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Create the Heroku app
&lt;/h3&gt;

&lt;p&gt;When we create our Heroku app, we’ll get back our Heroku app URL. We take note of this because we’ll use it in our subsequent curl requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ heroku create -a fastify-with-caching
Creating ⬢ fastify-with-caching... done
https://fastify-with-caching-3e247d11f4ad.herokuapp.com/ | https://git.heroku.com/fastify-with-caching.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Add the Redis add-on
&lt;/h3&gt;

&lt;p&gt;We need to set up a Redis add-on that meets our application’s needs. For our demo project, it’s sufficient to create a Mini-tier Redis instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ heroku addons:create heroku-redis:mini -a fastify-with-caching
Creating heroku-redis:mini on ⬢ fastify-with-caching…
…
redis-transparent-98258 is being created in the background.
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Spinning up the Redis instance may take two or three minutes. We can check the status of our instance periodically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ heroku addons:info redis-transparent-98258
...
State:        creating

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not too long after, we see this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;State:        created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re just about ready to go!&lt;/p&gt;

&lt;p&gt;When Heroku spins up our Redis add-on, it also adds our Redis credentials as config variables attached to our Heroku app. We can run the following command to see these config variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ heroku config -a fastify-with-caching
=== fastify-with-caching Config Vars

REDIS_TLS_URL: rediss://:p171d98f7696ab7eb2319f7b78083af749a0d0bb37622fc420e6c1205d8c4579c@ec2-18-213-142-76.compute-1.amazonaws.com:15940
REDIS_URL:     redis://:p171d98f7696ab7eb2319f7b78083af749a0d0bb37622fc420e6c1205d8c4579c@ec2-18-213-142-76.compute-1.amazonaws.com:15939
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Your credentials, of course, will be unique and different from what you see above.)&lt;/p&gt;

&lt;p&gt;Notice that we have a REDIS_URL variable all set up for us. It’s a good thing our redis.js file is coded to properly parse an environment variable called REDIS_URL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Create a Heroku remote
&lt;/h3&gt;

&lt;p&gt;Finally, we need to &lt;a href="https://devcenter.heroku.com/articles/git#create-a-heroku-remote"&gt;create a Heroku remote&lt;/a&gt; in our git repo so that we can easily deploy with git.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ heroku git:remote -a fastify-with-caching
set git remote heroku to https://git.heroku.com/fastify-with-caching.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Deploy!
&lt;/h3&gt;

&lt;p&gt;Now, when we push our branch to our Heroku remote, Heroku will build and deploy our application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/project$ git push heroku main
...
remote: Building source:
remote: 
remote: -----&amp;gt; Building on the Heroku-22 stack
remote: -----&amp;gt; Determining which buildpack to use for this app
remote: -----&amp;gt; Node.js app detected
remote:        
remote: -----&amp;gt; Creating runtime environment
...
remote: -----&amp;gt; Compressing...
remote:        Done: 50.8M
remote: -----&amp;gt; Launching...
remote:        Released v4
remote:        https://fastify-with-caching-3e247d11f4ad.herokuapp.com/ deployed to Heroku
remote: 
remote: Verifying deploy... done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our application is up and running. It’s time to test it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test our deployed application
&lt;/h2&gt;

&lt;p&gt;We start with a basic curl request to our /api/health endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl https://fastify-with-caching-3e247d11f4ad.herokuapp.com/api/health
{"status":"ok"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Excellent. That looks promising.&lt;/p&gt;

&lt;p&gt;Next, let’s send our first request to the long-running process and capture the latency metrics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl \
  -w "@curl-format.txt" \
  -o /dev/null -s \
  https://fastify-with-caching-3e247d11f4ad.herokuapp.com/api/user-data

     time_namelookup:  0.035958s
        time_connect:  0.101336s
     time_appconnect:  0.249308s
    time_pretransfer:  0.249389s
       time_redirect:  0.000000s
  time_starttransfer:  5.384986s
  -------------------  ----------
          time_total:  6.554382s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we send the same request a second time, here’s the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl \
  -w "@curl-format.txt" \
  -o /dev/null -s \
  https://fastify-with-caching-3e247d11f4ad.herokuapp.com/api/user-data

     time_namelookup:  0.025807s
        time_connect:  0.091763s
     time_appconnect:  0.236050s
    time_pretransfer:  0.236119s
       time_redirect:  0.000000s
  time_starttransfer:  0.334859s
  -------------------  ----------
          time_total:  1.276264s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Much better! Caching allows us to bypass the long-running processes. From here, we can build out a much more robust caching mechanism for our application across all our routes and processes. We can continue to lean on Heroku and Heroku’s Redis add-on when we need to deploy our application to the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus Tip: Clearing the cache for future tests
&lt;/h2&gt;

&lt;p&gt;By the way, if you want to test this more than once, then you may occasionally need to delete the user-data key/value pair in Redis. You can use the Heroku CLI to access the Redis CLI for your Redis instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~$ heroku redis:cli -a fastify-with-caching
Connecting to redis-transparent-98258 (REDIS_TLS_URL, REDIS_URL):
ec2-18-213-142-76.compute-1.amazonaws.com:15940&amp;gt; DEL user-data
1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we explored how caching can greatly improve your web service's response time in cases where identical requests would produce identical responses. We looked at how to implement this with Redis, the industry-standard caching tool. We did this all with ease within a Node.js application that leverages the Fastify framework. Lastly, we deployed our demo application to Heroku, using their built-in Heroku Data for Redis instance management to cache in the cloud. &lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>redis</category>
      <category>heroku</category>
      <category>webdev</category>
      <category>restapi</category>
    </item>
    <item>
      <title>Getting to Know You - Speeding up Developer Onboarding with LLMs and Unblocked</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Mon, 08 Jan 2024 14:03:55 +0000</pubDate>
      <link>https://forem.com/mbogan/getting-to-know-you-speeding-up-developer-onboarding-with-llms-and-unblocked-2l5c</link>
      <guid>https://forem.com/mbogan/getting-to-know-you-speeding-up-developer-onboarding-with-llms-and-unblocked-2l5c</guid>
      <description>&lt;p&gt;As anyone who has hired new developers onto an existing software team can tell you, onboarding new developers is one of the most expensive things you can do. One of the most difficult things about onboarding junior developers is that it takes your senior developers away from their work. &lt;/p&gt;

&lt;p&gt;Even the best hires might get Imposter Syndrome, since they feel like they should know more than they do and have to depend on their peers. And even with the best documentation, it can be difficult to figure out where to start with onboarding.&lt;/p&gt;

&lt;p&gt;Onboarding senior developers takes time and resources as well.&lt;/p&gt;

&lt;p&gt;With the rise of LLMs, it seems like pointing one at your code, documentation, chats, and ticketing systems would make sense. The ability to converse with an LLM trained on the right dataset would be like adding a team member who can make sure no one gets bogged down sharing something that’s already documented. I thought I’d check out a new service called &lt;a href="https://docs.getunblocked.com/"&gt;Unblocked&lt;/a&gt; that does just this.&lt;/p&gt;

&lt;p&gt;In this article, we will take a spin through a code base I was completely unfamiliar with and see what it would be like to get going on a new team with this tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Sources
&lt;/h3&gt;

&lt;p&gt;If you’ve been following conversations around LLM development, then you know that they are only as good as the data they have access to. Fortunately, Unblocked allows you to connect a bunch of data sources to train your LLM. &lt;/p&gt;

&lt;p&gt;Additionally, because this LLM will be working on your specific code base and documentation, it wouldn’t even be possible to train it on another organization’s data. Unblocked isn’t trying to build a generic code advice bot. It’s personalized to your environment, so you don’t need to worry about data leaking to someone else.&lt;/p&gt;

&lt;p&gt;Setting up is pretty straightforward, thanks to lots of integrations with developer tools. After signing up for an account, you’ll be prompted to connect to the sources Unblocked supports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JUdCavg_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14fd757lqj69ohxed8nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JUdCavg_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14fd757lqj69ohxed8nw.png" alt="Image description" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll need to wait a few minutes (or longer, depending on how much content your team has) while Unblocked ingests your content and trains the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting started
&lt;/h3&gt;

&lt;p&gt;I tried exploring some of the features of Unblocked. While there’s &lt;a href="https://docs.getunblocked.com/productGuides/dashboard.html"&gt;a web dashboard&lt;/a&gt; that you’ll interact with most of the time, I recommend you install &lt;a href="https://docs.getunblocked.com/productGuides/mac.html"&gt;the Unblocked Mac app&lt;/a&gt;, also. The app will run in your menu bar and allow you to ask Unblocked a question from anywhere. There are a bunch of other features for teammates interacting with Unblocked. I may write about those later, but for now, I just like that it gives me a universal shortcut (Command+Shift+U) to access Unblocked at any time.&lt;/p&gt;

&lt;p&gt;Another feature of the macOS menu bar app is that it provides a quick way to install &lt;a href="https://docs.getunblocked.com/productGuides/ide.html"&gt;the IDE Plugins&lt;/a&gt; based on what I have installed on my machine. Of course, you don’t have to install them this way, but letting Unblocked handle the install takes some of the thinking out of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iDHVIifz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbq83llsw2fmw6jbo8dp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iDHVIifz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbq83llsw2fmw6jbo8dp.png" alt="Image description" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Asking Questions
&lt;/h3&gt;

&lt;p&gt;Since I am working on a codebase that is already in Unblocked, I don’t need to wait for anything after getting my account set up on the platform. If you set up your code and documentation, then you won’t need your new developers to wait either. &lt;/p&gt;

&lt;p&gt;Let’s take this for a spin and look at what questions a new developer might ask the bot.&lt;/p&gt;

&lt;p&gt;I started by asking a question about setting up the frontend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c2QiNJdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/crvmdys1ots795odzrle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c2QiNJdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/crvmdys1ots795odzrle.png" alt="Image description" width="800" height="698"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This answer looks pretty good! It’s enough to get me going in a local environment without contacting anyone else on my team. Unblocked kept everyone else “unblocked” on their work and pointed me in the right direction all on its own.&lt;/p&gt;

&lt;p&gt;I decided to ask about how to get a development environment set up locally. Let’s see what Unblocked says if I ask about that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O9Rs65_e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdjrwb9q40p5e6q15g1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O9Rs65_e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdjrwb9q40p5e6q15g1w.png" alt="Image description" width="800" height="876"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This answer isn’t what I was hoping for … but I can click on the README link and find that this is not really Unblocked’s fault. My team just hasn’t updated the README for the backend app, and Unblocked found the incorrect boilerplate setup instructions. Now that I know where to go to get the code, I’ll just update it after I have finished setting up the backend on my own. In the meantime, though, I will let Unblocked know that it didn’t give me the answer I hoped for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gkrxqWjU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vt8a2bs8mno612daxwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gkrxqWjU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vt8a2bs8mno612daxwd.png" alt="Image description" width="710" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since it isn’t really the bot’s fault that it’s wrong, I made sure to explain that in my feedback. &lt;/p&gt;

&lt;p&gt;I had a good start, but I wanted some more answers to my architectural questions. Let’s try something a little more complicated than reading the setup instructions from a README.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dg7oUOvl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9y04gvkj01mt574rniva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dg7oUOvl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9y04gvkj01mt574rniva.png" alt="Image description" width="800" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a pretty good high-level overview, especially considering that I didn’t have to do anything other than type my questions in. Unblocked generated these answers with links to the relevant resources for me to investigate further as needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Browse the code
&lt;/h3&gt;

&lt;p&gt;I actually cloned the repos for the frontend and backend of my app to my machine and opened them in VS Code. Let’s take a look at how Unblocked works with the repos there.&lt;/p&gt;

&lt;p&gt;As soon as I open the Unblocked plugin while viewing the backend repository, I’m presented with recommended insights drawn from questions asked by other members of my team. There are also references to pull requests, Slack conversations, and Jira tasks that the bot thinks are relevant, before I open a single file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K8NKx8Q---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k6c2ql19w0j781j05hwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K8NKx8Q---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k6c2ql19w0j781j05hwg.png" alt="Image description" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is useful. As I open various files, the suggestions change with the context, too.&lt;/p&gt;

&lt;h3&gt;
  
  
  Browse components
&lt;/h3&gt;

&lt;p&gt;The VS Code plugin also called out some topics that it discovered about the app I’m trying out. I clicked on the Backend topic, and it took me to the following page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d3sSzCJ2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yy2dhoyetb8jsx3932mz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d3sSzCJ2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yy2dhoyetb8jsx3932mz.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All of this is automatically generated, as Unblocked determines the experts for each particular part of the codebase. However, experts can also update their expertise when they configure their profiles in our organization. Now, in addition to having many questions I can look at about the backend application, I also know which colleagues to go to with questions.&lt;/p&gt;

&lt;p&gt;If I go to the Components page on the Web Dashboard, I can see a list of everything Unblocked thinks is important about this app. It also gives me a quick view of who I can talk to about these topics. Clicking on any one of them provides me with a little overview, and the experts on the system can manage these as needed. Again, all of this was automatically generated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y-t1ZkOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eu7kiy1krxx4niwgxa4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y-t1ZkOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eu7kiy1krxx4niwgxa4f.png" alt="Image description" width="777" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This was a great start with Unblocked. Next, I’m looking forward to trying it out on some of the things I’ve been actively working on. Since the platform won’t leak any of my secrets to other teams, I’m not concerned about putting it on even the most secret of my projects, and I expect to have more to say about other use cases later.&lt;/p&gt;

&lt;p&gt;Unblocked is in public beta, &lt;a href="https://getunblocked.com/pricing"&gt;free&lt;/a&gt;, and worth checking out!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>productivity</category>
      <category>github</category>
    </item>
    <item>
      <title>Exploring the Cadence Access Model: Fine-Grained permissions for flow contracts</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Mon, 13 Nov 2023 15:11:58 +0000</pubDate>
      <link>https://forem.com/mbogan/exploring-the-cadence-access-model-fine-grained-permissions-for-flow-contracts-4akd</link>
      <guid>https://forem.com/mbogan/exploring-the-cadence-access-model-fine-grained-permissions-for-flow-contracts-4akd</guid>
      <description>&lt;p&gt;**Flow **is a permissionless layer-1 blockchain built to support the high-scale use cases of games, virtual worlds, and the digital assets that power them. The blockchain was created by the team behind Cryptokitties, Dapper Labs, and NBA Top Shot. &lt;/p&gt;

&lt;p&gt;One core attribute that differentiates Flow from other blockchains is its way of providing &lt;strong&gt;fine-grained permissions&lt;/strong&gt; to objects within the blockchain. Fine-grained access follows the principle of least privilege, which makes it highly secure and fault-resistant. Flow is one of the few blockchains to offer this, as it is built on the fundamentals of capability-based security.&lt;/p&gt;

&lt;p&gt;But how does this help you as a developer? If you’re writing smart contracts, then you likely have to regularly define access to objects and functions within your contract. Understanding the different types of access better will help you to define access types in code more accurately. This will allow you to write highly secure and efficient smart contracts.&lt;/p&gt;

&lt;p&gt;Let’s dive in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---GI0nks4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wyixhjzajfoobvv7nfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---GI0nks4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wyixhjzajfoobvv7nfm.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://techfi.tech/flow-blockchain/"&gt;Source&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding fine-grained permissions
&lt;/h2&gt;

&lt;p&gt;As opposed to role-based access control (RBAC), where permissions are defined loosely for every role rather than for every individual, fine-grained permissions allow developers to define permissions at the lowest level: per object. They also make it possible to add new permission types, such as time-expiry and geography-based permissions.&lt;/p&gt;

&lt;p&gt;Let’s understand the core principle behind fine-grained permissions.&lt;/p&gt;

&lt;h3&gt;
  
  
  The principle of least privilege
&lt;/h3&gt;

&lt;p&gt;The concept of least privilege means that in a computing environment, any module (such as a process, a user, or a program) has only the access it needs to do its specific job. Any permissions unnecessary for that job are either revoked or unavailable. &lt;/p&gt;

&lt;p&gt;For example, a backup program whose job is to create backups does not need to update user data, so it should only have read permission on the user database. This is the core principle behind fine-grained permissions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Capability-based Security
&lt;/h3&gt;

&lt;p&gt;A capability is an unforgeable token of authority. In capability-based authorization, your identity does not matter. If the owner or admin has sent you an access token that grants you the capability to access a resource, and you can execute that capability, then you have access. At runtime, the application does not check who you are, only that you hold a capability for the requested resource. Fine-grained access can thus be implemented on the foundations of capability-based security.&lt;/p&gt;
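&lt;p&gt;The idea can be sketched in a few lines of JavaScript (an illustration of the concept, not Cadence code): an unforgeable reference stands in for the access token, and possession alone grants access.&lt;/p&gt;

```javascript
// Capability-based access in miniature: holding the reference IS the
// authorization; nothing checks the caller's identity at use time.
function createResource(secretData) {
  // readCap is an unforgeable capability: only code that is
  // explicitly handed this function can read the data.
  const readCap = () => secretData;
  return { readCap };
}

const { readCap } = createResource("balance: 100");
console.log(readCap()); // any holder of readCap may read the data
// There is no way to reach secretData without being given readCap.
```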

&lt;h3&gt;
  
  
  History of fine-grained access
&lt;/h3&gt;

&lt;p&gt;In the 1980s, operating systems defined permissions on objects as “read”, “write”, and “execute”, in what were called access control lists (ACLs). In the 1990s, role-based access control was introduced, which allowed you to create groups and assign users to them. Then the concepts of attribute-based and capability-based access control arrived, which brought fine-grained access into the picture. In the world of blockchains, however, fine-grained access is new, introduced by Cadence, the language used to write Flow smart contracts. &lt;/p&gt;

&lt;h2&gt;
  
  
  Fine-grained access in Cadence
&lt;/h2&gt;

&lt;p&gt;Cadence, the language used to write Flow smart contracts, has special built-in keywords for defining fine-grained permissions on resources. Below, we dive into the most common of these keywords and how they are used to provide fine-grained access.&lt;/p&gt;

&lt;h3&gt;
  
  
  The access keywords
&lt;/h3&gt;

&lt;p&gt;Flow defines certain keywords, known as access keywords, with different parameters to define the access privileges of a certain resource. Below are the four primary keywords: &lt;/p&gt;

&lt;h3&gt;
  
  
  pub or access(all)
&lt;/h3&gt;

&lt;p&gt;Using this keyword means the declaration is readable in all scopes. However, it is not writable from outside, except when updating the value at an index of an array or dictionary. pub and access(all) are the least restrictive access modifiers and should generally be avoided. Below is an example of using the pub keyword:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub contract Car {
    pub let carBrand: String
    pub let carNameByBrand: {String: String}
    pub fun returnCarBrand(): String {
        return self.carBrand
    }
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Following the above code example, if there’s an object called CarObject of type Car, then the invocation below is valid.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CarObject.carBrand //Equals the value of greeting in the object
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Values with access type pub are generally not updatable from outside, except when updating the value at an index of an array or dictionary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CarObject.carNameByBrand = {} //Not Valid
CarObject.carNameByBrand[“Porsche”] = “Carrera” //Valid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  access(self)
&lt;/h3&gt;

&lt;p&gt;access(self) means that the declaration is only visible in the current and inner scopes. For example, an access(self) field can only be accessed by the functions of the type it is a part of.&lt;/p&gt;

&lt;p&gt;When using access(self), it is common practice to define setter and getter methods so that other objects and contracts can work with the field.&lt;/p&gt;

&lt;p&gt;This keyword makes the usage very safe but also restrictive. Below is an example of using access(self):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub contract Car {
    access(self) let carBrand: String
    access(self) let carNameByBrand: {String: String}
    pub fun returnCarBrand(): String {
        return self.carBrand
    }
    pub fun returnCarName(_ brand: String): String? {
        return self.carNameByBrand[brand]
    }
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  access(contract)
&lt;/h3&gt;

&lt;p&gt;access(contract) means that the declaration is only accessible in the scope of the contract that defines it. Hence, functions defined in other contracts under the same account or functions defined in contracts under other accounts will not be able to access it. This keyword is unique to Cadence and comes in handy when you have any contract-specific variables you don’t want to expose to other contracts.&lt;/p&gt;

&lt;p&gt;Below is an example of using the access(contract) keyword:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub contract CarContract {
    pub struct Car {
        access(contract) var carBrand: String
    }
    pub fun returnCarBrand(_brand: Car): String {
        return Car.carBrand
    }
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, if you have a method in a different contract that has an object of the Car struct, you cannot get the carBrand variable directly. However, you can call the returnCarBrand function to get the value of carBrand.&lt;/p&gt;
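&lt;p&gt;To make this concrete, here is a sketch of a second, hypothetical contract (deployed to a different account; 0x01 is a placeholder address) interacting with CarContract:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical contract deployed to a different account
import CarContract from 0x01 // placeholder address

pub contract Garage {
    pub fun describe(car: CarContract.Car): String {
        // return car.carBrand                       // Not valid: access(contract)
        return CarContract.returnCarBrand(_brand: car) // Valid: public function
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;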

&lt;h3&gt;
  
  
  access(account)
&lt;/h3&gt;

&lt;p&gt;This keyword means that the declaration is accessible throughout the account the contract is deployed to. Hence, other contracts in the same account can access the variable freely. This type of access is very powerful: it lets you keep a variable private to your account while still being able to deploy, later on, another smart contract that uses it. This way, several contracts in the same account can share information seamlessly with one another, promoting modular code.&lt;/p&gt;

&lt;p&gt;Below is an example of how to use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub contract CarContract {
    pub struct Car {
        access(account) var carBrand: String
    }
    pub fun returnCarBrand(_brand: Car): String {
        return Car.carBrand
    }
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, if you have a method in a different contract that has an object of the Car struct, you can get the carBrand variable directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The principle of least privilege and capability-based security play a significant role in Flow's fine-grained permissions. The access keywords provide clear definitions of variable and function access, as we have covered in this article.&lt;/p&gt;

&lt;p&gt;Fine-grained access control is one of the most noteworthy features strengthening the security of the Flow blockchain. Blockchain technology has long been plagued by bugs and hacks, many of which stem from ill-conceived access controls. Fine-grained access control gives developers a straightforward way to secure their smart contracts, and ultimately it is part of what sets Flow apart in the world of blockchain.&lt;/p&gt;

&lt;p&gt;Hopefully you found this in-depth exploration of fine-grained permissions valuable. For more information, refer to the &lt;a href="https://cadence-lang.org/docs/language/access-control"&gt;official documentation&lt;/a&gt;, or explore the &lt;a href="https://developers.flow.com/"&gt;Flow Docs&lt;/a&gt; and get some hands-on experience!&lt;/p&gt;

</description>
      <category>web3</category>
      <category>cryptocurrency</category>
      <category>blockchain</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Power of Resource-Oriented Programming in Flow/Cadence: A Deep Dive</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 01 Nov 2023 14:16:58 +0000</pubDate>
      <link>https://forem.com/mbogan/the-power-of-resource-oriented-programming-in-flowcadence-a-deep-dive-a9g</link>
      <guid>https://forem.com/mbogan/the-power-of-resource-oriented-programming-in-flowcadence-a-deep-dive-a9g</guid>
      <description>&lt;p&gt;&lt;a href="https://flow.com/"&gt;Flow&lt;/a&gt; is a permissionless layer-1 blockchain built to support the high-scale use cases of games, virtual worlds, and the digital assets that power them. The blockchain was created by the team behind CryptoKitties, Dapper Labs, and NBA Top Shot.&lt;/p&gt;

&lt;p&gt;One of the best features of Flow is that it supports the paradigm of &lt;strong&gt;resource-oriented programming&lt;/strong&gt;. Resource-oriented programming is a new way of managing memory where resources are held “in-situ” by the resource owner instead of in a separate ledger. This is very relevant for managing scarce and unreplicable digital resources on the blockchain.&lt;/p&gt;

&lt;p&gt;In this article, I’ll do a deep dive into resource-oriented programming—and allow you to better manage your resources in your Flow smart contracts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TP4su9fn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkhfsosu20g71t32pif2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TP4su9fn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkhfsosu20g71t32pif2.png" alt="Image description" width="794" height="447"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://nftnewspro.com/flow-blockchain-got-over-1-b-in-nft-sales/"&gt;Source&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  What is resource-oriented programming?
&lt;/h2&gt;

&lt;p&gt;In resource-oriented programming, objects are labeled as “resources”. A resource is a collection of variables and functions of which exactly one copy exists at any time. Resources also provide better composability than the object models of EVM and WASM. &lt;/p&gt;

&lt;p&gt;When something is called a resource, there are very specific rules for interacting with it. The three rules that apply to resources in resource-oriented programming are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Each resource exists in only one memory location; resources cannot be duplicated.&lt;/li&gt;
&lt;li&gt;Ownership of a resource is defined by where it is stored; there is no central ledger that keeps track of ownership.&lt;/li&gt;
&lt;li&gt;Only the owner of the resource can access its methods.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To explain this better: In the real world, if you claim to own a watch, that ownership is proven by the fact that you possess it yourself. There is no central ledger that people consult to check if you are the owner. However, in programming, when we think of ownership, we think of a mapping somewhere in a ledger from object ID to owner ID. For example, in the case of ERC721 smart contracts, the ownership of digital assets is stored in a ledger owned by the main smart contract. &lt;/p&gt;

&lt;p&gt;Flow changes this paradigm of ownership by storing “resources” in the memory location owned by the owner itself (see image below). This means that if you’re holding a digital asset on the Flow blockchain, it is stored in the memory location of the account that you own.&lt;/p&gt;

&lt;p&gt;This has several benefits, which we’ll talk about in the coming section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DgQ0uE1w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ou1xnxospi3ulokhtohh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DgQ0uE1w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ou1xnxospi3ulokhtohh.png" alt="Image description" width="541" height="518"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A depiction of memory allocation for resources in Cadence&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Resources in Cadence
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developers.flow.com/cadence/intro"&gt;Cadence&lt;/a&gt; is the world’s first high-level resource-oriented programming language. &lt;a href="https://github.com/move-language/move"&gt;Move&lt;/a&gt;, created for Facebook’s now defunct Diem, is the only other language that is resource-oriented, but it is more low-level.&lt;/p&gt;

&lt;p&gt;In Cadence, resources are defined using the “resource” keyword. Within a resource, you can define the attributes of the resource, like variables and functions. This definition of a resource now becomes a type and can be used to create objects of that type.&lt;/p&gt;

&lt;p&gt;An example of creating a resource and creating an instance of that resource is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Declare a resource that only includes one function.
    pub resource CarAsset {


        // A transaction can call this function to get the "Honk Honk!"
        // message from the resource.
        pub fun honkHorn(): String {
            return "Honk Horn!"
        }
    }


    // We're going to use the built-in create function to create a new instance
    // of the Car resource
    pub fun createCarAsset(): @CarAsset {
        return &amp;lt;-create CarAsset()
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Destroy function
&lt;/h3&gt;

&lt;p&gt;Resources in Cadence can be explicitly destroyed using the “destroy” keyword. A resource object cannot simply go out of scope and be implicitly lost; it must be destroyed or moved before the end of its scope.&lt;/p&gt;

&lt;p&gt;Below is a Cadence example of how to use destroy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    let d &amp;lt;- create SomeResource(value: 20)
    destroy d

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Move Operator (&amp;lt;-)
&lt;/h3&gt;

&lt;p&gt;To make the moves of a resource explicit, the move operator (“&amp;lt;-”) must be used when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initializing the value of a constant or variable with a resource&lt;/li&gt;
&lt;li&gt;Moving resources to a different variable&lt;/li&gt;
&lt;li&gt;Moving resources to the argument of a function&lt;/li&gt;
&lt;li&gt;Returning it from a function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is a Cadence example of how to use the move (“&amp;lt;-”) operator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    let d &amp;lt;- create SomeResource(value: 20)
    let new_resource &amp;lt;- d   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
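&lt;p&gt;The operator works the same way for function arguments and return values. Below is a sketch, reusing the hypothetical SomeResource type (with a value field) from the earlier snippets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub fun inspect(_ r: @SomeResource): @SomeResource {
    // ... use r here ...
    return &amp;lt;-r // move the resource back out to the caller
}

let a &amp;lt;- create SomeResource(value: 20)
let b &amp;lt;- inspect(&amp;lt;-a) // moved into the argument; a is now invalid
destroy b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;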



&lt;h3&gt;
  
  
  Code Example of Creating a Resource in Cadence
&lt;/h3&gt;

&lt;p&gt;Below is a simplified example of a Cadence contract that defines a CryptoKitty collection as a resource, with individual Kitties stored as sub-resources, followed by a transfer transaction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;contract CryptoKitties {
    // Accounts store a collection in their account storage

    resource KittyCollection {
    // Each collection has functions to
    // move stored resources in and out

    fun withdraw(kittyId: int): CryptoKitty
        fun deposit(kitty: CryptoKitty)
    }

    // The resource objects that can be stored in the collection
    resource CryptoKitty {}
}


transaction(signer: Account) {
    // Removes the Kitty from signer's collection, and stores it
    // temporarily on the stack.


    let theKitty &amp;lt;- signer.kittyCollection.withdraw(kittyId: myKittyId)


    // Moves the Kitty into the receiver's account
    let receiver = getAccount(receiverAccountId)
    receiver.kittyCollection.deposit(kitty: &amp;lt;-theKitty)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code, we create a KittyCollection resource that has several CryptoKitty sub-resources within it. We then do a transfer transaction to transfer a Kitty from one user account to another.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other things to keep in mind
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Resources must be created using the “create” keyword.&lt;/li&gt;
&lt;li&gt;At the end of a function with resources, the resources should either be moved or destroyed.&lt;/li&gt;
&lt;li&gt;Accessing a field or a function of a resource does not destroy it.&lt;/li&gt;
&lt;li&gt;When a resource is moved, the constant or variable currently holding the resource becomes invalid.&lt;/li&gt;
&lt;li&gt;To make a resource type explicit, the prefix @ must be used in type annotations.&lt;/li&gt;
&lt;/ul&gt;
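&lt;p&gt;The rules above can be sketched in a few lines (again assuming the hypothetical SomeResource type from earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// The @ prefix marks the resource type explicitly
let original: @SomeResource &amp;lt;- create SomeResource(value: 1)

let moved &amp;lt;- original // "original" is now invalid
// original.value     // Invalid: the resource has been moved

destroy moved // must be moved or destroyed before the end of scope
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;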

&lt;h2&gt;
  
  
  Comparison and Benefits
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Resource-oriented programming versus ledger-based systems
&lt;/h3&gt;

&lt;p&gt;A common comparison of the ownership structure in resource-oriented programming is with ledger-based ownership. In the ledger model, like in ERC721 smart contracts, every time you have to update ownership, you need to update the mapping of the ID in a shared ledger. &lt;/p&gt;

&lt;p&gt;In resource-oriented programming, resources are stored within the account of the owner itself, so transferring a digital asset only means moving the resource from the first party to the second. This does not require updating any ledgers. Resource-oriented programming is more suitable for blockchain use cases like digital assets than the traditional ledger system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jy8LrLE6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii068t3pb899kbx9dzdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jy8LrLE6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii068t3pb899kbx9dzdq.png" alt="Image description" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of resource-oriented programming
&lt;/h3&gt;

&lt;p&gt;There are several benefits that resource-oriented programming brings to the table. Let’s look at some of them below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Makes capability-based security possible
&lt;/h3&gt;

&lt;p&gt;Resource types provide the foundation on which capability-based security can be established. Capabilities are one of the best methods to create highly secure blockchain authorization systems. This brings the utmost security to the Flow blockchain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better protection against reentrancy attacks
&lt;/h3&gt;

&lt;p&gt;Reentrancy attacks are among the most severe smart contract vulnerabilities exploited by hackers. In a reentrancy attack, a hacker recursively calls the withdraw function on a smart contract before its balance is updated, draining the contract’s entire funds. Due to the architecture of Cadence, reentrancy attacks are much less likely to occur. Read more in this article about the DAO hack and how Cadence is safe from it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better pricing and cost management
&lt;/h3&gt;

&lt;p&gt;In Ethereum, the CryptoKitties contract has 2 million digital assets collectively taking up more than 100 MB of space, living rent-free on the Ethereum blockchain. In an ideal world, it should be possible to charge the owners of Kitties based on how much of this data belongs to them on-chain. In Flow, since the resource is directly stored in the account of the owner, it is clear who needs to pay the cost for that data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Resource-oriented programming is a new and innovative programming paradigm brought to the blockchain world by Flow. It makes resource management efficient, easy, and secure, and it enables a better cost-sharing model for running blockchain infrastructure.&lt;/p&gt;

&lt;p&gt;You can refer to the &lt;a href="https://cadence-lang.org/docs/language/resources"&gt;official docs page&lt;/a&gt; on Resources to learn more. I also recommend that you get your hands dirty by going through &lt;a href="https://developers.flow.com/"&gt;Flow Docs&lt;/a&gt; and this &lt;a href="https://cadence-lang.org/docs/tutorial/resources"&gt;tutorial on Resources&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Giving Power Back to Your Users with Flow’s Account Model</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 25 Oct 2023 15:19:56 +0000</pubDate>
      <link>https://forem.com/mbogan/giving-power-back-to-your-users-with-flows-account-model-4g83</link>
      <guid>https://forem.com/mbogan/giving-power-back-to-your-users-with-flows-account-model-4g83</guid>
      <description>&lt;p&gt;Many alternative blockchains that have emerged recently are classified as “EVM” chains, meaning they operate exactly like Ethereum but have a different execution layer. This helps the cross-compatibility of smart contracts across chains, but it doesn’t solve some of the crucial problems embedded in the EVM system. In particular, &lt;strong&gt;it hasn’t improved how user accounts are handled and protected&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://flow.com/" rel="noopener noreferrer"&gt;Flow&lt;/a&gt;, a layer-1 blockchain, is trying to change that pattern with its account model. In this article, we’ll look in detail at that new account model, how it works, and how it might be able to solve some of the most difficult UX problems in blockchain.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do Accounts on Ethereum Work?
&lt;/h2&gt;

&lt;p&gt;One of the best ways to understand how the Flow account model excels is to compare it to Ethereum. There are two types of accounts on Ethereum: &lt;strong&gt;externally owned accounts&lt;/strong&gt; (EOA), which are your typical consumer wallets, and then &lt;strong&gt;smart contract accounts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;EOAs have a public and private key pair, where the public key is derived from the private key and works as an address for the account, and the private key signs transactions on the blockchain. They can hold a balance and interact with other accounts, primarily smart contract accounts. &lt;/p&gt;

&lt;p&gt;These &lt;strong&gt;smart contract accounts&lt;/strong&gt; consist of compiled bytecode that runs on the Ethereum Virtual Machine. What’s really interesting is that any data created by the smart contract, such as tokens or NFTs, is stored in that smart contract. Instead of an EOA truly owning a token, the smart contract simply records who owns what and how much.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ethereum’s potential security gaps
&lt;/h2&gt;

&lt;p&gt;Not only is true digital ownership conceded, but smart contracts on Ethereum are hard to read and audit for security. A great number of scams and “rug pulls” have occurred on EVM chains because of this power imbalance. Such a scam usually looks like this: a website, closely resembling a popular NFT project, touts a new collection at a reasonable mint price. People visit the site, connect their wallet, and when they click the mint button, they are shown a blob of blockchain bytecode that tells them nothing about what will happen if they sign. It could simply mint an NFT, or it could completely drain their wallet. Without the ability to read the transaction, wallets cannot give the end user much information about what will happen when they sign. (Note that Flow, covered in more detail below, has a more readable transaction format that allows wallets to clearly tell the end user what will happen when they sign, giving more balance and control to the user.)&lt;/p&gt;

&lt;h2&gt;
  
  
  How do Accounts on Flow work?
&lt;/h2&gt;

&lt;p&gt;The Flow account model combines Ethereum’s EOA and smart contract account concepts into a single standard. In this model, accounts and public keys are decoupled, whereas, as stated earlier, an Ethereum account is tied directly to a single key pair. This decoupling is important: it enables better control and reduces potential mistakes by the end user, ultimately helping protect their assets.&lt;/p&gt;

&lt;p&gt;With the &lt;a href="https://flow.com/account-abstraction" rel="noopener noreferrer"&gt;Flow account model&lt;/a&gt;, you can have a single account with multiple public and private keys. Having multiple keys is a huge advantage because you can revoke or rotate keys that might be compromised, again giving the user better control and security. Not only that, but these keys are weighted, enabling more complex transactions such as multi-signature transactions. These signatures are used frequently in blockchain to let multiple users sign one transaction, similar to how you might need two keys to open a bank vault. In the EVM model this has to be built out manually, but with Flow it is readily available.&lt;/p&gt;
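&lt;p&gt;As a rough sketch of what this looks like with Cadence’s account keys API (the public key is passed in as a placeholder hex string), an account owner can add a second, half-weight key so that two keys must co-sign a transaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;transaction(publicKeyHex: String) {
    prepare(signer: AuthAccount) {
        let key = PublicKey(
            publicKey: publicKeyHex.decodeHex(),
            signatureAlgorithm: SignatureAlgorithm.ECDSA_P256
        )
        // Weight 500.0 out of 1000.0: two such keys must sign
        // together to authorize a transaction.
        signer.keys.add(
            publicKey: key,
            hashAlgorithm: HashAlgorithm.SHA3_256,
            weight: 500.0
        )
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;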

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4byaq9wpg312ei9b350r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4byaq9wpg312ei9b350r.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.flow.com/build/basics/accounts" rel="noopener noreferrer"&gt;Image courtesy of Flow Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since there can be multiple keys for one account, the way Flow creates addresses is also unique. Unlike Ethereum—where addresses are derived from the public key—Flow account addresses are created with an internal on-chain checksum at the protocol level. This ensures addresses are unique. &lt;/p&gt;

&lt;p&gt;In addition, Flow gives developers and users the option of different signature and hashing algorithms, with curves such as secp256k1 (used primarily by Bitcoin and other cryptocurrencies) or the more flexible P-256 (adopted by most cell phones and computers). These options provide better flexibility and compatibility with other protocols and ultimately give users top-notch security, all on devices they already use daily. And since encryption and cryptography are always evolving, it’s critical to adopt the latest standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storage with Flow
&lt;/h2&gt;

&lt;p&gt;Another thing unique to Flow’s account model is its storage capabilities. On Ethereum, only smart contract accounts have the ability to store data and thus truly own assets. You can think of it like leasing a car: if you lease one, you get to drive it and park it at your house, but it's not truly yours. There’s a contract saying you own the car under certain terms, and the ultimate power is in that contract, whose terms may not always favor the lessee. &lt;/p&gt;

&lt;p&gt;On the other hand, Flow accounts allow for asset storage and for accounts to deploy their own smart contracts. The account storage used is calculated by the byte size of data currently stored in the account, and it is directly tied to the balance of Flow tokens on the account.&lt;/p&gt;

&lt;h2&gt;
  
  
  True digital ownership: buy, don’t lease
&lt;/h2&gt;

&lt;p&gt;This is truly special, as it gives users the ability to truly own the digital items they purchase, rather than having a smart contract put their name on them. Flow manages this storage through the amount of Flow tokens the account holds, including a minimum storage fee of 0.001 $FLOW, which equates to 100 kB of data. (This ensures the account can handle incoming assets.) &lt;/p&gt;

&lt;p&gt;If for any reason the account attempts a transaction that would exceed its storage capacity, then the transaction will fail. In the Flow account model, that car is in your garage with no contracts tied to it! Paid in full.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzzr7vkgqlob922utm6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzzr7vkgqlob922utm6q.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.flow.com/build/basics/accounts" rel="noopener noreferrer"&gt;Image courtesy of Flow Documentation&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Flow is trying to diverge from the normal EVM path. In the Ethereum world, there are many cases of users losing funds partly due to the account model on EVM chains, which makes it difficult for users to read and interpret what will happen when they press a button on a website. Flow is aiming to empower users to own their digital assets and ensure funds will only leave their account with their permission. Flow also gives developers flexibility, making things like multi-signature weighted keys native to the platform, as well as multiple signature and hashing algorithms to choose from for the best compatibility among devices and networks.&lt;/p&gt;

&lt;p&gt;It’s early, but these kinds of features are needed in web3 UX, and could be a potential game-changer.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>ux</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Smart Contract Language Comparison - Cadence vs Solidity vs Move</title>
      <dc:creator>Michael Bogan</dc:creator>
      <pubDate>Wed, 27 Sep 2023 14:32:42 +0000</pubDate>
      <link>https://forem.com/mbogan/smart-contract-language-comparison-cadence-vs-solidity-vs-move-3gi6</link>
      <guid>https://forem.com/mbogan/smart-contract-language-comparison-cadence-vs-solidity-vs-move-3gi6</guid>
      <description>&lt;p&gt;When starting a new web3 project, it’s important to make the right choices about the blockchain and smart contract language. These choices can significantly impact the overall success of your project as well as your success as a developer. &lt;/p&gt;

&lt;p&gt;In this article, we'll compare three popular smart contract programming languages: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Solidity: used in Ethereum and other EVM-based platforms&lt;/li&gt;
&lt;li&gt;Cadence: used in the Flow blockchain&lt;/li&gt;
&lt;li&gt;Move: used in the Sui/Aptos blockchain&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We'll explain the concepts and characteristics of each language in simple terms, making it easier for beginners to understand. Let’s dive in. &lt;/p&gt;

&lt;h2&gt;
  
  
  Solidity: The Foundation of Ethereum Smart Contracts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Solidity&lt;/strong&gt; is a high-level, object-oriented language used to build smart contracts in platforms like Ethereum. Initially, Solidity aimed to be user-friendly, attracting developers by resembling JavaScript and simplifying learning. While it still values user-friendliness, its focus has shifted to enhancing security. Currently, Solidity has &lt;a href="https://medium.com/@solidity101/solidity-security-pitfalls-best-practices-101-a9a64010310e"&gt;quite a few security pitfalls&lt;/a&gt; developers need to be aware of.&lt;/p&gt;

&lt;p&gt;Some Solidity features include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Syntax and Simplicity&lt;/strong&gt;: Solidity uses clear, explicit code with a syntax similar to JavaScript, prioritizing ease of understanding for developers. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Focus&lt;/strong&gt;: Solidity emphasizes secure coding practices and highlights risky constructs like gas usage. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statically Typed&lt;/strong&gt;: The language enforces data type declarations for variables and functions. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inheritance and Libraries&lt;/strong&gt;: Solidity supports features like inheritance, libraries, and complex user-defined types.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cadence: Empowering Digital Assets on Flow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cadence&lt;/strong&gt; was designed for Flow, a blockchain known for helping to bring web3 mainstream and for working with major brands like Disney and the NBA. It is built to make smart contract development secure, clear, and approachable.&lt;/p&gt;

&lt;p&gt;Some Cadence features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Type Safety&lt;/strong&gt;: The language enforces strict type checking to prevent common errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource-oriented Programming&lt;/strong&gt;: Resources are unique, linear types that can only be moved between accounts and can never be copied or implicitly discarded. If a function obtains a Resource from an account but fails to store it before the function exits, the semantic checker flags an error at development time. The runtime enforces the same strict rules: a contract function that does not properly handle a Resource in scope before exiting will abort, reverting the Resource to its original storage. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These Resource features make them perfect for representing both fungible and non-fungible tokens. Ownership is tracked according to where they are stored, and the assets can’t be duplicated or accidentally lost since the language itself enforces correctness.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Capability-based Security&lt;/strong&gt;: To access items stored in another account, a user needs a permission called a Capability. Capabilities let you grant others remote access to your stored items.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are two types of Capabilities: public and private. If someone wants to allow everyone to access their items, they can share a public Capability. For instance, an account can use a public Capability to accept tokens from anyone. On the other hand, someone can also give private Capabilities to specific people, allowing them to access certain features. For example, in a project that involves unique digital items, the project owners might give specific people an "administrator Capability" that lets them create new items.&lt;/p&gt;
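
&lt;p&gt;As a quick sketch (using the same pre-1.0 Cadence syntax as the examples later in this article; the Greeter resource and storage paths here are hypothetical), an account can publish a public Capability that anyone can then borrow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical sketch: publishing and borrowing a public Capability
pub contract Greeting {

    pub resource Greeter {
        pub fun hello(): String {
            return "Hello"
        }
    }

    init() {
        // store a Greeter resource in the contract's account
        self.account.save(&amp;lt;-create Greeter(), to: /storage/greeter)
        // publish a public Capability so anyone can reach it
        self.account.link&amp;lt;&amp;amp;Greeting.Greeter&amp;gt;(/public/greeter, target: /storage/greeter)
    }
}

// Later, any script (after importing Greeting) can borrow the Capability:
// let greeter = getAccount(0x01)
//     .getCapability&amp;lt;&amp;amp;Greeting.Greeter&amp;gt;(/public/greeter)
//     .borrow() ?? panic("could not borrow Greeter")
// greeter.hello()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;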

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A1TNMJjf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0romlndywc3hz265cc5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A1TNMJjf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0romlndywc3hz265cc5r.png" alt="Image description" width="800" height="544"&gt;&lt;/a&gt;&lt;br&gt;
(image from &lt;a href="https://cadence-lang.org/docs/solidity-to-cadence"&gt;Guide for Solidity developers | Flow Developer Portal&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Built-in Pre- and Post-Conditions&lt;/strong&gt;: Functions have predefined conditions for safer execution. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized for Digital Assets&lt;/strong&gt;: Cadence's focus on resource-oriented programming makes it ideal for managing digital assets in areas such as onchain games.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Freedom from msg.sender&lt;/strong&gt;: To grasp the importance of this idea, a quick look at history helps. In 2018, the Dapper Labs team began working on Cadence as a new programming language after running into Solidity's limitations. Their main frustration in building decentralized apps was that contracts were accessed by address, which made it difficult to combine contracts.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Composability in Web3
&lt;/h2&gt;

&lt;p&gt;Now, imagine contracts as Lego building blocks. Composability in web3 means one contract can be used as a foundation for others, adding their features together. &lt;/p&gt;

&lt;p&gt;For instance, if a contract records game results on a blockchain, another contract can be built to show the best players. Another one could go even further and use past game results to predict future game odds for betting. But here's the catch: Because of how Solidity works, contracts can only talk to one another if the first contract has permission to access the second one, even if users can access both.&lt;/p&gt;

&lt;p&gt;In Solidity, who can do what is controlled by protected functions in contracts. This means contracts know and check who is trying to access their protected areas.&lt;/p&gt;
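
&lt;p&gt;As a minimal (hypothetical) Solidity sketch, this kind of protection is usually written as an explicit check on msg.sender:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical sketch: access control via msg.sender
contract Leaderboard {
    address public owner;

    constructor() {
        owner = msg.sender;
    }

    // the contract itself decides who may call protected functions
    modifier onlyOwner() {
        require(msg.sender == owner, "caller is not the owner");
        _;
    }

    function reset() external onlyOwner {
        // only the deploying account ever reaches this code
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;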

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BCSYbeSj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjpc5jtgy8h2x4t3fuoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BCSYbeSj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjpc5jtgy8h2x4t3fuoc.png" alt="Image description" width="800" height="662"&gt;&lt;/a&gt;&lt;br&gt;
(image from &lt;a href="https://cadence-lang.org/docs/solidity-to-cadence"&gt;Guide for Solidity developers | Flow Developer Portal&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Cadence changes how access works. Instead of using the old way where contracts need permission, it uses Capabilities. When you have a Capability, you can use it to get to a protected item such as a function or resource. This means the contract no longer has to define who's allowed access. You can only get to the protected item if you have a Capability, which you can use with "borrow()". So, the old "msg.sender" way isn't needed anymore!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0PCPIfsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6cnn8ztvsmnzmqibq98l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0PCPIfsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6cnn8ztvsmnzmqibq98l.png" alt="Image description" width="800" height="587"&gt;&lt;/a&gt;&lt;br&gt;
(image from &lt;a href="https://cadence-lang.org/docs/solidity-to-cadence"&gt;Guide for Solidity developers | Flow Developer Portal&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The effects of composability are important. When contracts don't need to know beforehand who they're interacting with, users can easily interact with multiple contracts and their functions during a transaction if they have the right permissions (Capabilities). This also allows contracts to interact with one another directly, without needing special permissions or preparations. The only condition is that the calling contract must have the required Capabilities.&lt;/p&gt;
&lt;h2&gt;
  
  
  Move: Safeguarding Digital Assets on Sui/Aptos
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Move&lt;/strong&gt;, used by the Sui and Aptos blockchains, addresses challenges posed by established languages like Solidity. It ensures scarcity and access control for digital assets. &lt;br&gt;
Move's features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Preventing Double-spending&lt;/strong&gt;: Move prevents the creation or use of assets more than once, ensuring robust blockchain applications. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ownership and Rights Control&lt;/strong&gt;: Developers have precise control over ownership and associated rights. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module Structure&lt;/strong&gt;: In Move, a smart contract is called a module, emphasizing modularity and organization. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bytecode Verifier&lt;/strong&gt;: Move employs static analysis to reject invalid bytecode, enhancing security. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard Library&lt;/strong&gt;: Move includes a standard library for common transaction scripts. &lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Creating Smart Contracts
&lt;/h2&gt;

&lt;p&gt;Let's illustrate the differences by comparing a simple smart contract that increments a value in Cadence, Solidity, and Move. &lt;/p&gt;
&lt;h3&gt;
  
  
  Solidity Example
&lt;/h3&gt;

&lt;p&gt;In Solidity, creating a contract that increments a value involves defining a contract, specifying the count variable, and creating functions to manipulate it. It uses explicit syntax for variable visibility and function declarations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract Counter {
    uint public count;

    // function to get the current count
    function get() public view returns (uint) {
        return count;
    }

    // function to increment count by 1
    function inc() public {
        count += 1;
    }

    // function to decrement count by 1
    function dec() public {
        count -= 1;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cadence Example
&lt;/h3&gt;

&lt;p&gt;Cadence's approach to incrementing a value is similar but emphasizes clarity. It utilizes a resource-oriented structure and straightforward syntax, making it easier for developers to create and manage digital assets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub contract Counter {
    pub var count: Int

    // function to increment count by 1
    pub fun increment() {
        self.count = self.count + 1
    }

    // function to decrement count by 1
    pub fun decrement() {
        self.count = self.count - 1
    }

    pub fun get(): Int {
        return self.count
    }

    init() {
        self.count = 0
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Solidity versus Cadence Syntax Differences
&lt;/h3&gt;

&lt;p&gt;In Solidity, the visibility keyword appears in different positions (after the type for state variables, after the parameter list for functions), whereas Cadence consistently places the access modifier first.&lt;/p&gt;
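
&lt;p&gt;Side by side, the declarations from the two counter examples look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Solidity: visibility follows the type (or the parameter list)
uint public count;
function get() public view returns (uint) { ... }

// Cadence: the access modifier always comes first
pub var count: Int
pub fun get(): Int { ... }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;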

&lt;p&gt;&lt;strong&gt;Scalability and Upgradability&lt;/strong&gt;&lt;br&gt;
Flow's network boasts higher transaction throughput than Ethereum, making Cadence more scalable. Additionally, Flow's support for contract updates enhances development. &lt;/p&gt;
&lt;h3&gt;
  
  
  Move Example
&lt;/h3&gt;

&lt;p&gt;Move introduces new concepts like modules, resources, and ownership control. A Move module creates an Incrementer resource, requiring the owner's signature for operations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module incrementer_addr::increment {

    use aptos_framework::account;
    use std::signer;

    struct Incrementer has key {
        count: u64,
    }

    public entry fun increment(account: &amp;amp;signer) acquires Incrementer {
        let signer_address = signer::address_of(account);
        let c_ref = &amp;amp;mut borrow_global_mut&amp;lt;Incrementer&amp;gt;(signer_address).count;
        *c_ref = *c_ref + 1;
    }

    public entry fun create_incrementer(account: &amp;amp;signer) {
        let incrementer = Incrementer {
            count: 0
        };
        move_to(account, incrementer);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Composite Types and Turing Completeness&lt;/strong&gt;&lt;br&gt;
All three languages support composite types, allowing complex types from simpler ones. All are Turing complete, meaning they can handle any computation given enough resources. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource-Oriented versus Object-Oriented&lt;/strong&gt; &lt;br&gt;
While Solidity and Move are compiled, Cadence is interpreted. More importantly, Cadence and Move take a resource-oriented approach, in which a digital asset lives in exactly one place at a time, making ownership explicit and secure. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Choosing the Right Programming Language
&lt;/h2&gt;

&lt;p&gt;When selecting a programming language like Solidity, Cadence, or Move, consider the needs of your project.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solidity&lt;/strong&gt;: Solidity is the most widely used of the three, but it can be difficult to work with safely. Developers need to be aware of its security pitfalls and understand best practices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cadence&lt;/strong&gt;: Mainly used for digital assets, Cadence is a newer language that focuses on security, is easy to understand, and provides developers with a superior experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Move&lt;/strong&gt;: Move's design is inspired by Rust, a more complex language that can be harder to learn. Move is also newer, so it doesn't yet have many tools, resources, or a big community. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ultimately, your choice will impact your project's success, so make an informed decision and enjoy your journey as a web3 developer!&lt;/p&gt;

</description>
      <category>web3</category>
      <category>webdev</category>
      <category>programming</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
