<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dario Castañé</title>
    <description>The latest articles on Forem by Dario Castañé (@dcc).</description>
    <link>https://forem.com/dcc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F19686%2F95426058-9d13-4bee-b655-29aacf8f9e15.jpg</url>
      <title>Forem: Dario Castañé</title>
      <link>https://forem.com/dcc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dcc"/>
    <language>en</language>
    <item>
      <title>The Honest Climate Case for AI</title>
      <dc:creator>Dario Castañé</dc:creator>
      <pubDate>Fri, 10 Apr 2026 15:02:50 +0000</pubDate>
      <link>https://forem.com/dcc/the-honest-climate-case-for-ai-5hg5</link>
      <guid>https://forem.com/dcc/the-honest-climate-case-for-ai-5hg5</guid>
      <description>&lt;p&gt;Most articles about AI and the climate answer a question that doesn't matter. Defenders spend thousands of words proving your personal ChatGPT use is negligible, which is true but beside the point. Critics warn that AI will cook the planet, which is not supported by any serious projection. Both camps avoid the actual question: how fast is aggregate AI demand growing, what will power it, and what will it enable?&lt;/p&gt;

&lt;p&gt;Feeling guilty over a 0.3 Wh query is about as useful as holding your breath to fight climate change. But an industry that already draws about 1.5% of the world's electricity&lt;sup id="fnref1"&gt;1&lt;/sup&gt; isn't automatically nothing to worry about either. The real story is somewhere in the middle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 2026 snapshot
&lt;/h2&gt;

&lt;p&gt;Global data centers consumed roughly 415 TWh in 2024, about 1.5% of global electricity.&lt;sup id="fnref1"&gt;1&lt;/sup&gt; AI-specific servers accounted for around 93 TWh in 2025, or about 0.3% of global electricity.&lt;sup id="fnref2"&gt;2&lt;/sup&gt; For comparison, residential air conditioning uses more than six times that amount, industrial motors use around forty times, and global video streaming sits in roughly the same ballpark as AI at 100–120 TWh.&lt;/p&gt;

&lt;p&gt;A short query to a standard (non-reasoning) model like GPT-4o consumes about 0.3 Wh, roughly three seconds of microwave operation.&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;sup id="fnref4"&gt;4&lt;/sup&gt; If you sent a thousand such queries a day, you'd rack up ~110 kWh per year, which is about 3% of what a typical Spanish household consumes annually.&lt;sup id="fnref5"&gt;5&lt;/sup&gt;&lt;/p&gt;
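The back-of-the-envelope math checks out (a minimal sketch; the ~3,264 kWh/year household figure comes from the Endesa source in the footnotes):

```shell
# 0.3 Wh/query x 1,000 queries/day x 365 days, converted to kWh
awk 'BEGIN {
  kwh = 0.3 * 1000 * 365 / 1000
  printf "%.1f kWh/year, %.1f%% of a 3264 kWh household\n", kwh, 100 * kwh / 3264
}'
# prints: 109.5 kWh/year, 3.4% of a 3264 kWh household
```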

&lt;p&gt;So on the narrow point, the "AI is fine" crowd is right. Someone who quits ChatGPT to "help the planet" while still eating beef and driving a combustion car is doing climate theater, not climate action. But this frame captures today's snapshot of a fast-moving target, and only for the simplest queries on the most efficient models. Three things make it misleading as a summary of where AI's climate profile actually sits.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first thing: "average query" is a moving target
&lt;/h2&gt;

&lt;p&gt;The 0.3 Wh figure applies to a short, single-turn query on a non-reasoning model. The industry has spent the past eighteen months migrating users to reasoning models, including o3, DeepSeek R1, Claude with extended thinking, and GPT-5, which require 10 to 100 times more energy per query. Measured benchmarks put o3 at around 33 Wh, GPT-4.5 at around 30 Wh, and Claude 3.7 Sonnet with extended thinking at around 17 Wh.&lt;sup id="fnref6"&gt;6&lt;/sup&gt; These aren't edge cases. They're becoming the default.&lt;/p&gt;
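For scale, the measured figures against the 0.3 Wh baseline (simple ratios, nothing more):

```shell
# Energy multiplier of each reasoning model over a 0.3 Wh baseline query
awk 'BEGIN {
  printf "o3: %.0fx, GPT-4.5: %.0fx, Claude 3.7 extended: %.0fx\n",
         33 / 0.3, 30 / 0.3, 17 / 0.3
}'
# prints: o3: 110x, GPT-4.5: 100x, Claude 3.7 extended: 57x
```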

&lt;p&gt;Agentic workflows compound the shift. A single user request to an AI agent ("book me a flight", "refactor this module") can trigger dozens or hundreds of inference calls as the agent plans, searches, verifies, and iterates. The unit of energy cost is no longer a prompt. It's a task, and tasks can be arbitrarily compute-intensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  The second thing: demand outruns efficiency
&lt;/h2&gt;

&lt;p&gt;Every defender of AI's footprint eventually invokes efficiency, and the gains are real. NVIDIA's Blackwell is 25–50 times more efficient per token than Hopper.&lt;sup id="fnref7"&gt;7&lt;/sup&gt; Algorithmic efficiency in pre-training triples roughly every year.&lt;sup id="fnref8"&gt;8&lt;/sup&gt; Quantization, mixture-of-experts, and distillation all deliver genuine improvements.&lt;/p&gt;

&lt;p&gt;The problem is that demand has been compounding faster. Even as per-query efficiency improved, ChatGPT went from roughly 1 billion prompts per day in December 2024 to 2.5 billion by July 2025, a 150% increase in just seven months.&lt;sup id="fnref9"&gt;9&lt;/sup&gt; In the IEA's central scenario, global data center consumption reaches around 945 TWh by 2030 and 1,200 TWh by 2035. Their Lift-Off scenario reaches 1,700 TWh, about 4.4% of global electricity.&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This is Jevons paradox in real time. Meta spent 50% more on AI after DeepSeek demonstrated frontier capabilities in January 2025 at a fraction of the usual training cost. Microsoft, Google, and Amazon held or increased capex. Satya Nadella posted about Jevons paradox on the day DeepSeek dropped: "As AI gets more efficient and accessible, we will see its use skyrocket."&lt;sup id="fnref10"&gt;10&lt;/sup&gt; Token prices collapsed by over 90% across the industry in 2025, and total inference spending more than doubled.&lt;/p&gt;

&lt;p&gt;The streaming video analogy that sometimes gets invoked doesn't rescue this picture. Streaming kept energy flat despite exponential traffic growth because video is cached at the edge, so the marginal cost of one more viewer is close to zero.&lt;sup id="fnref11"&gt;11&lt;/sup&gt; AI inference can't be cached that way: every query needs fresh GPU computation. And unlike streaming, where humans have finite viewing hours, AI demand has no obvious ceiling because agents can generate queries continuously.&lt;/p&gt;

&lt;h2&gt;
  
  
  The third thing: the grid mix matters more than the chips
&lt;/h2&gt;

&lt;p&gt;Here's the fact that gets lost in per-query debates: AI's climate impact depends almost entirely on what kind of electricity is feeding the data center, not on how efficient the chips are. A 1,700 TWh AI sector powered by renewables and nuclear is a footnote in the energy transition. The same sector running on gas and coal is a real problem.&lt;/p&gt;

&lt;p&gt;In 2024, the carbon intensity of US data center electricity was approximately 48% higher than the national grid average: 548 gCO₂e/kWh versus 369 gCO₂e/kWh, because data centers have clustered in gas-heavy regions such as Virginia.&lt;sup id="fnref12"&gt;12&lt;/sup&gt; The IEA's Lift-Off scenario projects that fossil fuels will supply nearly half of the additional electricity for data centers between 2024 and 2030: natural gas grows 1.5 times faster than in the Base Case, with the United States seeing the largest absolute increase, and coal generation doubles, primarily in China.&lt;sup id="fnref13"&gt;13&lt;/sup&gt; Google abandoned its net-zero goal in July 2025. Microsoft's emissions have risen roughly 23% since 2020 despite record renewable procurement.&lt;sup id="fnref14"&gt;14&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;None of this is locked in. The same IEA report projects renewables meeting half of data center demand growth by 2030, with nuclear (including the first small modular reactors) contributing meaningfully after 2030.&lt;sup id="fnref1"&gt;1&lt;/sup&gt; Tech companies have committed over $10 billion to nuclear partnerships. Microsoft is restarting Three Mile Island. The money is real; the timelines are long. AI is forcing a binary choice: build clean firm generation fast enough to feed new demand, or lock in gas infrastructure that will be running in 2050. That choice is being made now in permit queues and interconnection agreements, and it has very little to do with whether you send another ChatGPT query today.&lt;/p&gt;

&lt;h2&gt;
  
  
  The positive case, stated without hype
&lt;/h2&gt;

&lt;p&gt;The IEA estimates that widespread adoption of current AI applications, including grid optimization, materials science, logistics, precision agriculture, and building efficiency, could cut global energy-related CO₂ emissions by approximately 5% by 2035. That's larger than the emissions from the data centers running those applications, even in the Lift-Off scenario.&lt;sup id="fnref15"&gt;15&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The qualifications are important. The IEA calls this figure an "exploratory analysis" rather than a projection and explicitly warns there is "currently no momentum" ensuring widespread adoption.&lt;sup id="fnref15"&gt;15&lt;/sup&gt; It assumes aggressive deployment in sectors where deployment has been slow. It assumes rebound effects don't eat the savings, though rebound effects are well-documented: cheaper autonomous trips mean more trips, and cheaper AI-optimized shipping means more stuff shipped.&lt;/p&gt;

&lt;p&gt;Genuine examples are emerging. DeepMind's cooling AI reduced Google data center cooling energy by roughly 40%.&lt;sup id="fnref16"&gt;16&lt;/sup&gt; AlphaFold compressed decades of protein structure research into a few months.&lt;sup id="fnref17"&gt;17&lt;/sup&gt; GraphCast outperforms traditional weather models at a fraction of the compute cost.&lt;sup id="fnref18"&gt;18&lt;/sup&gt; These aren't experimental; they're being used in real-world settings.&lt;/p&gt;

&lt;p&gt;The honest positive case is this: AI's potential climate benefits probably exceed its energy costs, but only if the benefits actually get deployed, the energy is clean, and rebound effects don't eat the gains. "AI will save the climate" is as lazy as "AI will destroy it."&lt;/p&gt;

&lt;h2&gt;
  
  
  What individuals should do
&lt;/h2&gt;

&lt;p&gt;Stop feeling guilty about prompts. Your Wh per query is not the lever that matters. You'll do more climate good by eating one less steak, taking one fewer flight, or voting for better energy policy than by boycotting LLMs.&lt;/p&gt;

&lt;p&gt;What matters at the individual level is where you direct your attention. Push to accelerate the deployment of clean generation to meet data center demand; grid interconnections, nuclear licensing, transmission lines, and permitting reform are the bottleneck, not GPUs. Advocate for transparency requirements on AI companies' operational emissions. Treat "AI for climate" claims with the same scrutiny you'd apply to any corporate sustainability marketing. Some of it is real; plenty is PR. And use AI where it genuinely makes you more effective at things that matter, including climate work. The question isn't "Is this query free?" (none are) but "Is this query worth it?" (many are).&lt;/p&gt;

&lt;h2&gt;
  
  
  What policymakers should do
&lt;/h2&gt;

&lt;p&gt;The meaningful levers are systemic: carbon-aware siting rules that require new data center builds to demonstrate grid capacity, water availability, and carbon intensity below thresholds; mandatory energy reporting at the workload level, not just the facility level, because you can't manage what you can't measure; clean firm power built at the pace of demand through nuclear, geothermal, long-duration storage, and accelerated permitting; water accountability in stressed regions; and a firm refusal to let efficiency metrics substitute for absolute emissions reductions. The climate responds to absolute emissions, so that's what policy should target.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bottom line
&lt;/h2&gt;

&lt;p&gt;AI's current climate footprint is modest. It's comparable to streaming, smaller than air conditioning, and a rounding error next to steel, cement, transportation, or agriculture. Individual AI use is not a meaningful climate decision for most people.&lt;/p&gt;

&lt;p&gt;The trajectory is a different story. AI is the fastest-growing source of new electricity demand in advanced economies, one of the few sectors where emissions are rising while most are flat or declining,&lt;sup id="fnref1"&gt;1&lt;/sup&gt; and it concentrates grid strain in specific regions in ways that will shape what generation gets built this decade. Efficiency gains so far are being reinvested into more capability rather than banked as energy savings.&lt;/p&gt;

&lt;p&gt;The most honest thing that can be said is that whether AI turns out to be good, bad, or neutral for the climate depends almost entirely on the electricity mix feeding the data centers. Clean firm power, and the benefits will outweigh the costs. Gas and coal, and we'll have built out fossil infrastructure for a generation to run chatbots and image generators. At this very moment, that choice is being made in lines for grid interconnection, permit offices, and public utility commissions, not in your ChatGPT tab.&lt;/p&gt;

&lt;p&gt;If you care about this, pay attention to the infrastructure decisions. That's where the outcome actually gets determined.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;IEA, "Energy and AI", April 2025. &lt;a href="https://www.iea.org/reports/energy-and-ai" rel="noopener noreferrer"&gt;https://www.iea.org/reports/energy-and-ai&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Gartner, "Electricity Demand for Data Centers to Grow 16% in 2025 and Double by 2030", November 2025. &lt;a href="https://www.gartner.com/en/newsroom/press-releases/2025-11-17-gartner-says-electricity-demand-for-data-centers-to-grow-16-percent-in-2025-and-double-by-2030" rel="noopener noreferrer"&gt;https://www.gartner.com/en/newsroom/press-releases/2025-11-17-gartner-says-electricity-demand-for-data-centers-to-grow-16-percent-in-2025-and-double-by-2030&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Epoch AI, "How much energy does ChatGPT use?", February 2025. &lt;a href="https://epoch.ai/gradient-updates/how-much-energy-does-chatgpt-use" rel="noopener noreferrer"&gt;https://epoch.ai/gradient-updates/how-much-energy-does-chatgpt-use&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Andy Masley, "What's the full "hidden" climate cost of a ChatGPT prompt?", August 2025. &lt;a href="https://www.andymasley.com/writing/whats-the-full-hidden-climate-cost/" rel="noopener noreferrer"&gt;https://www.andymasley.com/writing/whats-the-full-hidden-climate-cost/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Endesa, "Household energy consumption in Spain (INE) and how to save", (~3,264 kWh/year). &lt;a href="https://www.endesa.com/en/blogs/endesa-s-blog/light/energy-consumption-in-spanish-households" rel="noopener noreferrer"&gt;https://www.endesa.com/en/blogs/endesa-s-blog/light/energy-consumption-in-spanish-households&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;Jegham et al., "How Hungry is AI?", arXiv:2505.09598, May 2025. &lt;a href="https://arxiv.org/abs/2505.09598" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2505.09598&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;NVIDIA Developer Blog, "Introducing NVFP4 for Efficient and Accurate Low-Precision Inference", June 2025. &lt;a href="https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/" rel="noopener noreferrer"&gt;https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;Epoch AI, "Trends in Artificial Intelligence", February 2026. &lt;a href="https://epoch.ai/trends/#training-runs" rel="noopener noreferrer"&gt;https://epoch.ai/trends/#training-runs&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;TechCrunch, "ChatGPT users send 2.5 billion prompts a day", July 2025. &lt;a href="https://techcrunch.com/2025/07/21/chatgpt-users-send-2-5-billion-prompts-a-day/" rel="noopener noreferrer"&gt;https://techcrunch.com/2025/07/21/chatgpt-users-send-2-5-billion-prompts-a-day/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn10"&gt;
&lt;p&gt;NPR Planet Money, "Why the AI world is suddenly obsessed with a 160-year-old economics paradox", February 2025. &lt;a href="https://www.npr.org/sections/planet-money/2025/02/04/g-s1-46018/ai-deepseek-economics-jevons-paradox" rel="noopener noreferrer"&gt;https://www.npr.org/sections/planet-money/2025/02/04/g-s1-46018/ai-deepseek-economics-jevons-paradox&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn11"&gt;
&lt;p&gt;IEA, "The carbon footprint of streaming video: fact-checking the headlines", December 2020. &lt;a href="https://www.iea.org/commentaries/the-carbon-footprint-of-streaming-video-fact-checking-the-headlines" rel="noopener noreferrer"&gt;https://www.iea.org/commentaries/the-carbon-footprint-of-streaming-video-fact-checking-the-headlines&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn12"&gt;
&lt;p&gt;Gupta et al., "Environmental Burden of United States Data Centers in the Artificial Intelligence Era", arXiv:2411.09786. &lt;a href="https://arxiv.org/abs/2411.09786" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2411.09786&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn13"&gt;
&lt;p&gt;IEA, &lt;em&gt;Energy and AI&lt;/em&gt;, "Energy supply for AI", April 2025. &lt;a href="https://www.iea.org/reports/energy-and-ai/energy-supply-for-ai" rel="noopener noreferrer"&gt;https://www.iea.org/reports/energy-and-ai/energy-supply-for-ai&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn14"&gt;
&lt;p&gt;MIT Technology Review, "We did the math on AI's energy footprint", May 2025. &lt;a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/" rel="noopener noreferrer"&gt;https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn15"&gt;
&lt;p&gt;IEA, &lt;em&gt;Energy and AI&lt;/em&gt;, "AI and climate change", April 2025. &lt;a href="https://www.iea.org/reports/energy-and-ai/ai-and-climate-change" rel="noopener noreferrer"&gt;https://www.iea.org/reports/energy-and-ai/ai-and-climate-change&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn16"&gt;
&lt;p&gt;DeepMind, "DeepMind AI Reduces Google Data Centre Cooling Bill by 40%", July 2016. &lt;a href="https://deepmind.google/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40/" rel="noopener noreferrer"&gt;https://deepmind.google/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn17"&gt;
&lt;p&gt;Demis Hassabis, John Jumper, Pushmeet Kohli and Anna Koivuniemi, "AlphaFold: Five years of impact", &lt;a href="https://deepmind.google/blog/alphafold-five-years-of-impact/" rel="noopener noreferrer"&gt;https://deepmind.google/blog/alphafold-five-years-of-impact/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn18"&gt;
&lt;p&gt;Remi Lam, "GraphCast: AI model for faster and more accurate global weather forecasting", November 2023. &lt;a href="https://deepmind.google/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/" rel="noopener noreferrer"&gt;https://deepmind.google/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>climate</category>
    </item>
    <item>
      <title>Hello world from a WASM module in a static binary</title>
      <dc:creator>Dario Castañé</dc:creator>
      <pubDate>Sat, 22 Feb 2025 12:27:38 +0000</pubDate>
      <link>https://forem.com/dcc/hello-world-from-a-wasm-module-in-a-static-binary-m5k</link>
      <guid>https://forem.com/dcc/hello-world-from-a-wasm-module-in-a-static-binary-m5k</guid>
      <description>&lt;p&gt;These are some quick notes documenting a Saturday morning's experiment. All the steps were executed on Linux.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR: What?
&lt;/h2&gt;

&lt;p&gt;Compiling a static binary that runs an Ahead-of-Time (AOT) compiled WASM module.&lt;/p&gt;

&lt;p&gt;Ideally it would have been WASM bytecode compiled straight to native code, but I couldn't find a way that didn't rely on the WASM runtime, either invoked from the command line or embedded in a C program.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;I've been &lt;a href="https://bsky.app/profile/did:plc:aj77r5uwt72o6oimdjfplqoz/post/3ligzkt3d5k2m" rel="noopener noreferrer"&gt;thinking&lt;/a&gt; and &lt;a href="https://bsky.app/profile/dario.cat/post/3lir4hjn3g226" rel="noopener noreferrer"&gt;learning&lt;/a&gt; this week about WebAssembly and its potential since I read &lt;a href="https://creston.blog/wasm-will-replace-containers/" rel="noopener noreferrer"&gt;"WASM will replace containers"&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I decided initially to use &lt;a href="https://wasmer.io/" rel="noopener noreferrer"&gt;Wasmer&lt;/a&gt; and ended up filing a &lt;a href="https://github.com/wasmerio/wasmer/issues/5422" rel="noopener noreferrer"&gt;question on their repository&lt;/a&gt; because their own native binary build command doesn't work as expected.&lt;/p&gt;

&lt;p&gt;Finally, I landed on Bytecode Alliance's &lt;a href="https://github.com/bytecodealliance/wasm-micro-runtime" rel="noopener noreferrer"&gt;WebAssembly Micro Runtime (WAMR)&lt;/a&gt; &lt;code&gt;wamrc&lt;/code&gt; compiler.&lt;/p&gt;

&lt;h2&gt;
  
  
  How?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(module
  ;; Import the required WASI functions
  (import "wasi_snapshot_preview1" "fd_write" (func $fd_write (param i32 i32 i32 i32) (result i32)))

  ;; Define a buffer to store the message
  (memory (export "memory") 1)
  (data (i32.const 8) "Hello, World!\n")

  ;; Define the _start function
  (func $main (export "_start")
    ;; Setup the iovec array
    (i32.store (i32.const 0) (i32.const 8))    ;; pointer to the message
    (i32.store (i32.const 4) (i32.const 14))   ;; length of the message

    ;; Call fd_write
    (call $fd_write
      (i32.const 1)  ;; file_descriptor - 1 for stdout
      (i32.const 0)  ;; *iovs - pointer to the iovec array
      (i32.const 1)  ;; iovs_len - number of iovec entries
      (i32.const 20) ;; nwritten - where to store the number of bytes written
    )
    drop ;; Discard the result
  )
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is WebAssembly Text. Learn more &lt;a href="https://developer.mozilla.org/en-US/docs/WebAssembly/Guides/Understanding_the_text_format" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/WebAssembly/wabt" rel="noopener noreferrer"&gt;wat2wasm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/WebAssembly/binaryen" rel="noopener noreferrer"&gt;wasm-opt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bytecodealliance/wasm-micro-runtime/tree/main/wamr-compiler" rel="noopener noreferrer"&gt;wamrc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything had to be compiled from source.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wasm-opt&lt;/code&gt; isn't really required, so I'll skip it. I learnt about it while looking at these &lt;a href="https://00f.net/2023/01/04/webassembly-benchmark-2023/" rel="noopener noreferrer"&gt;benchmarks&lt;/a&gt;, and I want these notes to keep track of the interesting tidbits I discovered.&lt;/p&gt;

&lt;h3&gt;
  
  
  The process
&lt;/h3&gt;

&lt;p&gt;First, let's generate WASM bytecode from our WebAssembly Text code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wat2wasm ./hello_world.wat &lt;span class="nt"&gt;-o&lt;/span&gt; ./hello_world.wasm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, compile it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wamrc &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aot &lt;span class="nt"&gt;-o&lt;/span&gt; ./hello_world.aot ./hello_world.wasm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
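The C glue in the next step includes a `hello_world.h` that exposes the AOT blob as a byte array (`hello_world_aot` / `hello_world_aot_len`). That generation step isn't shown in these notes; one common way to produce the header, assuming `xxd` is installed, is:

```shell
# Stand-in for wamrc's output; in the real flow this is ./hello_world.aot
printf 'AOT' > hello_world.aot
# xxd -i emits a C array named after the file (dots become underscores):
#   unsigned char hello_world_aot[] = { ... };
#   unsigned int hello_world_aot_len = ...;
xxd -i hello_world.aot > hello_world.h
grep hello_world_aot hello_world.h
```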



&lt;p&gt;For the last step, we need some C glue to embed the WAMR runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;stdio.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;stdlib.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;string.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;"wasm_export.h"&lt;/span&gt;&lt;span class="c1"&gt;  // Now points to WAMR's core/iwasm/include/&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;"hello_world.h"&lt;/span&gt;&lt;span class="cp"&gt;
&lt;/span&gt;
&lt;span class="c1"&gt;// Define a static pool buffer with proper alignment&lt;/span&gt;
&lt;span class="cp"&gt;#define POOL_SIZE (256 * 1024)  // 256 KB pool
&lt;/span&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;uint8_t&lt;/span&gt; &lt;span class="n"&gt;global_pool_buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;POOL_SIZE&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="n"&gt;__attribute__&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;aligned&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;  &lt;span class="c1"&gt;// 8-byte alignment&lt;/span&gt;

&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;RuntimeInitArgs&lt;/span&gt; &lt;span class="n"&gt;init_args&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;memset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;init_args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;RuntimeInitArgs&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="c1"&gt;// Configure memory pool&lt;/span&gt;
    &lt;span class="n"&gt;init_args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mem_alloc_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Alloc_With_Pool&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// Use pool allocator&lt;/span&gt;
    &lt;span class="n"&gt;init_args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mem_alloc_option&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;heap_buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;global_pool_buf&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;init_args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mem_alloc_option&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;heap_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;global_pool_buf&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Initialize the WAMR runtime&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;wasm_runtime_full_init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;init_args&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Runtime initialization failed.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Load the AOT module from the embedded blob&lt;/span&gt;
    &lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="n"&gt;error_buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="n"&gt;wasm_module_t&lt;/span&gt; &lt;span class="n"&gt;module&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;wasm_runtime_load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hello_world_aot&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hello_world_aot_len&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;error_buf&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;error_buf&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Load failed: %s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;error_buf&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Instantiate the module&lt;/span&gt;
    &lt;span class="n"&gt;wasm_module_inst_t&lt;/span&gt; &lt;span class="n"&gt;inst&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;wasm_runtime_instantiate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;65536&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;65536&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;error_buf&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;error_buf&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;inst&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to instantiate: %s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;error_buf&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Call a function named "main" exported from the WASM module&lt;/span&gt;
    &lt;span class="n"&gt;wasm_function_inst_t&lt;/span&gt; &lt;span class="n"&gt;func&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;wasm_runtime_lookup_function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inst&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"_start"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Function '_start' not found.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;wasm_exec_env_t&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;wasm_runtime_create_exec_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inst&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;65536&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to create environment.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Execute the function&lt;/span&gt;
    &lt;span class="n"&gt;wasm_runtime_call_wasm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Cleanup&lt;/span&gt;
    &lt;span class="n"&gt;wasm_runtime_deinstantiate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inst&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;wasm_runtime_unload&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;wasm_runtime_destroy&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The sharp reader may be asking "Where are the &lt;code&gt;wasm_export.h&lt;/code&gt; and &lt;code&gt;hello_world.h&lt;/code&gt; includes?" The former is in &lt;a href="https://github.com/bytecodealliance/wasm-micro-runtime/tree/main/core/iwasm/include" rel="noopener noreferrer"&gt;&lt;code&gt;wasm-micro-runtime/core/iwasm/include&lt;/code&gt;&lt;/a&gt;, and the latter needs to be generated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xxd &lt;span class="nt"&gt;-i&lt;/span&gt; hello_world.aot &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; hello_world.h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we can compile and run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcc &lt;span class="nt"&gt;-static&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/Code/hello-wasm/main.c &lt;span class="nv"&gt;$HOME&lt;/span&gt;/Code/wasm-micro-runtime/product-mini/platforms/linux/build/libiwasm.a &lt;span class="nt"&gt;-lm&lt;/span&gt; &lt;span class="nt"&gt;-I&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/Code/wasm-micro-runtime/core/iwasm/include &lt;span class="nt"&gt;-I&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/Code/hello-wasm &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/Code/hello-wasm/hello_world

&lt;span class="nv"&gt;$HOME&lt;/span&gt;/Code/hello-wasm/hello_world
Hello, World!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: it's not a perfectly statically-linked binary. It still depends on &lt;code&gt;getaddrinfo&lt;/code&gt;, a known limitation of static linking with glibc, but that's better left out of scope for now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.flickr.com/photos/duncan/10165341933" rel="noopener noreferrer"&gt;Image&lt;/a&gt; by Duncan Cumming.&lt;/p&gt;

</description>
      <category>webassembly</category>
    </item>
    <item>
      <title>A portal between Ruby and Go (using FFI)</title>
      <dc:creator>Dario Castañé</dc:creator>
      <pubDate>Tue, 06 Sep 2022 09:48:50 +0000</pubDate>
      <link>https://forem.com/dcc/a-portal-between-ruby-and-go-using-ffi-48gn</link>
      <guid>https://forem.com/dcc/a-portal-between-ruby-and-go-using-ffi-48gn</guid>
      <description>&lt;p&gt;Thinking about migrating some huge REST APIs from Ruby to Go, I researched how to replace the Ruby code progressively with Go code.&lt;/p&gt;

&lt;p&gt;I ended up with an FFI-based prototype. The solution is to have the Ruby code load a Go shared library and wrap the exported Go functions in Ruby classes and methods.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ruby side
&lt;/h2&gt;

&lt;p&gt;The Ruby side is pretty straightforward. We load the Go shared library and attach the exported Go functions, wrapping them in Ruby classes and methods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'ffi'&lt;/span&gt;

&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="nn"&gt;Portal&lt;/span&gt;
  &lt;span class="kp"&gt;extend&lt;/span&gt; &lt;span class="no"&gt;FFI&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Library&lt;/span&gt;

  &lt;span class="n"&gt;ffi_lib&lt;/span&gt; &lt;span class="s1"&gt;'./libexample.so'&lt;/span&gt;

  &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Example&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;FFI&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Struct&lt;/span&gt;
    &lt;span class="c1"&gt;# This must be completely in sync with the C struct defined in Go code.&lt;/span&gt;
    &lt;span class="n"&gt;layout&lt;/span&gt; &lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:prefix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:pointer&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prefix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:prefix&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;FFI&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;MemoryPointer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prefix&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;

    &lt;span class="c1"&gt;# This feels convoluted, but it hides the fact that our function is loaded&lt;/span&gt;
    &lt;span class="c1"&gt;# outside of the "struct mirror" class.&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;greet&lt;/span&gt;
      &lt;span class="no"&gt;Portal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;greet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="n"&gt;attach_function&lt;/span&gt; &lt;span class="s1"&gt;'greet'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="no"&gt;Example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;by_value&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="ss"&gt;:void&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;ex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Portal&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'C'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;137&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;greet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Go side
&lt;/h2&gt;

&lt;p&gt;The Go side is a bit more complex. We need to define a C-compatible struct and export the functions we want to use from Ruby.&lt;/p&gt;

&lt;p&gt;The cool thing is that we can define the functions with the struct as the receiver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="c"&gt;/*
struct example {
    int ID;
    char *Prefix;
};
*/&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"C"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"fmt"&lt;/span&gt;

&lt;span class="c"&gt;// This declaration is just an alias to the C struct.&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Example&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;struct_example&lt;/span&gt;

&lt;span class="c"&gt;//export greet&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="n"&gt;Example&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;greet&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello from %s-%d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GoString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Prefix&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The build
&lt;/h2&gt;

&lt;p&gt;The build is also straightforward. We need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compile the Go code to a shared library. This command also generates the cgo bindings and a C header.&lt;/li&gt;
&lt;li&gt;Run the Ruby code.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go build &lt;span class="nt"&gt;-buildmode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;c-shared &lt;span class="nt"&gt;-o&lt;/span&gt; libexample.so example.go
ruby portal.rb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;If we run the code, we get the following output:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hello from C-137&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The benefits
&lt;/h2&gt;

&lt;p&gt;The main benefit of this approach is that we can progressively migrate the Ruby code to Go, replacing it piece by piece until we are ready to switch to a pure Go service.&lt;/p&gt;

&lt;p&gt;And it's not only about replacing existing Ruby code: we can also reuse our RSpec/Minitest specs to test the new Go code. Instead of a full rewrite with no tests, the existing suite keeps validating behavior as each piece is replaced.&lt;/p&gt;
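&lt;p&gt;To illustrate the idea, here is a self-contained Minitest sketch. &lt;code&gt;Portal::Example&lt;/code&gt; is stubbed in plain Ruby so the snippet runs on its own, and &lt;code&gt;greeting&lt;/code&gt; is a hypothetical variant of &lt;code&gt;greet&lt;/code&gt; that returns the string instead of printing it; in the real setup the same spec would exercise the FFI-backed class.&lt;/p&gt;

```ruby
# Sketch: reusing an existing Minitest spec against the migrated backend.
# Portal::Example is a plain-Ruby stand-in here so the example runs on its
# own; in the real setup it would be the FFI-backed class from above, and
# `greeting` is a hypothetical variant of `greet` that returns the string
# instead of printing it.
require 'minitest/autorun'

module Portal
  class Example
    attr_reader :prefix, :id

    def initialize(prefix, id)
      @prefix = prefix
      @id = id
    end

    def greeting
      "Hello from #{prefix}-#{id}"
    end
  end
end

class ExampleTest < Minitest::Test
  # This assertion stays the same whether Ruby or Go answers the call.
  def test_greeting_format
    ex = Portal::Example.new('C', 137)
    assert_equal 'Hello from C-137', ex.greeting
  end
end
```

&lt;p&gt;The spec only cares about observable behavior, so it keeps passing as the implementation behind it moves from Ruby to Go.&lt;/p&gt;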

&lt;h2&gt;
  
  
  The (possible) drawbacks
&lt;/h2&gt;

&lt;p&gt;Performance-wise this approach is not ideal: every call crosses an FFI boundary, which adds some overhead per call. That overhead needs to be measured before applying this to high-traffic services.&lt;/p&gt;
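&lt;p&gt;As a starting point for that measurement, a minimal sketch using Ruby's built-in Benchmark module. The &lt;code&gt;greet&lt;/code&gt; method below is a placeholder so the snippet runs standalone; point it at the FFI-backed &lt;code&gt;Portal.greet&lt;/code&gt; call from the example above to measure the real Ruby-to-Go round trip.&lt;/p&gt;

```ruby
# Minimal sketch of measuring per-call overhead with Ruby's built-in
# Benchmark module. `greet` is a placeholder so the snippet runs on its
# own; swap in the FFI-backed Portal.greet call to measure the real
# Ruby -> C ABI -> Go round trip.
require 'benchmark'

CALLS = 100_000

def greet
  nil # placeholder for the FFI-backed call
end

elapsed = Benchmark.realtime do
  CALLS.times { greet }
end

puts format('%.3f µs per call', (elapsed / CALLS) * 1_000_000)
```

&lt;p&gt;Comparing this number against a plain Ruby implementation of the same function gives a rough upper bound on how much the indirection costs per call.&lt;/p&gt;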

</description>
      <category>ruby</category>
      <category>go</category>
      <category>ffi</category>
    </item>
    <item>
      <title>Simple Deno HTTP server running in Nanos unikernel</title>
      <dc:creator>Dario Castañé</dc:creator>
      <pubDate>Wed, 25 Aug 2021 21:11:15 +0000</pubDate>
      <link>https://forem.com/dcc/simple-deno-http-server-running-in-nanos-unikernel-2e3f</link>
      <guid>https://forem.com/dcc/simple-deno-http-server-running-in-nanos-unikernel-2e3f</guid>
      <description>&lt;p&gt;This short post shows how to get a &lt;a href="https://deno.land/manual@v1.13.2/tools/compiler"&gt;self-executable&lt;/a&gt; Deno HTTP server to run in &lt;a href="https://nanos.org/"&gt;Nanos&lt;/a&gt; using &lt;a href="https://nanovms.gitbook.io/ops/basic_usage#build-and-deploy-nanos-unikernel"&gt;Ops&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The server
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;PORT&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listener&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`http://localhost:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;conn&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requests&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serveHttp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;respondWith&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;respondWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello world&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Compilation and local deployment
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deno compile &lt;span class="nt"&gt;--allow-env&lt;/span&gt; &lt;span class="nt"&gt;--allow-net&lt;/span&gt; hello.ts
ops run &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/ &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8000 &lt;span class="nt"&gt;-p&lt;/span&gt; 8000 hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trick is to set the HOME variable. It can have any value, but &lt;code&gt;/&lt;/code&gt; feels right.&lt;/p&gt;
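&lt;p&gt;Instead of passing &lt;code&gt;-e&lt;/code&gt; flags on every run, Ops can also read these settings from a config file. A sketch (the key names here follow Ops' config format as I understand it; double-check them against the Ops docs):&lt;/p&gt;

```json
{
  "Env": {
    "HOME": "/",
    "PORT": "8000"
  },
  "RunConfig": {
    "Ports": ["8000"]
  }
}
```

&lt;p&gt;Then run it with &lt;code&gt;ops run -c config.json hello&lt;/code&gt;.&lt;/p&gt;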

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we have seen a minimal Deno HTTP server working in a Nanos unikernel.&lt;/p&gt;

</description>
      <category>deno</category>
      <category>ops</category>
      <category>nanos</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Hi, I'm Dario Castañé</title>
      <dc:creator>Dario Castañé</dc:creator>
      <pubDate>Thu, 22 Jun 2017 07:25:50 +0000</pubDate>
      <link>https://forem.com/dcc/hi-im-dario-casta</link>
      <guid>https://forem.com/dcc/hi-im-dario-casta</guid>
      <description>&lt;p&gt;I have been coding for too much years. I don't even remember. No, seriously, I was born in 1985 and I started to code in 1997.&lt;/p&gt;

&lt;p&gt;You can find me on GitHub as &lt;a href="https://github.com/imdario" rel="noopener noreferrer"&gt;imdario&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I live in Barcelona (Catalonia).&lt;/p&gt;

&lt;p&gt;I work for &lt;a href="http://loyal.guru" rel="noopener noreferrer"&gt;Loyal Guru&lt;/a&gt; as tech lead.&lt;/p&gt;

&lt;p&gt;I mostly program in Ruby and Go, among other languages. I consider myself a polyglot programmer.&lt;/p&gt;

&lt;p&gt;I am currently learning more about blockchain, web components, IoT, and other topics.&lt;/p&gt;

&lt;p&gt;Nice to meet you and happy hacking &amp;lt;3&lt;/p&gt;

</description>
      <category>introduction</category>
    </item>
  </channel>
</rss>
