<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rodrigo Sicarelli</title>
    <description>The latest articles on Forem by Rodrigo Sicarelli (@rsicarelli).</description>
    <link>https://forem.com/rsicarelli</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1168541%2F5fb46827-1580-4e3d-973a-2b6f4f626dd5.jpg</url>
      <title>Forem: Rodrigo Sicarelli</title>
      <link>https://forem.com/rsicarelli</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rsicarelli"/>
    <language>en</language>
    <item>
      <title>Claude Code 101: Demystifying Language Models</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Thu, 09 Apr 2026 11:34:24 +0000</pubDate>
      <link>https://forem.com/rsicarelli/claude-code-101-demystifying-language-models-3h8o</link>
      <guid>https://forem.com/rsicarelli/claude-code-101-demystifying-language-models-3h8o</guid>
      <description>&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;What is a token&lt;/li&gt;
&lt;li&gt;The context window&lt;/li&gt;
&lt;li&gt;How models generate text&lt;/li&gt;
&lt;li&gt;The attention mechanism&lt;/li&gt;
&lt;li&gt;How the model picks between options&lt;/li&gt;
&lt;li&gt;Model families&lt;/li&gt;
&lt;li&gt;What models can't do&lt;/li&gt;
&lt;li&gt;How much it costs&lt;/li&gt;
&lt;li&gt;Final thoughts&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;🇧🇷 &lt;a href="https://dev.to/rsicarelli/claude-code-101-desmistificando-os-modelos-de-linguagem-5big"&gt;Read this article in Portuguese&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/rsicarelli/claude-code-101-introduction-to-agentic-programming-3p83"&gt;previous article&lt;/a&gt;, we built the entire factory: the evolution from manual production to autonomous machines, the ecosystem of agentic tools, the three pillars (prompt, context, and harness engineering). You know what the factory does, who works in it, and even how much revenue it pulls in.&lt;/p&gt;

&lt;p&gt;But the machines in the factory build things. And to understand how they build, the best analogy I know is LEGO. Standardized pieces that snap together one at a time, following (or ignoring) a manual, on a desk with limited space.&lt;/p&gt;

&lt;p&gt;This is the second article in the &lt;strong&gt;Claude Code 101&lt;/strong&gt; series, and here we take that mechanic apart. What tokens are, how the context window works, why models generate text the way they do, and why they sometimes get things wrong with unsettling confidence.&lt;/p&gt;




&lt;h2&gt;What is a token&lt;/h2&gt;

&lt;p&gt;Computers don't understand text. They understand numbers. Before a language model processes anything you wrote, every word, space, and punctuation mark has to become a sequence of integers. Those integers are &lt;strong&gt;tokens&lt;/strong&gt;: the standardized pieces the model works with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cmhiufrvy02bqod9yq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cmhiufrvy02bqod9yq4.png" alt="What is a token" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A token isn't necessarily a word. It can be a whole word ("hello" becomes 1 token), a chunk of a word ("tokenization" becomes several tokens), an isolated character, or even a single byte. The rule of thumb for English: &lt;strong&gt;1 token is roughly 4 characters&lt;/strong&gt;, or about 3/4 of a word. For Portuguese, it's closer to 1 token per 2.7 to 3 characters.&lt;/p&gt;
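&lt;p&gt;The rule of thumb is easy to turn into a back-of-the-envelope estimator. A minimal sketch in Python; the ratios are the approximations from the text, not a real tokenizer:&lt;/p&gt;

```python
def estimate_tokens(text, language="en"):
    """Rough token estimate from character count.

    Rule of thumb: ~4 characters per token in English,
    ~2.85 characters per token in Portuguese.
    """
    chars_per_token = {"en": 4.0, "pt": 2.85}
    return round(len(text) / chars_per_token[language])

sentence = "The quick brown fox jumps over the lazy dog"
print(estimate_tokens(sentence, "en"))   # ~11 tokens
print(estimate_tokens(sentence, "pt"))   # same length as Portuguese: ~15
```

&lt;p&gt;For real counts, use the provider's own tokenizer; this only gives you a planning number.&lt;/p&gt;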

&lt;h3&gt;How the vocabulary is built&lt;/h3&gt;

&lt;p&gt;Most LLMs use an algorithm called &lt;strong&gt;BPE&lt;/strong&gt; (Byte Pair Encoding) to build their vocabulary. The logic is straightforward: start with the 256 possible byte values, scan billions of training texts, find the most frequent byte pair, merge it into a new token, repeat. The result is a vocabulary ranging from ~100K to ~260K tokens, depending on the model.&lt;/p&gt;
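&lt;p&gt;The merge loop is simple enough to sketch. A toy version of the BPE idea, starting from individual characters instead of raw bytes for readability:&lt;/p&gt;

```python
from collections import Counter

def bpe_merges(corpus, num_merges):
    """Toy BPE: repeatedly merge the most frequent adjacent pair
    of symbols into a single new symbol."""
    symbols = list(corpus)            # start from single characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        # rebuild the sequence with every (a, b) pair fused
        merged = []
        i = 0
        while i != len(symbols):
            if i + 1 != len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return merges, symbols

merges, symbols = bpe_merges("low lower lowest", 3)
print(merges)   # first merges: 'lo', then 'low'
```

&lt;p&gt;Real tokenizers run this over billions of documents and hundreds of thousands of merges, but the mechanic is the same: frequent sequences become single pieces.&lt;/p&gt;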

&lt;p&gt;The detail that matters: this training corpus is dominated by English. Words like "the," "and," "great" become single tokens, whole pieces. Words in Portuguese get fragmented into smaller chunks, as if the kit came with pieces cut in half. Compare:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bd3cbahuboft1az19wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bd3cbahuboft1az19wg.png" alt="Tokenization: English vs Portuguese" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The character "o" with an accent becomes a separate token because accented characters are rare in the training data. This isn't an irrelevant technical detail. It directly affects your wallet and the model's effective capacity when you work in Portuguese.&lt;/p&gt;

&lt;h3&gt;The linguistic tax on Portuguese&lt;/h3&gt;

&lt;p&gt;A study by Petrov et al. presented at NeurIPS 2023 measured what they called the "tokenization premium" across languages [1]. The numbers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tokenizer&lt;/th&gt;
&lt;th&gt;How much more Portuguese consumes vs English&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GPT-2 (&lt;code&gt;r50k_base&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;1.94x&lt;/strong&gt; (nearly double)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4 (&lt;code&gt;cl100k_base&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;1.48x&lt;/strong&gt; (~50% more)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4o (&lt;code&gt;o200k_base&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;~1.3-1.4x&lt;/strong&gt; (improved)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The GPT-2 and GPT-4 numbers come directly from Petrov et al. [1]. The GPT-4o estimate reflects the improvement trend with larger vocabularies, confirmed by subsequent studies.&lt;/p&gt;

&lt;p&gt;The good news: each generation of tokenizer narrows this gap. The news that actually matters: even in the best case, Portuguese still consumes at least 30% more pieces than English to build the same thing. This "tax" will come back when we talk about context windows and cost, because it compounds with every interaction.&lt;/p&gt;




&lt;h2&gt;The context window&lt;/h2&gt;

&lt;p&gt;If tokens are the pieces, the context window is the desk where you build. Fixed size. Everything has to fit on it: the instructions you sent, the conversation history, reference files, and the response the model is constructing. When the desk fills up, that's it. The model doesn't "remember" anything left off the surface.&lt;/p&gt;

&lt;h3&gt;The desks of 2026&lt;/h3&gt;

&lt;p&gt;The market has converged on &lt;strong&gt;1 million tokens&lt;/strong&gt; (1M) as the standard for frontier models [2]. In the table below, "K" means thousand and "M" means million:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Desk size&lt;/th&gt;
&lt;th&gt;Max response&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Claude Opus 4.6&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;128K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Claude Sonnet 4.6&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;64K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Claude Haiku 4.5&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;200K tokens&lt;/td&gt;
&lt;td&gt;64K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://platform.openai.com/docs/models" rel="noopener noreferrer"&gt;GPT-5.4&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1.05M tokens&lt;/td&gt;
&lt;td&gt;128K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://platform.openai.com/docs/models" rel="noopener noreferrer"&gt;GPT-4.1&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;32K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://ai.google.dev/gemini-api/docs/models" rel="noopener noreferrer"&gt;Gemini 2.5 Pro&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;65K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/" rel="noopener noreferrer"&gt;Llama 4 Scout&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;10M tokens&lt;/td&gt;
&lt;td&gt;varies&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The desk is shared. Everything you send to the model (your question, files, conversation history) and everything it replies with must fit together on the same surface. A desk of 1 million tokens sounds enormous, but the response already reserves a chunk of it, and the rest is the maximum you can send.&lt;/p&gt;

&lt;p&gt;For a sense of scale: 1 million tokens is roughly 750,000 words in English, about 8 to 10 books. For Portuguese, because of the tokenization tax, that drops to around 500,000 words. About 7 books.&lt;/p&gt;
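&lt;p&gt;The shared-desk arithmetic fits in a few lines. A sketch using the Claude Haiku 4.5 numbers from the table above (200K window, 64K max response):&lt;/p&gt;

```python
# Claude Haiku 4.5 numbers from the table: 200K window, 64K max response
WINDOW = 200_000
MAX_OUTPUT = 64_000

def usable_input(window, max_output):
    """Tokens left for the prompt once the response budget is reserved."""
    return window - max_output

def fits(input_tokens, window=WINDOW, max_output=MAX_OUTPUT):
    """The desk is shared: input plus the reserved response must fit together."""
    return usable_input(window, max_output) >= input_tokens

print(usable_input(WINDOW, MAX_OUTPUT))   # 136000 tokens left for input
print(fits(150_000))                      # False: over budget
print(fits(120_000))                      # True
```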

&lt;h3&gt;Advertised size vs. actual size&lt;/h3&gt;

&lt;p&gt;Here's a point that rarely comes up. Having a desk of 1 million tokens doesn't mean the model uses all of that surface well.&lt;/p&gt;

&lt;p&gt;Recent research shows that the model's ability to pay attention (a mechanism we'll dig into right below) drops as context grows, especially for information positioned in the middle of the text [3]. The phenomenon even has a name: &lt;strong&gt;"lost in the middle."&lt;/strong&gt; In practice, the model's effective attention span is significantly smaller than the advertised window. The NoLiMa benchmark (ICML 2025) showed that most LLMs fail more than half the time when they need to locate specific information in contexts beyond 32K tokens [9].&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbx5anl563phg8seq2te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbx5anl563phg8seq2te.png" alt="The context window" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the linguistic tax shows up again here. If the effective window of a model with 200K tokens is already significantly smaller than advertised, for content that's 100% in Portuguese, plan with a generous safety margin. The usable space can drop to roughly 80K-90K tokens of English-equivalent content. Larger pieces take up more room on the same surface.&lt;/p&gt;




&lt;h2&gt;How models generate text&lt;/h2&gt;

&lt;p&gt;You already know what the pieces are (tokens) and the size of the desk (context window). Now comes the assembly process itself.&lt;/p&gt;

&lt;p&gt;The mechanic is surprisingly simple. The model looks at everything already on the desk, calculates a probability distribution over the entire vocabulary (between ~100K and ~260K possible pieces) to decide which one fits best in the sequence, places one, and repeats. One at a time, from beginning to end of the response. There's no master plan. This is called &lt;strong&gt;autoregressive generation&lt;/strong&gt; [4], and it's the core mechanic of modern LLMs, which are built on the Transformer architecture published in 2017 by Vaswani et al.&lt;/p&gt;

&lt;p&gt;Example: the model receives &lt;strong&gt;"The sky is"&lt;/strong&gt; and needs to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56fqtnleftbf3a1zgvzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56fqtnleftbf3a1zgvzo.png" alt="Autoregressive generation" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
Result: &lt;strong&gt;"The sky is blue today."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each piece placed depends on all the ones before it: both the original input and what the model has already built. That's why responses sometimes start well and derail halfway through. The model doesn't know where it will end up when it starts generating.&lt;/p&gt;
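&lt;p&gt;The loop itself is tiny. A toy sketch with an invented probability table; a real model computes these distributions over its full vocabulary at every step:&lt;/p&gt;

```python
# Hypothetical next-token probabilities, invented for illustration only.
next_token_probs = {
    "The":   {"sky": 0.4, "cat": 0.3, "end": 0.3},
    "sky":   {"is": 0.8, "fell": 0.2},
    "is":    {"blue": 0.6, "green": 0.2, "falling": 0.2},
    "blue":  {"today": 0.5, ".": 0.5},
    "today": {".": 1.0},
}

def generate(prompt, max_steps=10):
    """Autoregressive generation: look at the sequence so far,
    pick the most probable next token, append, repeat."""
    tokens = prompt.split()
    for _ in range(max_steps):
        dist = next_token_probs.get(tokens[-1])
        if dist is None:
            break
        # greedy decoding: always take the top piece (temperature 0)
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

print(generate("The sky is"))   # "The sky is blue today ."
```

&lt;p&gt;Note that the toy model only looks at the last token; the attention mechanism in the next section is what lets real models weigh the entire desk at each step.&lt;/p&gt;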

&lt;p&gt;If you read the &lt;a href="https://dev.to/rsicarelli/claude-code-101-introduction-to-agentic-programming-3p83"&gt;previous article&lt;/a&gt;, you might recognize this mechanism. Remember autocomplete, phase 1 of the evolution? The code completion that suggested the next line in the editor? The underlying mechanism is the same: next-token prediction. The difference is scale. Models like GPT-2 (2019) had 1.5 billion parameters and a tiny desk. Claude Opus 4.6 operates at a completely different scale, with a context window a thousand times larger. The assembly process is the same. The ability to build complex things is what changed.&lt;/p&gt;




&lt;h2&gt;The attention mechanism&lt;/h2&gt;

&lt;p&gt;The assembly process explains that the model places one piece at a time. But we still need to understand how it decides the probabilities. If the input is "I need to check the bank by the river," how does the model know whether "bank" means the riverbank or a financial institution?&lt;/p&gt;

&lt;p&gt;The answer is the &lt;strong&gt;attention mechanism&lt;/strong&gt; (self-attention), introduced in the paper "Attention Is All You Need" [4]. It's the heart of the &lt;strong&gt;Transformer&lt;/strong&gt; architecture that powers every modern LLM.&lt;/p&gt;

&lt;h3&gt;How the model sees context&lt;/h3&gt;

&lt;p&gt;Imagine that the building manual doesn't just show the next step. For each new piece, it highlights which parts of the construction matter for this decision: the foundation lights up bright (supports everything), the nearby towers glow (they define the pattern), the garden on the other side stays dim (irrelevant right now). The attention mechanism does exactly this: for each token, it "lights up" the preceding ones that carry the most weight and "dims" the ones that don't matter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mhuku7ed86lkq33mvdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mhuku7ed86lkq33mvdu.png" alt="Attention mechanism" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before Transformers, it was like building LEGO with someone dictating instructions: one piece at a time, no repeating, no going back. Missed step 12? Tough luck. The Transformer is like having the entire manual open on the desk, all pages visible at once. This ability to process everything in parallel was the leap that made it feasible to train models at today's scale.&lt;/p&gt;

&lt;h3&gt;Disambiguation in practice&lt;/h3&gt;

&lt;p&gt;Go back to the example: "I need to check the bank by the river." The attention mechanism makes the token "bank" pay close attention to "river," and little attention to "need" and "to." It's like a build where a blue piece could be sky or ocean depending on what's around it. Context resolves the ambiguity.&lt;/p&gt;
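&lt;p&gt;A stripped-down sketch of the scoring step, with invented 2-dimensional vectors. Real models learn embeddings with thousands of dimensions and add learned projections, scaling, and multiple heads on top of this, but the dot-product-then-softmax core is the same:&lt;/p&gt;

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented 2-d vectors: "bank" and "river" point in similar directions,
# "need" does not. Real embeddings have thousands of dimensions.
embeddings = {
    "need":  [0.1, 0.0],
    "bank":  [0.9, 0.8],
    "river": [0.8, 0.9],
}

def attention_weights(query_word, context_words):
    """One attention step: score every context token by dot product
    with the query, then softmax the scores into weights."""
    q = embeddings[query_word]
    scores = [sum(qi * ki for qi, ki in zip(q, embeddings[w]))
              for w in context_words]
    return dict(zip(context_words, softmax(scores)))

weights = attention_weights("bank", ["need", "river"])
print(weights)   # "river" gets far more weight than "need"
```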

&lt;p&gt;This mechanism has a price. For every piece on the desk, the model looks at all the others before deciding on the fit [4]. On a desk with 10 pieces, no problem. On a desk with 10,000, every decision requires looking at 10,000 pieces. Doubling the desk size doesn't double the work; it quadruples it.&lt;/p&gt;

&lt;p&gt;On top of that, building costs more than looking. When you send a question, the model reads everything in a single pass, like scanning a full page of the manual at once. But when it constructs the response, it's one piece at a time, each one requiring a glance at the entire desk. That's why the price per output token is 3x to 5x higher than input.&lt;/p&gt;

&lt;h3&gt;What this means for you&lt;/h3&gt;

&lt;p&gt;If attention works by weighing the relevance of each token against the others, clear and well-structured prompts make the model's job easier. Ambiguity in the input produces "confusion" in the attention: the model has to distribute weight across competing interpretations. A precise prompt is like clean code: the intent is obvious and the attention mechanism focuses on what matters. The more organized the desk, the more precise the next piece.&lt;/p&gt;

&lt;p&gt;This isn't abstract. It's the technical foundation for why prompt engineering works, something we'll explore in depth in Part 6 of this series.&lt;/p&gt;




&lt;h2&gt;How the model picks between options&lt;/h2&gt;

&lt;p&gt;You now understand how the model weighs its options. But when several tokens have similar probabilities, what decides which one gets picked?&lt;/p&gt;

&lt;p&gt;The difference is between following the manual to the letter and improvising.&lt;/p&gt;

&lt;h3&gt;Temperature&lt;/h3&gt;

&lt;p&gt;You're building a blue wall and need the next piece. In the box, the pieces are sorted: on top are the blue ones that fit perfectly, in the middle are some green ones that could work as an accent, and way at the bottom there's a red wheel that makes no sense at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Temperature&lt;/strong&gt; controls how deep the model reaches into the box. At zero, it always grabs the piece on top: same choice, every time, no surprises. At 0.7, it sometimes fishes out a green one nobody expected, and it adds a nice touch to the build. Above 1.0, it pulls the red wheel and snaps it onto the wall anyway.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Temperature&lt;/th&gt;
&lt;th&gt;Completing "The recipe calls for..."&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;"flour, sugar, and cocoa."&lt;/td&gt;
&lt;td&gt;Always the same answer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.3&lt;/td&gt;
&lt;td&gt;"whole wheat flour, coconut oil, and cocoa."&lt;/td&gt;
&lt;td&gt;Minor variations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;"exotic spices and a hint of Sicilian lemon."&lt;/td&gt;
&lt;td&gt;Creative&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.5&lt;/td&gt;
&lt;td&gt;"melted dreams in dragon caramel."&lt;/td&gt;
&lt;td&gt;Incoherent&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodq8hfhxniznfvx9qqkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodq8hfhxniznfvx9qqkc.png" alt="Temperature: how deep into the box" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Top-p&lt;/h3&gt;

&lt;p&gt;Temperature isn't the only control. &lt;strong&gt;Top-p&lt;/strong&gt; works differently: instead of changing how deep the model reaches into the box, it removes pieces from the box before the model chooses. With top-p at 0.9, only the most likely pieces that together account for 90% of the probability mass stay in the box; the long tail of improbable pieces isn't even available. The effect is similar to temperature, and the recommendation from providers is to adjust one or the other, not both at the same time.&lt;/p&gt;
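&lt;p&gt;A minimal sketch of that filtering step (the distribution is invented for illustration):&lt;/p&gt;

```python
def top_p_filter(probs, p=0.9):
    """Nucleus sampling: keep the smallest set of most likely tokens
    whose cumulative probability reaches p; drop the rest, renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for tok, prob in ranked:
        kept[tok] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {tok: prob / total for tok, prob in kept.items()}

# Invented distribution for "The sky is..."
probs = {"blue": 0.55, "gray": 0.30, "clear": 0.10, "wheel": 0.05}
print(top_p_filter(probs, p=0.9))   # "wheel" is removed from the box
```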

&lt;h3&gt;What this means in practice&lt;/h3&gt;

&lt;p&gt;When you start using agentic tools to write code (from Part 5 of this series), these values come pre-calibrated. But understanding that they exist helps explain why the model sometimes surprises you with an unexpected response: someone let it dig deeper into the box.&lt;/p&gt;

&lt;p&gt;For what we care about in this series, the rule is simple: for code, the model works best in predictable mode. Right piece, right place, no improvisation.&lt;/p&gt;




&lt;h2&gt;Model families&lt;/h2&gt;

&lt;p&gt;Not every piece works for every build. LEGO Duplo (big pieces, simple) is perfect for getting started, but you can't build a working motor with it. LEGO Technic (gears, axles, complexity) enables sophisticated builds, but costs more and takes more time. The same logic applies to language models.&lt;/p&gt;

&lt;h3&gt;Reasoning models&lt;/h3&gt;

&lt;p&gt;The Technic kits: they "think before answering," spending internal tokens on step-by-step reasoning (extended thinking) before producing the final response. They're the most capable, slowest, and most expensive. The prices below are in dollars per MTok (1 million tokens): the first value is the input cost (what you send), the second is the output cost (what the model generates).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Opus 4.6&lt;/strong&gt; (Anthropic) reaches 80.8% on SWE-bench Verified [5]. $5/$25 per MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;o3 / o3-pro&lt;/strong&gt; (OpenAI) are dedicated reasoning models. o3-pro costs $20/$80 per MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Pro&lt;/strong&gt; (Google) lets you configure the "thinking budget." $1.25/$10 per MTok.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Fast models&lt;/h3&gt;

&lt;p&gt;The Duplo kits: bigger pieces, quick assembly, immediate results. Built for low latency and high volume.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt; (Anthropic): $1/$5 per MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-4o-mini&lt;/strong&gt; (OpenAI): $0.15/$0.60 per MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Flash&lt;/strong&gt; (Google): $0.30/$2.50 per MTok.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Code models&lt;/h3&gt;

&lt;p&gt;Kits optimized for specific builds. Like the Creator Expert line, where each set is designed for a particular result.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPT-4.1&lt;/strong&gt; (OpenAI): 1M context, $2/$8 per MTok. Explicitly optimized for code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;&lt;/strong&gt; (Anthropic): uses Opus/Sonnet under the hood, but with an entire harness of tools to read, edit, execute, and commit.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Open-weight models&lt;/h3&gt;

&lt;p&gt;Kits with all the pieces exposed: you build, take apart, and adapt however you want.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Llama 4 Scout&lt;/strong&gt; (Meta): 10M context, open-weight [6].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Llama 4 Maverick&lt;/strong&gt; (Meta): 1M context, 17B active parameters out of 400B total.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeepSeek R1&lt;/strong&gt;: open-source (MIT), 671B parameters, strong reasoning.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Choosing the right kit&lt;/h3&gt;

&lt;p&gt;Using Opus to classify tweet sentiment is like buying a 4,000-piece Technic set to build a cube. The difference between Haiku ($1/$5 per MTok) and Opus ($5/$25 per MTok) is &lt;strong&gt;5x on input and 5x on output&lt;/strong&gt;. Plenty of tasks that seem to "need" a large model work perfectly well with a smaller one.&lt;/p&gt;

&lt;p&gt;The rule of thumb: always start with the cheapest model that could work. Test with Haiku, Flash, or mini. If quality falls short, move up. Opus and o3-pro are reserved for when you truly need them.&lt;/p&gt;
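&lt;p&gt;The price gap is easy to put in numbers. A sketch using the per-MTok prices quoted in this article (check the providers' pricing pages before relying on them), with a hypothetical classification workload:&lt;/p&gt;

```python
# Prices in USD per million tokens, as quoted in this article.
PRICES = {
    "claude-haiku-4.5": {"input": 1.00, "output": 5.00},
    "claude-opus-4.6":  {"input": 5.00, "output": 25.00},
}

def cost_usd(model, input_tokens, output_tokens):
    """Monthly cost: tokens in each direction times the per-MTok rate."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical month of sentiment classification: 50M tokens in, 5M out
for model in PRICES:
    print(model, cost_usd(model, 50_000_000, 5_000_000))
```

&lt;p&gt;Same workload, $75 vs. $375 a month. If Haiku's quality holds up on your task, the 5x difference is pure savings.&lt;/p&gt;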

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpdb1z9byea747ujos8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpdb1z9byea747ujos8d.png" alt="Model families" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But regardless of which kit you choose, they all share the same fundamental limitations.&lt;/p&gt;




&lt;h2&gt;What models can't do&lt;/h2&gt;

&lt;p&gt;The build might look perfect. Visually flawless, every piece in place. But push the wall and it falls over. The model doesn't "know" whether what it built actually works. It snaps pieces where they seem to fit, following statistical patterns, and the result often holds up. But not always.&lt;/p&gt;

&lt;h3&gt;Hallucinations aren't bugs&lt;/h3&gt;

&lt;p&gt;When a model generates information that looks correct but is factually false, we call it a &lt;strong&gt;hallucination&lt;/strong&gt;. It's tempting to treat this as a defect, something that will be "fixed" in a future version. But hallucinations are a direct consequence of the design: the model snaps pieces where they seem to fit statistically, without checking whether the build makes sense in the real world [4].&lt;/p&gt;

&lt;p&gt;If the statistical pattern of "X wrote the book Y" is strong enough in the training data, the model will assert it even if it's false. It has no internal fact-checker. It doesn't distinguish between generating "Paris is the capital of France" and "Paris is the capital of Italy." Both are plausible token sequences; one is true, the other isn't, and the model doesn't know the difference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5as6dasngjs70uq5whx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5as6dasngjs70uq5whx.png" alt="Limitations: hallucinations and a clean desk" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;The concrete limitations&lt;/h3&gt;

&lt;p&gt;The model's manual was printed on a specific date. Everything that happened after that doesn't exist for it. And worse: at the end of every build, the desk is wiped clean. The next conversation starts from zero, with no pieces from the previous one. If you need the model to remember something, you have to put it back on the desk yourself.&lt;/p&gt;

&lt;p&gt;The good news: with context engineering and harness engineering techniques (topics in Parts 7 and 8 of this series), you can automate what goes on the desk for each conversation. But for now, the important thing is knowing that the model on its own doesn't remember anything.&lt;/p&gt;

&lt;p&gt;Math remains a weak spot. The model can assemble a calculation that looks right but gets the result wrong. For computations that demand precision, it's safer to ask the model to write code that does the math rather than trusting the direct answer.&lt;/p&gt;

&lt;p&gt;And in the context of code, Veracode tests showed that &lt;strong&gt;45% of AI-generated code contains security flaws&lt;/strong&gt;, across evaluations of more than 100 LLMs [7]. The model builds fast, but if nobody checks the construction, poorly fitted pieces end up in the final product.&lt;/p&gt;

&lt;h3&gt;What's getting better&lt;/h3&gt;

&lt;p&gt;But the community isn't standing still. Improvements are coming from multiple fronts at the same time: models that "think step by step" before building, reducing errors on complex tasks. Systems that automatically populate the desk with the right pieces for your project, instead of you placing everything by hand. Agents that remember previous builds and learn from them. And infrastructure that gets faster and cheaper with every generation.&lt;/p&gt;

&lt;p&gt;Each of these fronts will show up in the upcoming articles of this series. For now, the takeaway is: the limitations are real, but they're shrinking.&lt;/p&gt;

&lt;p&gt;Even with these limitations, models are being used at scale. And scale has a cost.&lt;/p&gt;




&lt;h2&gt;How much it costs&lt;/h2&gt;

&lt;p&gt;Every piece costs money. LLMs are billed per token processed, split into two categories: &lt;strong&gt;input tokens&lt;/strong&gt; (everything you send) and &lt;strong&gt;output tokens&lt;/strong&gt; (what the model generates). Building something new (output) always costs more than reading what already exists (input), typically 3x to 5x the price [8]. This makes sense: generating each token requires a full forward pass through the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99kwiqd2yxmsixbat466.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99kwiqd2yxmsixbat466.png" alt="Input vs Output: consulting vs building" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Price table (April 2026)&lt;/h3&gt;

&lt;p&gt;Prices in USD per 1 million tokens (MTok):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Input / MTok&lt;/th&gt;
&lt;th&gt;Output / MTok&lt;/th&gt;
&lt;th&gt;Cache read / MTok&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Opus 4.6&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$5.00&lt;/td&gt;
&lt;td&gt;$25.00&lt;/td&gt;
&lt;td&gt;$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Sonnet 4.6&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$3.00&lt;/td&gt;
&lt;td&gt;$15.00&lt;/td&gt;
&lt;td&gt;$0.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$1.00&lt;/td&gt;
&lt;td&gt;$5.00&lt;/td&gt;
&lt;td&gt;$0.10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPT-5.4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$2.50&lt;/td&gt;
&lt;td&gt;$15.00&lt;/td&gt;
&lt;td&gt;$0.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPT-4.1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$2.00&lt;/td&gt;
&lt;td&gt;$8.00&lt;/td&gt;
&lt;td&gt;$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Pro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$1.25&lt;/td&gt;
&lt;td&gt;$10.00&lt;/td&gt;
&lt;td&gt;$0.125&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Flash&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$0.30&lt;/td&gt;
&lt;td&gt;$2.50&lt;/td&gt;
&lt;td&gt;$0.03&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Notice the "Cache read" column. It'll be important in a moment.&lt;/p&gt;

&lt;h3&gt;
  
  
  The linguistic tax comes full circle
&lt;/h3&gt;

&lt;p&gt;Remember the tokenization cost for Portuguese? It translates directly into money. For the same content, Portuguese applications cost between &lt;strong&gt;30% and 50% more&lt;/strong&gt; in input tokens than the same application in English, depending on the tokenizer. On a $5,000/month bill, that's an extra $1,150 to $1,650 just because of the language.&lt;/p&gt;
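
&lt;p&gt;The arithmetic behind that range, sketched in Python under the simplifying assumption that the whole bill scales with the tokenization premium:&lt;/p&gt;

```python
# Where the $1,150-$1,650 range comes from, assuming the whole bill
# scales with the 30%-50% tokenization premium.

bill_pt = 5_000.0   # monthly bill for the Portuguese workload

for premium in (1.30, 1.50):
    baseline_en = bill_pt / premium   # what the same workload costs in English
    extra = bill_pt - baseline_en     # the linguistic tax, in dollars
    print(f"{premium:.2f}x premium: extra ~${extra:,.0f}/month")

# 1.30x premium: extra ~$1,154/month
# 1.50x premium: extra ~$1,667/month
```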

&lt;p&gt;Throughout this article, one thread runs through three sections: Portuguese consumes more pieces to build the same thing (section 1), that takes up more room on the desk (section 2), and now it costs more (here). These aren't three problems. It's the same problem, in three layers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F170illizdu75s1l83hlw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F170illizdu75s1l83hlw.png" alt="The linguistic tax in three layers" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How to optimize
&lt;/h3&gt;

&lt;p&gt;The good news: there are concrete ways to bring this cost down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt caching&lt;/strong&gt; is the most impactful. Most of the models in the table above price cache reads at roughly 10% of the input price [8] (GPT-4.1 is the outlier at 25%). If your system prompt or reference context repeats across calls, caching can cut input costs by up to &lt;strong&gt;90%&lt;/strong&gt;. This significantly mitigates the Portuguese linguistic tax.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch API&lt;/strong&gt; offers a 50% discount in exchange for asynchronous processing (24-hour window). For tasks that aren't real-time (document analysis, bulk classification), it's easy savings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model selection&lt;/strong&gt; is the third lever. Many tasks running on Opus would work just as well on Haiku, at a fraction of the cost. Testing with the cheapest model first isn't premature optimization. It's responsible engineering.&lt;/p&gt;

&lt;p&gt;Combining batch + caching on Anthropic, the discount can reach &lt;strong&gt;95%&lt;/strong&gt; in ideal scenarios [8].&lt;/p&gt;
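
&lt;p&gt;A quick sketch of how that ceiling composes, assuming a workload where essentially every input token is a cache hit:&lt;/p&gt;

```python
# How the 95% ceiling composes: cache reads at 10% of the input price,
# batch at a further 50% off. Illustrative arithmetic, not an SDK call.

full_price = 1.00                 # normalized input price
cache_read = full_price * 0.10    # prompt caching: pay only 10%
with_batch = cache_read * 0.50    # batch API: half of that again

discount = 1.0 - with_batch
print(f"effective discount: {discount:.0%}")   # effective discount: 95%
```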

&lt;p&gt;This topic will be central in Part 4, when we talk about context engineering. Managing what goes on the desk is, in practice, managing money.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Tokens, desk, attention, temperature, limitations, cost. It might seem like a lot, but everything connects. And none of this is technical trivia. It's the foundation of every decision you'll make with these tools: why one prompt works and another doesn't, why the response cut off midway, why the bill came in higher than expected.&lt;/p&gt;

&lt;p&gt;But there's a gap. Knowing how the pieces work doesn't explain how typing a paragraph into a terminal results in 50 edited files, passing tests, and a ready commit. Something is taking those pieces, arranging them on the desk, building, checking, tearing down what fails, and trying again. Something is turning a next-token engine into a system that actually builds software.&lt;/p&gt;

&lt;p&gt;That something is what tools like Claude Code, Codex CLI, and OpenCode do. They wrap the model, give it tools to act, and orchestrate the cycle of building, checking, and correcting. In the next article, we crack one of them open, piece by piece.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;🤖 This article was written with assistance from Claude (Anthropic).&lt;/p&gt;

&lt;p&gt;Content researched, verified, and edited by a human.&lt;/p&gt;

&lt;p&gt;Found an error or a missing credit? Send me a message!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2305.15425" rel="noopener noreferrer"&gt;Petrov, A. et al. — "Language Model Tokenizers Introduce Unfairness Between Languages" (NeurIPS 2023)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Anthropic — Claude model documentation (2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2307.03172" rel="noopener noreferrer"&gt;Liu, N.F. et al. — "Lost in the Middle: How Language Models Use Long Contexts" (TACL 2024, vol. 12)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/1706.03762" rel="noopener noreferrer"&gt;Vaswani, A. et al. — "Attention Is All You Need" (NeurIPS 2017)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.swebench.com/" rel="noopener noreferrer"&gt;SWE-bench — Princeton NLP (ICLR 2024)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/" rel="noopener noreferrer"&gt;Meta — Llama 4 announcement (2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/" rel="noopener noreferrer"&gt;Veracode — "GenAI and Code Security: What You Need to Know" (2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/pricing" rel="noopener noreferrer"&gt;Anthropic — API Pricing (2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2502.05167" rel="noopener noreferrer"&gt;Kuratov, Y. et al. — "NoLiMa: Long-Context Evaluation Beyond Literal Matching" (ICML 2025)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>claude</category>
    </item>
    <item>
      <title>Claude Code 101: Desmistificando os Modelos de Linguagem</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Thu, 09 Apr 2026 10:50:29 +0000</pubDate>
      <link>https://forem.com/rsicarelli/claude-code-101-desmistificando-os-modelos-de-linguagem-5big</link>
      <guid>https://forem.com/rsicarelli/claude-code-101-desmistificando-os-modelos-de-linguagem-5big</guid>
      <description>&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;O que é um token&lt;/li&gt;
&lt;li&gt;A janela de contexto&lt;/li&gt;
&lt;li&gt;Como o modelo gera texto&lt;/li&gt;
&lt;li&gt;O mecanismo de atenção&lt;/li&gt;
&lt;li&gt;Como o modelo escolhe entre as opções&lt;/li&gt;
&lt;li&gt;As famílias de modelos&lt;/li&gt;
&lt;li&gt;O que modelos não conseguem fazer&lt;/li&gt;
&lt;li&gt;Quanto custa&lt;/li&gt;
&lt;li&gt;Considerações finais&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://dev.to/rsicarelli/claude-code-101-demystifying-language-models-3h8o"&gt;🌐 Read in English&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;No &lt;a href="https://dev.to/rsicarelli/claude-code-101-introducao-a-programacao-agentica-4mk1"&gt;artigo anterior&lt;/a&gt;, montamos a fábrica inteira: a evolução de produção manual pra máquinas autônomas, o ecossistema de ferramentas agênticas, os três pilares (prompt, context e harness engineering). Você sabe o que a fábrica faz, quem trabalha nela e até quanto fatura.&lt;/p&gt;

&lt;p&gt;Mas as máquinas da fábrica constroem coisas. E pra entender como elas constroem, a melhor analogia que eu conheço é LEGO. Peças padronizadas que se encaixam uma por vez, seguindo (ou não) um manual, numa mesa com espaço limitado.&lt;/p&gt;

&lt;p&gt;Este é o segundo artigo da série &lt;strong&gt;Claude Code 101&lt;/strong&gt;, e aqui a gente desmonta essa mecânica. O que são tokens, como funciona a context window, por que modelos geram texto do jeito que geram, e por que eles às vezes erram com uma confiança desconcertante.&lt;/p&gt;




&lt;h2&gt;
  
  
  O que é um token
&lt;/h2&gt;

&lt;p&gt;Computadores não entendem texto. Entendem números. Antes que um modelo de linguagem processe qualquer coisa que você escreveu, cada palavra, espaço e pontuação precisa virar uma sequência de inteiros. Esses inteiros são os &lt;strong&gt;tokens&lt;/strong&gt;: as peças padronizadas com as quais o modelo trabalha.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr2fb07hjmm66rjab36x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr2fb07hjmm66rjab36x.png" alt="O que é um token" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Um token não é necessariamente uma palavra. Pode ser uma palavra inteira ("hello" vira 1 token), um pedaço de palavra ("tokenização" vira vários tokens), um caractere isolado ou até um byte. A regra prática pro inglês: &lt;strong&gt;1 token corresponde a mais ou menos 4 caracteres&lt;/strong&gt;, ou cerca de 3/4 de uma palavra. Pro português, fica mais perto de 1 token pra cada 2.7 a 3 caracteres.&lt;/p&gt;

&lt;h3&gt;
  
  
  Como o vocabulário é construído
&lt;/h3&gt;

&lt;p&gt;A maioria dos LLMs usa um algoritmo chamado &lt;strong&gt;BPE&lt;/strong&gt; (Byte Pair Encoding) pra montar seu vocabulário. A lógica é simples: começa com os 256 valores possíveis de um byte, percorre os bilhões de textos usados pra treinar o modelo, encontra o par de bytes mais frequente, junta num token novo, e repete. O resultado é um vocabulário que varia entre ~100 mil e ~260 mil tokens, dependendo do modelo.&lt;/p&gt;
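
&lt;p&gt;Um esboço mínimo dessa lógica em Python, só pra ilustrar o passo de fusão de pares com um texto de brinquedo; não é o tokenizer real de nenhum modelo:&lt;/p&gt;

```python
# Um passo de BPE em miniatura: acha o par adjacente mais frequente e
# funde num token novo. Exemplo de brinquedo, nada do tokenizer real.
from collections import Counter

def funde_par_mais_frequente(tokens):
    """Funde o par adjacente mais comum da sequencia num token so."""
    pares = Counter(zip(tokens, tokens[1:]))
    if not pares:
        return tokens
    (a, b), _ = pares.most_common(1)[0]
    resultado, pular = [], False
    for esq, prox in zip(tokens, tokens[1:] + [""]):
        if pular:
            pular = False
            continue
        if (esq, prox) == (a, b):
            resultado.append(esq + prox)   # par frequente vira peça única
            pular = True
        else:
            resultado.append(esq)
    return resultado

print(funde_par_mais_frequente(list("banana")))   # ['b', 'an', 'an', 'a']
```

&lt;p&gt;Repetindo essa fusão milhões de vezes sobre o corpus inteiro, os pedaços frequentes vão virando peças cada vez maiores do vocabulário.&lt;/p&gt;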

&lt;p&gt;O detalhe que importa: essa massa de textos é dominada por inglês. Palavras como "the", "and", "great" viram tokens únicos, peças inteiras. Palavras em português são fragmentadas em pedaços menores, como se o kit viesse com peças cortadas ao meio. Compare:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qazzv9nnm0543wn0wql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qazzv9nnm0543wn0wql.png" alt="Tokenização: inglês vs português" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;O caractere "ó" sozinho já vira um token separado porque acentos aparecem pouco nos textos de treinamento. Não é detalhe técnico irrelevante. Afeta diretamente o seu bolso e a capacidade efetiva do modelo quando você trabalha em português.&lt;/p&gt;

&lt;h3&gt;
  
  
  O imposto linguístico do português
&lt;/h3&gt;

&lt;p&gt;Um estudo de Petrov et al. apresentado no NeurIPS 2023 mediu o que eles chamaram de "prêmio de tokenização" entre idiomas [1]. Os números:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tokenizer&lt;/th&gt;
&lt;th&gt;Quanto a mais o português consome vs inglês&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GPT-2 (&lt;code&gt;r50k_base&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;1.94x&lt;/strong&gt; (quase o dobro)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4 (&lt;code&gt;cl100k_base&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;1.48x&lt;/strong&gt; (~50% a mais)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4o (&lt;code&gt;o200k_base&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;~1.3-1.4x&lt;/strong&gt; (melhorou) *&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Os números do GPT-2 e GPT-4 vêm diretamente do estudo de Petrov et al. [1]. A estimativa pro GPT-4o reflete a tendência de melhoria com vocabulários maiores, confirmada por estudos posteriores.&lt;/p&gt;

&lt;p&gt;A boa notícia: cada geração de tokenizer melhora essa disparidade. A notícia que importa: mesmo no melhor caso, português ainda consome pelo menos 30% mais peças que inglês pra construir a mesma coisa. Esse "imposto" vai reaparecer quando falarmos de context window e de custo, porque ele se acumula em cada interação.&lt;/p&gt;




&lt;h2&gt;
  
  
  A janela de contexto
&lt;/h2&gt;

&lt;p&gt;Se tokens são as peças, a context window é a mesa onde você monta. Tamanho fixo. Tudo precisa caber ali: as instruções que você mandou, o histórico da conversa, os arquivos de referência e a resposta que o modelo está construindo. Quando a mesa enche, acabou. O modelo não "lembra" de nada que ficou de fora.&lt;/p&gt;

&lt;h3&gt;
  
  
  As mesas de 2026
&lt;/h3&gt;

&lt;p&gt;O mercado convergiu pra &lt;strong&gt;1 milhão de tokens&lt;/strong&gt; (1M) como padrão nos modelos frontier [2]. Na tabela abaixo, "K" significa mil e "M" significa milhão:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Modelo&lt;/th&gt;
&lt;th&gt;Tamanho da mesa&lt;/th&gt;
&lt;th&gt;Resposta máxima&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Claude Opus 4.6&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;128K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Claude Sonnet 4.6&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;64K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Claude Haiku 4.5&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;200K tokens&lt;/td&gt;
&lt;td&gt;64K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://platform.openai.com/docs/models" rel="noopener noreferrer"&gt;GPT-5.4&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1.05M tokens&lt;/td&gt;
&lt;td&gt;128K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://platform.openai.com/docs/models" rel="noopener noreferrer"&gt;GPT-4.1&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;32K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://ai.google.dev/gemini-api/docs/models" rel="noopener noreferrer"&gt;Gemini 2.5 Pro&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;65K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/" rel="noopener noreferrer"&gt;Llama 4 Scout&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;10M tokens&lt;/td&gt;
&lt;td&gt;varia&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A mesa é compartilhada. Tudo que você manda pro modelo (sua pergunta, arquivos, histórico de conversa) e tudo que ele responde precisam caber juntos na mesma superfície. Uma mesa de 1 milhão de tokens parece enorme, mas o espaço da resposta já reserva uma parte, e o resto é o máximo que você pode mandar.&lt;/p&gt;

&lt;p&gt;Pra ter noção de escala: 1 milhão de tokens equivale a mais ou menos 750 mil palavras em inglês, algo como 8 a 10 livros. Pro português, por conta do imposto de tokenização, cai pra cerca de 500 mil palavras. Uns 7 livros.&lt;/p&gt;

&lt;h3&gt;
  
  
  O tamanho anunciado vs. o tamanho real
&lt;/h3&gt;

&lt;p&gt;Aqui entra um ponto que pouca gente discute. Ter uma mesa de 1 milhão de tokens não significa que o modelo usa bem toda essa superfície.&lt;/p&gt;

&lt;p&gt;Pesquisas recentes mostram que a capacidade do modelo de prestar atenção (attention, um conceito que vamos explorar logo abaixo) cai conforme o contexto cresce, especialmente pra informações posicionadas no meio do texto [3]. O fenômeno tem até nome: &lt;strong&gt;"lost in the middle"&lt;/strong&gt;. Na prática, a atenção efetiva do modelo é significativamente menor que a janela anunciada. O benchmark NoLiMa (ICML 2025) mostrou que a maioria dos LLMs erra mais da metade das vezes quando precisa encontrar uma informação específica em contextos a partir de 32K tokens [9].&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsoe94kpqb01vp156c4x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsoe94kpqb01vp156c4x9.png" alt="A janela de contexto" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;E aqui o imposto linguístico aparece de novo. Se a janela efetiva de um modelo com 200K tokens já é significativamente menor que o anunciado, pra conteúdo 100% em português, planeje com uma margem generosa de desconto. O espaço útil pode cair pra algo entre 80K e 90K tokens de conteúdo equivalente ao inglês. Peças maiores ocupam mais espaço na mesma superfície.&lt;/p&gt;
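
&lt;p&gt;Dá pra transformar esse planejamento numa conta simples. Os fatores abaixo (imposto de 1.35x, reserva pra resposta) são chutes ilustrativos meus, não números oficiais de nenhum provider:&lt;/p&gt;

```python
# Orçamento de contexto de bolso: quantos tokens "equivalentes ao inglês"
# cabem na mesa. O imposto de 1.35x é um chute ilustrativo meu.

def orcamento_util(janela, reserva_resposta, imposto_pt=1.35):
    entrada = janela - reserva_resposta      # espaço que sobra pro input
    return int(entrada / imposto_pt)         # descontado o imposto do PT

# Mesa de 200K com 64K reservados pra resposta, conteúdo em português:
print(orcamento_util(200_000, 64_000))   # 100740
```

&lt;p&gt;E isso antes de aplicar o desconto da atenção efetiva, que come mais uma fatia.&lt;/p&gt;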




&lt;h2&gt;
  
  
  Como o modelo gera texto
&lt;/h2&gt;

&lt;p&gt;Você já sabe quais são as peças (tokens) e o tamanho da mesa (context window). Agora vem o processo de montagem em si.&lt;/p&gt;

&lt;p&gt;A mecânica é surpreendentemente simples. O modelo olha pra tudo que já está na mesa, calcula uma distribuição de probabilidade sobre todo o vocabulário (entre ~100K e ~260K peças possíveis) pra decidir qual encaixa melhor na sequência, coloca uma, e repete. Uma de cada vez, do começo ao fim da resposta. Não existe um plano mestre. Isso se chama &lt;strong&gt;geração autorregressiva&lt;/strong&gt; [4], e é a mecânica central da arquitetura Transformer publicada em 2017 por Vaswani et al.&lt;/p&gt;

&lt;p&gt;Exemplo: o modelo recebe &lt;strong&gt;"O céu está"&lt;/strong&gt; e precisa continuar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetu8i4qrgnii8af2888l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetu8i4qrgnii8af2888l.png" alt="Geração autorregressiva" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
Resultado: &lt;strong&gt;"O céu está azul hoje."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cada peça colocada depende de todas as anteriores: tanto o input original quanto o que o modelo já construiu. Por isso respostas às vezes começam bem e descarrilham no meio. O modelo não sabe onde vai terminar quando começa a gerar.&lt;/p&gt;

&lt;p&gt;Se você leu o &lt;a href="https://dev.to/rsicarelli/cc101-programacao-agentica"&gt;artigo anterior&lt;/a&gt;, pode reconhecer esse mecanismo. Lembra do autocomplete, a fase 1 da evolução? O code completion que sugeria a próxima linha no editor? O mecanismo por baixo é o mesmo: next-token prediction. A diferença é a escala. Modelos como o GPT-2 (2019) tinham 1,5 bilhão de parâmetros e uma mesa minúscula. O Claude Opus 4.6 opera numa escala completamente diferente, com uma janela de contexto mil vezes maior. O processo de montagem é o mesmo. A capacidade de construir coisas complexas é que mudou.&lt;/p&gt;
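
&lt;p&gt;O laço inteiro cabe em poucas linhas. Abaixo, uma tabela fixa faz o papel do modelo (frase e continuações inventadas), só pra mostrar a mecânica de uma peça por vez:&lt;/p&gt;

```python
# Geração autorregressiva de brinquedo: uma tabela fixa faz o papel do
# modelo, devolvendo o próximo token mais provável dado o texto até aqui.

PROXIMO = {
    "O céu está": "azul",
    "O céu está azul": "hoje",
    "O céu está azul hoje": ".",
}

def gerar(prompt, max_tokens=10):
    texto = prompt
    for _ in range(max_tokens):
        token = PROXIMO.get(texto)        # "forward pass" de mentira
        if token is None:                 # nada mais a prever: para
            break
        sep = "" if token == "." else " "
        texto = texto + sep + token       # cada peça depende das anteriores
    return texto

print(gerar("O céu está"))   # O céu está azul hoje.
```

&lt;p&gt;Repare que em nenhum momento existe um plano da frase final: cada passo só olha pro que já está na mesa.&lt;/p&gt;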




&lt;h2&gt;
  
  
  O mecanismo de atenção
&lt;/h2&gt;

&lt;p&gt;O processo de montagem explica que o modelo coloca uma peça por vez. Mas falta entender como ele decide as probabilidades. Se a entrada é "O banco está na margem do rio", como o modelo sabe que "banco" aqui é uma formação de areia e não uma instituição financeira?&lt;/p&gt;

&lt;p&gt;A resposta é o &lt;strong&gt;mecanismo de atenção&lt;/strong&gt; (self-attention), introduzido no paper "Attention Is All You Need" [4]. É o coração da arquitetura &lt;strong&gt;Transformer&lt;/strong&gt; que sustenta todos os LLMs modernos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Como o modelo enxerga o contexto
&lt;/h3&gt;

&lt;p&gt;Imagine que o manual de montagem não mostra só o próximo passo. Pra cada peça nova, ele destaca quais partes da construção importam pra essa decisão: a base brilha forte (sustenta tudo), as torres ao redor acendem (definem o padrão), o jardim do outro lado fica apagado (irrelevante agora). O mecanismo de atenção faz exatamente isso: pra cada token, ele "acende" os anteriores que mais pesam e "apaga" os que não importam.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrc2coanytbovyd4tbv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrc2coanytbovyd4tbv8.png" alt="Mecanismo de atenção" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Antes dos Transformers, era como montar LEGO com alguém ditando as instruções: uma peça por vez, sem repetir, sem voltar atrás. Perdeu o passo 12? Já era. O Transformer é ter o manual inteiro aberto na mesa, todas as páginas visíveis ao mesmo tempo. Essa capacidade de processar tudo em paralelo foi o salto que viabilizou treinar modelos na escala atual.&lt;/p&gt;

&lt;h3&gt;
  
  
  Desambiguação na prática
&lt;/h3&gt;

&lt;p&gt;Volte pro exemplo: "O banco está na margem do rio." O mecanismo de atenção faz com que o token "banco" preste muita atenção nos tokens "margem" e "rio", e pouca atenção em "está" e "na". É como numa montagem onde uma peça azul pode ser céu ou mar dependendo do que está ao redor. O contexto resolve a ambiguidade.&lt;/p&gt;

&lt;p&gt;Esse mecanismo tem um preço. Pra cada peça na mesa, o modelo olha pra todas as outras antes de decidir o encaixe [4]. Numa mesa com 10 peças, tranquilo. Numa mesa com 10 mil, cada decisão exige olhar pra 10 mil peças. Dobrar o tamanho da mesa não dobra o trabalho, quadruplica.&lt;/p&gt;
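
&lt;p&gt;Esse crescimento dá pra conferir com uma conta de padaria:&lt;/p&gt;

```python
# O custo quadrático em números: cada token olha pra todas as posições
# até ele (1 + 2 + ... + n olhadas no total).

def olhadas(n_tokens):
    return n_tokens * (n_tokens + 1) // 2

print(olhadas(10))        # 55
print(olhadas(10_000))    # 50005000: mesa 1000x maior, ~1000000x mais trabalho
print(round(olhadas(20_000) / olhadas(10_000)))   # 4: dobrou a mesa, quadruplicou
```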

&lt;p&gt;Além disso, montar custa mais que olhar. Quando você manda uma pergunta, o modelo lê tudo de uma vez, como abrir o manual numa página. Mas quando ele constrói a resposta, é uma peça por vez, cada uma exigindo uma olhada na mesa inteira. Por isso o preço por token de resposta é 3x a 5x maior que o de entrada.&lt;/p&gt;

&lt;h3&gt;
  
  
  O que isso significa pra você
&lt;/h3&gt;

&lt;p&gt;Se a atenção funciona ponderando a relevância de cada token em relação aos outros, prompts claros e bem estruturados facilitam o trabalho do modelo. Ambiguidade no input produz "confusão" na atenção: o modelo precisa distribuir pesos entre interpretações concorrentes. Um prompt preciso é como código limpo: a intenção fica óbvia e o mecanismo de atenção foca no que importa. Quanto mais organizada a mesa, mais precisa a próxima peça.&lt;/p&gt;

&lt;p&gt;Isso não é abstração. É a base técnica de por que prompt engineering funciona, algo que vamos explorar a fundo na Parte 6 desta série.&lt;/p&gt;




&lt;h2&gt;
  
  
  Como o modelo escolhe entre as opções
&lt;/h2&gt;

&lt;p&gt;Você já entende como o modelo pesa as opções. Mas quando vários tokens têm probabilidades próximas, quem decide qual é escolhido?&lt;/p&gt;

&lt;p&gt;A diferença está entre seguir o manual ao pé da letra ou improvisar.&lt;/p&gt;

&lt;h3&gt;
  
  
  Temperatura
&lt;/h3&gt;

&lt;p&gt;Você tá montando uma parede azul e precisa da próxima peça. Na caixa, as peças estão organizadas: no topo ficam as azuis que encaixam perfeitamente, no meio aparecem umas verdes que até funcionariam como detalhe, e lá no fundo tem uma roda vermelha que não faz sentido nenhum.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;temperatura&lt;/strong&gt; controla o quão fundo o modelo enfia a mão na caixa. No zero, sempre pega a peça do topo: mesma escolha, toda vez, sem surpresa. No 0.7, às vezes pesca uma verde que ninguém esperava, mas que dá um charme na construção. Acima de 1.0, puxa a roda vermelha e encaixa na parede mesmo assim.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Temperatura&lt;/th&gt;
&lt;th&gt;Completando "A receita leva..."&lt;/th&gt;
&lt;th&gt;Comportamento&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;"farinha, açúcar e cacau."&lt;/td&gt;
&lt;td&gt;Sempre a mesma resposta&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.3&lt;/td&gt;
&lt;td&gt;"farinha de trigo, óleo de coco e cacau."&lt;/td&gt;
&lt;td&gt;Pequenas variações&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;"especiarias exóticas e um toque de limão siciliano."&lt;/td&gt;
&lt;td&gt;Criativo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.5&lt;/td&gt;
&lt;td&gt;"sonhos derretidos em caramelo de dragão."&lt;/td&gt;
&lt;td&gt;Incoerente&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2hk8uxjy1l3ob3nf01c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2hk8uxjy1l3ob3nf01c.png" alt="Temperatura: o quão fundo na caixa" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Top-p
&lt;/h3&gt;

&lt;p&gt;Temperatura não é o único controle. O &lt;strong&gt;top-p&lt;/strong&gt; funciona diferente: em vez de mudar o quão fundo o modelo vai na caixa, ele tira peças da caixa antes da escolha. Com top-p de 0.9, só ficam disponíveis os tokens mais prováveis que, somados, cobrem 90% da probabilidade; todo o resto sai da caixa antes do sorteio. O efeito é parecido com o da temperatura, e a recomendação dos providers é ajustar um ou outro, não os dois ao mesmo tempo.&lt;/p&gt;
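
&lt;p&gt;Um esboço das duas alavancas em Python, sobre uma distribuição de brinquedo (os logits são inventados; a implementação real dos providers pode diferir nos detalhes):&lt;/p&gt;

```python
# As duas alavancas sobre uma distribuição de brinquedo. Os logits são
# inventados; implementações reais podem variar nos detalhes.
import math

def softmax_com_temperatura(logits, temperatura):
    escala = [v / temperatura for v in logits]
    m = max(escala)
    exps = [math.exp(v - m) for v in escala]
    total = sum(exps)
    return [e / total for e in exps]

def filtro_top_p(tokens, probs, p=0.9):
    # mantém os mais prováveis até a soma passar de p; o resto sai da caixa
    ordem = sorted(zip(tokens, probs), key=lambda par: -par[1])
    mantidos, acumulado = [], 0.0
    for tok, pr in ordem:
        mantidos.append(tok)
        acumulado += pr
        if acumulado >= p:
            break
    return mantidos

tokens = ["azul", "cinzento", "limpo", "vermelho"]
logits = [4.0, 2.0, 1.5, 0.1]

frio = softmax_com_temperatura(logits, 0.1)    # quase toda a massa em "azul"
quente = softmax_com_temperatura(logits, 1.5)  # massa espalhada entre as opções
print(filtro_top_p(tokens, softmax_com_temperatura(logits, 1.0)))
# ['azul', 'cinzento']
```

&lt;p&gt;Temperatura baixa concentra a massa na peça do topo; top-p corta a cauda antes de qualquer sorteio acontecer.&lt;/p&gt;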

&lt;h3&gt;
  
  
  O que isso significa na prática
&lt;/h3&gt;

&lt;p&gt;Quando você começar a usar ferramentas agênticas pra escrever código (a partir da Parte 5 desta série), esses valores já vêm calibrados. Mas entender que eles existem ajuda a entender por que o modelo às vezes surpreende com uma resposta inesperada: alguém deixou a mão mais funda na caixa.&lt;/p&gt;

&lt;p&gt;Pro que nos interessa nesta série, a regra é simples: pra código, o modelo trabalha melhor no modo previsível. Peça certa no lugar certo, sem improviso.&lt;/p&gt;




&lt;h2&gt;
  
  
  As famílias de modelos
&lt;/h2&gt;

&lt;p&gt;Nem toda peça serve pra toda construção. LEGO Duplo (peças grandes, simples) é perfeito pra quem está começando, mas não dá pra construir um motor funcional com ele. LEGO Technic (engrenagens, eixos, complexidade) permite construções sofisticadas, mas é mais caro e exige mais tempo. A mesma lógica vale pra modelos de linguagem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modelos de raciocínio
&lt;/h3&gt;

&lt;p&gt;Os kits Technic: "pensam antes de responder", gastando tokens internos em raciocínio passo a passo (extended thinking) antes de produzir a resposta final. São os mais capazes, mais lentos e mais caros. Os preços abaixo estão em dólares por MTok (1 milhão de tokens): o primeiro valor é o custo de entrada (o que você manda), o segundo é o de resposta (o que o modelo gera).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Opus 4.6&lt;/strong&gt; (Anthropic) alcança 80.8% no SWE-bench Verified [5]. $5/$25 por MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;o3 / o3-pro&lt;/strong&gt; (OpenAI) são modelos de raciocínio dedicados. O o3-pro custa $20/$80 por MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Pro&lt;/strong&gt; (Google) permite configurar o "orçamento de raciocínio". $1.25/$10 por MTok.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Modelos rápidos
&lt;/h3&gt;

&lt;p&gt;Os kits Duplo: peças maiores, encaixe rápido, resultado imediato. Projetados pra latência baixa e alto volume.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt; (Anthropic): $1/$5 por MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-4o-mini&lt;/strong&gt; (OpenAI): $0.15/$0.60 por MTok.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Flash&lt;/strong&gt; (Google): $0.30/$2.50 por MTok.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Modelos de código
&lt;/h3&gt;

&lt;p&gt;Kits otimizados pra construções específicas. Como a linha Creator Expert, onde cada set é projetado pra um resultado particular.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPT-4.1&lt;/strong&gt; (OpenAI): 1M de contexto, $2/$8 por MTok. Explicitamente otimizado pra código.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;&lt;/strong&gt; (Anthropic): usa Opus/Sonnet por baixo, mas com um harness inteiro de ferramentas pra ler, editar, executar e fazer commit.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Modelos open-weight
&lt;/h3&gt;

&lt;p&gt;Kits com todas as peças expostas: você monta, desmonta e adapta como quiser.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Llama 4 Scout&lt;/strong&gt; (Meta): 10M de contexto, open-weight [6].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Llama 4 Maverick&lt;/strong&gt; (Meta): 1M de contexto, 17B parâmetros ativos de 400B total.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeepSeek R1&lt;/strong&gt;: open-source (MIT), 671B parâmetros, raciocínio forte.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Escolhendo o kit certo
&lt;/h3&gt;

&lt;p&gt;Usar Opus pra classificar sentimento de tweets é como comprar um Technic de 4 mil peças pra construir um cubo. A diferença entre Haiku ($1/$5 por MTok) e Opus ($5/$25 por MTok) é &lt;strong&gt;5x no input e 5x no output&lt;/strong&gt;. Muita tarefa que parece "precisar" de um modelo grande funciona perfeitamente com um modelo menor.&lt;/p&gt;

&lt;p&gt;A regra prática: comece sempre pelo modelo mais barato que pode funcionar. Teste com Haiku, Flash ou mini. Se a qualidade não for suficiente, suba. Opus e o3-pro ficam reservados pra quando realmente necessário.&lt;/p&gt;
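
&lt;p&gt;A regra vira conta rápida. Abaixo, uma carga hipotética de 100M tokens de entrada e 20M de saída por mês, com os preços por MTok citados neste artigo:&lt;/p&gt;

```python
# Mesma carga mensal em dois kits, com os preços por MTok deste artigo.
# A carga (100M de entrada, 20M de saída por mês) é hipotética.

def custo_mensal(mtok_in, mtok_out, preco_in, preco_out):
    return mtok_in * preco_in + mtok_out * preco_out

haiku = custo_mensal(100, 20, preco_in=1.0, preco_out=5.0)    # 200.0 USD
opus  = custo_mensal(100, 20, preco_in=5.0, preco_out=25.0)   # 1000.0 USD
print(f"Opus custa {opus / haiku:.0f}x o Haiku")   # Opus custa 5x o Haiku
```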

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2y2g5d8ccc1wu47caih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2y2g5d8ccc1wu47caih.png" alt="Famílias de modelos" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mas independente do kit que você escolher, todos compartilham as mesmas limitações fundamentais.&lt;/p&gt;




&lt;h2&gt;
  
  
  O que modelos não conseguem fazer
&lt;/h2&gt;

&lt;p&gt;A construção pode parecer perfeita. Visualmente impecável, cada peça no lugar. Mas empurre a parede e ela cai. O modelo não "sabe" se o que construiu funciona. Ele encaixa peças onde elas parecem caber, seguindo padrões estatísticos, e o resultado frequentemente se sustenta. Mas nem sempre.&lt;/p&gt;

&lt;h3&gt;
  
  
  Alucinações não são bugs
&lt;/h3&gt;

&lt;p&gt;Quando um modelo gera informação que parece correta mas é factualmente falsa, chamamos de &lt;strong&gt;alucinação&lt;/strong&gt; (hallucination). É tentador tratar isso como defeito, algo que será "consertado" numa versão futura. Mas alucinações são uma consequência direta do design: o modelo encaixa peças onde elas parecem caber estatisticamente, sem checar se a construção faz sentido no mundo real [4].&lt;/p&gt;

&lt;p&gt;If the statistical pattern of "X wrote book Y" is strong enough in the training data, the model will assert it even when it's false. It has no internal fact-checker. It doesn't distinguish between generating "Paris is the capital of France" and "Paris is the capital of Italy". Both are plausible token sequences; one is true, the other isn't, and the model doesn't know the difference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55qbcwl4wxf0xienabtm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55qbcwl4wxf0xienabtm.png" alt="Limitações: alucinações e mesa limpa" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The concrete limitations
&lt;/h3&gt;

&lt;p&gt;The model's manual was printed on a specific date. Anything that happened after that doesn't exist for it. Worse: at the end of each build, the table is cleared. The next conversation starts from zero, without a single piece from the previous one. If you need the model to remember something, you have to put it back on the table yourself.&lt;/p&gt;

&lt;p&gt;The good news: with Context Engineering and Harness Engineering techniques (the topics of Parts 7 and 8 of this series), you can automate what goes on the table for each conversation. For now, the important thing is knowing that the model, on its own, remembers nothing.&lt;/p&gt;

&lt;p&gt;Math remains a weak spot. The model can assemble a calculation that looks right and still get the result wrong. For computations that demand precision, it's safer to ask the model to write code that does the math than to trust its direct answer.&lt;/p&gt;
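
&lt;p&gt;To make that advice concrete: instead of trusting arithmetic the model performs in prose, have it emit code and run the code. A tiny sketch (the operands here are arbitrary):&lt;/p&gt;

```python
# Arithmetic a model might fumble token-by-token in prose is exact
# when delegated to code. Python integers have arbitrary precision.
a, b = 1234567, 7654321
print(a * b)  # 9449772114007
```

&lt;p&gt;The model is good at writing this kind of snippet; the interpreter is good at getting the number right. Use each for what it's good at.&lt;/p&gt;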

&lt;p&gt;And in the context of code, Veracode testing showed that &lt;strong&gt;45% of AI-generated code contains security flaws&lt;/strong&gt;, across evaluations of more than 100 LLMs [7]. The model builds fast, but if nobody inspects the build, badly fitted pieces end up in the final product.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's improving
&lt;/h3&gt;

&lt;p&gt;But the community isn't standing still. Improvements are coming on several fronts at once: models that "think step by step" before building, reducing errors on complex tasks. Systems that automatically fill the table with the right pieces for your project, instead of you placing everything by hand. Agents that remember previous builds and learn from them. And infrastructure that gets faster and cheaper with each generation.&lt;/p&gt;

&lt;p&gt;Each of these fronts will show up in the upcoming articles of this series. For now, the takeaway is: the limitations are real, but they're shrinking.&lt;/p&gt;

&lt;p&gt;Even with these limitations, models are being used at scale. And scale has a cost.&lt;/p&gt;




&lt;h2&gt;
  
  
  How much it costs
&lt;/h2&gt;

&lt;p&gt;Every piece costs money. LLMs are billed per token processed, split into two categories: &lt;strong&gt;input tokens&lt;/strong&gt; (everything you send) and &lt;strong&gt;output tokens&lt;/strong&gt; (everything the model generates). Building something new (output) always costs more than consulting what already exists (input), usually between 3x and 5x the price [8]. That makes sense: generating each token requires a full forward pass through the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff79ek4ffbgmlqipw59z5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff79ek4ffbgmlqipw59z5.png" alt="Input vs Output: consultar vs construir" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Price table (April 2026)
&lt;/h3&gt;

&lt;p&gt;Prices in USD per 1 million tokens (MTok):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Input / MTok&lt;/th&gt;
&lt;th&gt;Output / MTok&lt;/th&gt;
&lt;th&gt;Cache read / MTok&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Opus 4.6&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$5.00&lt;/td&gt;
&lt;td&gt;$25.00&lt;/td&gt;
&lt;td&gt;$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Sonnet 4.6&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$3.00&lt;/td&gt;
&lt;td&gt;$15.00&lt;/td&gt;
&lt;td&gt;$0.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$1.00&lt;/td&gt;
&lt;td&gt;$5.00&lt;/td&gt;
&lt;td&gt;$0.10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPT-5.4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$2.50&lt;/td&gt;
&lt;td&gt;$15.00&lt;/td&gt;
&lt;td&gt;$0.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPT-4.1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$2.00&lt;/td&gt;
&lt;td&gt;$8.00&lt;/td&gt;
&lt;td&gt;$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Pro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$1.25&lt;/td&gt;
&lt;td&gt;$10.00&lt;/td&gt;
&lt;td&gt;$0.125&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Flash&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$0.30&lt;/td&gt;
&lt;td&gt;$2.50&lt;/td&gt;
&lt;td&gt;$0.03&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Note the "Cache read" column. It will become important in a moment.&lt;/p&gt;
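
&lt;p&gt;To see how a bill falls out of this table, here is a minimal cost sketch using the Claude Sonnet 4.6 row; the token counts are made up for illustration:&lt;/p&gt;

```python
# Cost of one request at the Claude Sonnet 4.6 rates from the table:
# $3.00 input / $15.00 output / $0.30 cache read, per million tokens (MTok).
PRICES = {"input": 3.00, "output": 15.00, "cache_read": 0.30}  # USD per MTok

def request_cost(input_tokens, output_tokens, cached_tokens=0):
    fresh = input_tokens - cached_tokens  # input not served from cache
    return (fresh * PRICES["input"]
            + cached_tokens * PRICES["cache_read"]
            + output_tokens * PRICES["output"]) / 1_000_000

# 10k-token prompt, 2k-token reply, no cache:
print(round(request_cost(10_000, 2_000), 4))  # 0.06
# Same request, but 8k of the prompt served from cache:
print(round(request_cost(10_000, 2_000, cached_tokens=8_000), 4))  # 0.0384
```

&lt;p&gt;Two things jump out: the 2k tokens of output cost as much as the 10k of input, and caching most of the prompt cuts the total by more than a third. That's the "Cache read" column at work.&lt;/p&gt;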

&lt;h3&gt;
  
  
  The linguistic tax closes the loop
&lt;/h3&gt;

&lt;p&gt;Remember the tokenization cost of Portuguese? It translates directly into money. For the same content, applications in Portuguese cost between &lt;strong&gt;30% and 50% more&lt;/strong&gt; in input tokens than the same application in English, depending on the tokenizer. On a $5,000/month bill, that's an extra $1,150 to $1,650 just because of the language.&lt;/p&gt;

&lt;p&gt;Throughout this article, one thread connects three sections: Portuguese uses more pieces to build the same thing (section 1), those pieces take up more space on the table (section 2), and now they cost more money (here). These aren't three problems. It's the same problem, in three layers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft68d3oevxdmvho5yzlzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft68d3oevxdmvho5yzlzp.png" alt="O imposto linguístico em três camadas" width="800" height="1192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How to optimize
&lt;/h3&gt;

&lt;p&gt;The good news: there are concrete ways to reduce this cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt caching&lt;/strong&gt; is the most impactful. All the major providers offer cache reads at around 10% of the input price [8]. If your system prompt or reference context repeats across calls, caching can cut input cost by up to &lt;strong&gt;90%&lt;/strong&gt;. That significantly softens the Portuguese linguistic tax.&lt;/p&gt;
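
&lt;p&gt;As a sketch of what this looks like with the Anthropic Messages API: the large, repeated system block carries a cache_control marker, and later calls that reuse the identical prefix are billed at the cache-read rate. The model id and context string below are illustrative:&lt;/p&gt;

```python
# Shape of a Messages API request with prompt caching: the system block
# marked with cache_control is cached, so subsequent identical prefixes
# are read from cache at ~10% of the input price.
LONG_REFERENCE_CONTEXT = "…thousands of tokens of docs that repeat every call…"

def build_request(user_message: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",  # illustrative model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LONG_REFERENCE_CONTEXT,
                # everything up to and including this block becomes cacheable:
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("Summarize section 3.")
print(req["system"][0]["cache_control"])  # {'type': 'ephemeral'}
```

&lt;p&gt;The first call writes the block to cache (at a small premium over the input price); every later call with the same prefix pays the cache-read rate instead, and the response's usage object reports the cached vs. fresh token split.&lt;/p&gt;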

&lt;p&gt;The &lt;strong&gt;Batch API&lt;/strong&gt; offers a 50% discount in exchange for asynchronous processing (a 24-hour window). For tasks that aren't real-time (document analysis, bulk classification), it's easy money to save.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model selection&lt;/strong&gt; is the third lever. Many tasks running on Opus would do just as well on Haiku, at a fraction of the cost. Testing with the cheapest model first isn't premature optimization. It's responsible engineering.&lt;/p&gt;
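
&lt;p&gt;A hypothetical "cheapest first" cascade might look like the sketch below; call_model and passes_quality_check are placeholders for your API call and your own task-specific evaluation, and the model ids are illustrative:&lt;/p&gt;

```python
# Cheapest-first cascade: try the inexpensive model and escalate only
# when a task-specific quality check fails. All names are placeholders.
MODELS = ["claude-haiku-4-5", "claude-sonnet-4-5", "claude-opus-4-5"]  # cheap → expensive

def solve(task, call_model, passes_quality_check):
    answer = None
    for model in MODELS:
        answer = call_model(model, task)
        if passes_quality_check(task, answer):
            return model, answer       # stop at the cheapest model that works
    return MODELS[-1], answer          # fall back to the strongest model's answer

# Toy demo: pretend only the mid-tier model is good enough for this task.
model, _ = solve(
    "classify sentiment",
    call_model=lambda m, t: f"{m}: done",
    passes_quality_check=lambda t, a: "sonnet" in a,
)
print(model)  # claude-sonnet-4-5
```

&lt;p&gt;The design choice that matters is the quality check: with a cheap, automatic way to tell "good enough" from "not yet", escalation becomes data-driven instead of a habit of always reaching for the biggest model.&lt;/p&gt;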

&lt;p&gt;Combining batch + caching at Anthropic, the discount can reach &lt;strong&gt;95%&lt;/strong&gt; in ideal scenarios [8].&lt;/p&gt;
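
&lt;p&gt;One way to see where a figure like 95% can come from, assuming both discounts stack on cached input tokens:&lt;/p&gt;

```python
# Batch halves the price; cache reads cost ~10% of the input rate.
# Stacked, cached batch input can cost 0.5 * 0.1 = 5% of list price.
batch_multiplier = 0.5       # Batch API: 50% off
cache_read_multiplier = 0.1  # cache reads: ~10% of the input price
effective = batch_multiplier * cache_read_multiplier
print(f"{1 - effective:.0%}")  # 95%
```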

&lt;p&gt;This theme will be central in Part 4, when we cover context engineering. Managing what goes on the table is, in practice, managing money.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Tokens, the table, attention, temperature, limitations, cost. It may seem like a lot, but it all connects. And none of it is technical trivia. It's the foundation of every decision you'll make with these tools: why one prompt works and another doesn't, why the response cut off midway, why the bill came in higher than expected.&lt;/p&gt;

&lt;p&gt;But there's a gap. Knowing how the pieces work doesn't explain how typing a paragraph into the terminal turns into 50 edited files, passing tests, and a ready-to-go commit. Something is picking up those pieces, arranging them on the table, building, checking, tearing things down when it's wrong, and trying again. Something is turning a next-token engine into a system that actually builds software.&lt;/p&gt;

&lt;p&gt;That something is what tools like Claude Code, Codex CLI, and OpenCode do. They wrap the model, hand it tools so it can act, and orchestrate the build-check-fix cycle. In the next article, we open one of them up, piece by piece.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;🤖 This article was written with assistance from Claude (Anthropic).&lt;/p&gt;

&lt;p&gt;Content researched, verified, and edited by a human.&lt;/p&gt;

&lt;p&gt;Found an error or a missing credit? Send me a message!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2305.15425" rel="noopener noreferrer"&gt;Petrov, A. et al. — "Language Model Tokenizers Introduce Unfairness Between Languages" (NeurIPS 2023)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.anthropic.com/en/docs/about-claude/models" rel="noopener noreferrer"&gt;Anthropic — Claude model documentation (2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2307.03172" rel="noopener noreferrer"&gt;Liu, N.F. et al. — "Lost in the Middle: How Language Models Use Long Contexts" (TACL 2024, vol. 12)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/1706.03762" rel="noopener noreferrer"&gt;Vaswani, A. et al. — "Attention Is All You Need" (NeurIPS 2017)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.swebench.com/" rel="noopener noreferrer"&gt;SWE-bench — Princeton NLP (ICLR 2024)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/" rel="noopener noreferrer"&gt;Meta — Llama 4 announcement (2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/" rel="noopener noreferrer"&gt;Veracode — "GenAI and Code Security: What You Need to Know" (2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/pricing" rel="noopener noreferrer"&gt;Anthropic — API Pricing (2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2502.05167" rel="noopener noreferrer"&gt;Kuratov, Y. et al. — "NoLiMa: Long-Context Evaluation Beyond Literal Matching" (ICML 2025)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>braziliandevs</category>
    </item>
    <item>
      <title>Claude Code 101: Introduction to Agentic Programming</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Mon, 06 Apr 2026 10:00:39 +0000</pubDate>
      <link>https://forem.com/rsicarelli/claude-code-101-introduction-to-agentic-programming-3p83</link>
      <guid>https://forem.com/rsicarelli/claude-code-101-introduction-to-agentic-programming-3p83</guid>
      <description>&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Software development as we know it&lt;/li&gt;
&lt;li&gt;The era of AI assistance&lt;/li&gt;
&lt;li&gt;From assistant to agent&lt;/li&gt;
&lt;li&gt;The agentic tooling ecosystem&lt;/li&gt;
&lt;li&gt;Agentic programming: the paradigm shift&lt;/li&gt;
&lt;li&gt;Final thoughts&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;🇧🇷 &lt;a href="https://dev.to/rsicarelli/claude-code-101-introducao-a-programacao-agentica-4mk1"&gt;Leia em português&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;September 2025. I was leading a critical dependency upgrade on a mobile app with millions of users. The kind of change that breaks tests in a cascade. The deadline was October: if it wasn't ready, the app wouldn't ship to the store.&lt;/p&gt;

&lt;p&gt;The problem: nearly 10,000 tests needed to be adapted for the new version. Code owned by over 20 teams, spread across hundreds of modules.&lt;/p&gt;

&lt;p&gt;I thought "what if I give these AI tools everyone keeps talking about a real shot?" After watching a few videos and reading the docs, I spun up four terminals running Claude Code in parallel, each one migrating a slice of the tests. One week later: 2,000+ files changed, 50,000 lines of code, 85% migrated on the first pass. The following week, I cleaned up the rest.&lt;/p&gt;

&lt;p&gt;When I merged everything with the owning teams' approval, one thing was clear: I &lt;strong&gt;never&lt;/strong&gt; could have done that alone. Not in two weeks, maybe not even in two months. It was cognitively impossible.&lt;/p&gt;

&lt;p&gt;That's when I understood something had genuinely changed: my role had shifted, and I needed to understand how this works under the hood.&lt;/p&gt;

&lt;p&gt;This is the first article in the &lt;strong&gt;Claude Code 101&lt;/strong&gt; series. Throughout it, we'll break down what's behind &lt;strong&gt;agentic programming&lt;/strong&gt;: where it came from, how it works, what tools exist, and what you need to learn to use this for real. Together we'll build a mental model (the &lt;strong&gt;factory analogy&lt;/strong&gt;) and meet the three pillars that hold everything up: prompt engineering, context engineering, and harness engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  Software development as we know it
&lt;/h2&gt;

&lt;p&gt;The flow every dev knows: read requirements, design a solution, write code, run tests, fix bugs, deploy. Nothing happens unless you're actively involved. You're the one reading the docs, the one searching the (late) Stack Overflow, the one keeping the project's context in your head, the one typing every character.&lt;/p&gt;

&lt;p&gt;Software only moves forward when you're sitting in front of the screen, working on it. Every line of code is a product assembled by hand, and the pace of production depends on how fast your fingers move and how much your memory can hold.&lt;/p&gt;

&lt;h3&gt;
  
  
  We are the bottleneck
&lt;/h3&gt;

&lt;p&gt;Cognitive science has some uncomfortable data for us. Our working memory holds roughly 7 items at a time [1]. After an interruption (that tap on the shoulder, that Slack message) it takes an average of 23 minutes to regain focus [2]. And if you actually measure it, you'll find that you spend only about 30% of your time effectively writing code. The rest is reading, navigating, debugging, and meetings.&lt;/p&gt;

&lt;p&gt;The bottleneck isn't just the compiler. It's not CI. It's not the server. The bottleneck is us.&lt;/p&gt;

&lt;p&gt;For decades, we accepted this as the natural cost of building software. The factory produces at the speed of whoever operates it. Period. Until artificial intelligence started changing that equation.&lt;/p&gt;




&lt;h2&gt;
  
  
  The era of AI assistance
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Intelligent autocomplete (2021-2022)
&lt;/h3&gt;

&lt;p&gt;In June 2021, GitHub launched the technical preview of &lt;strong&gt;Copilot&lt;/strong&gt;, which suggested lines of code in real time inside the editor [3]. Along with &lt;a href="https://www.tabnine.com/" rel="noopener noreferrer"&gt;TabNine&lt;/a&gt; and similar tools, the code completion paradigm was born: single- or multi-line suggestions based on the current file's context.&lt;/p&gt;

&lt;p&gt;Did it change things? Sure. Less typing, boilerplate filled in automatically, suggestions that often nailed what you were about to write. But the role of the developer didn't change one bit. You still decide what to write, where to write it, when to run tests, how to fix errors. The AI suggests; you execute.&lt;/p&gt;

&lt;p&gt;Think of the factory: the production line got a conveyor belt to move parts faster. Useful? Absolutely. But you're still assembling everything by hand.&lt;/p&gt;

&lt;h3&gt;
  
  
  The chat paradigm (2022-2023)
&lt;/h3&gt;

&lt;p&gt;November 2022: &lt;a href="https://openai.com/index/chatgpt/" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt; launches and changes how devs seek help [4]. The pattern quickly became familiar: copy a code snippet, paste it into the chat, describe the problem, get a suggestion, copy it back to the editor. &lt;a href="https://www.langchain.com/" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;, launched a month earlier, was already experimenting with connecting LLMs to external tools. A precursor of what was to come.&lt;/p&gt;

&lt;p&gt;Chat was more versatile than autocomplete: it explained concepts, suggested refactors, generated tests. But the limitations were fundamental. No access to project files, no awareness of the folder structure, no ability to run commands. You became a messenger between the AI and your code.&lt;/p&gt;

&lt;p&gt;It's like having a really good consultant sitting next to you. Someone who reads any manual in seconds and has an answer for almost anything. Except this consultant can't touch the machines. The manual work is still on you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhu3n4v00t6zkej41k05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhu3n4v00t6zkej41k05.png" alt="Autocomplete vs Chat" width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why assistance wasn't enough
&lt;/h3&gt;

&lt;p&gt;At the end of the day, both paradigms hit the same wall: &lt;strong&gt;the AI is not agentic&lt;/strong&gt; (meaning it can't act on its own). It doesn't read your project. It doesn't run commands. It doesn't execute tests. It doesn't iterate on errors.&lt;/p&gt;

&lt;p&gt;The conveyor belt and the consultant help, but nobody operates the machines for you. When you leave, production stops.&lt;/p&gt;

&lt;p&gt;In 2024, the natural question came: &lt;strong&gt;what if AI could do more than suggest?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  From Assistant to Agent
&lt;/h2&gt;

&lt;p&gt;Looking back, the evolution of AI tools for code follows four distinct phases. Each phase is like an upgrade to the factory: first comes the conveyor belt, then the consultant, then better machines, until you reach ones that operate on their own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo4nueem1nsdt16vdc4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo4nueem1nsdt16vdc4k.png" alt="The Evolution of AI-Assisted Coding" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've already covered phases 1 and 2. The real leap starts at phase 3.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 3: Multi-file editing (2024)
&lt;/h3&gt;

&lt;p&gt;In 2024, &lt;strong&gt;&lt;a href="https://www.cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;&lt;/strong&gt; (an editor built as a VS Code fork) popularized a new category: multi-file editing via natural language. You describe what you want; the AI proposes coordinated changes across multiple files at once.&lt;/p&gt;

&lt;p&gt;The growth speaks for itself: from &lt;strong&gt;$1 million in annual revenue in January 2024&lt;/strong&gt; to &lt;strong&gt;over $1 billion by November 2025&lt;/strong&gt; [5]. Devs wanted more than line-by-line suggestions.&lt;/p&gt;

&lt;p&gt;But the fundamental model didn't change: the human orchestrates, the AI executes. You say what to change, the AI changes it, you verify. The machines got more sophisticated, yes, but you're still operating every control panel, step by step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 4: Agentic coding (2024-present)
&lt;/h3&gt;

&lt;p&gt;This is where the paradigm flips. Instead of you orchestrating the AI, &lt;strong&gt;you set the goal and the AI orchestrates itself&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This shift didn't happen out of nowhere. A sequence of advances made it possible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Function calling&lt;/strong&gt; (OpenAI, June 2023). For the first time, models could invoke external tools. This is the technical prerequisite for any agentic behavior [6].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Andrew Ng coins "agentic"&lt;/strong&gt; (late 2023). He chose an adjective on purpose, not a noun: &lt;em&gt;"Unlike the noun 'agent,' the adjective 'agentic' lets us think of systems as being more or less agent-like, in varying degrees."&lt;/em&gt; [7]
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Context Protocol&lt;/strong&gt; (Anthropic, November 2024). The open standard for connecting agents to external tools, rapidly adopted across the industry [8].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Building Effective Agents"&lt;/strong&gt; (Anthropic, December 2024). The paper that became the most cited reference in the field, distinguishing &lt;strong&gt;workflows&lt;/strong&gt; (predefined paths) from &lt;strong&gt;agents&lt;/strong&gt; (dynamic, self-directed processes) [9].&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes something "agentic"? Five things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Autonomy&lt;/strong&gt;: decides what to do without step-by-step guidance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool use&lt;/strong&gt;: reads and writes files, executes commands, runs tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planning&lt;/strong&gt;: breaks goals into subtasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loop reasoning&lt;/strong&gt;: iterates, doesn't just respond once&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-correction&lt;/strong&gt;: makes mistakes, notices, adjusts, and tries again&lt;/li&gt;
&lt;/ol&gt;
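
&lt;p&gt;The five properties above can be compressed into a toy loop. This is a sketch of the shape, not any real tool's implementation; next_action and run_tool are placeholders standing in for the model's decision and its tool executions:&lt;/p&gt;

```python
# Minimal agentic loop: plan the next step, act with a tool, observe the
# result, feed it back, and repeat until the goal is met (or steps run out).
def agentic_loop(goal, next_action, run_tool, max_steps=10):
    history = [("goal", goal)]
    for _ in range(max_steps):
        action = next_action(history)    # planning: pick the next step
        if action == "done":
            return history
        result = run_tool(action)        # tool use: edit files, run tests…
        history.append((action, result)) # observation informs the next turn

    return history                       # step budget exhausted

# Toy run: the "agent" declares done once it observes a passing test.
trace = agentic_loop(
    "make tests pass",
    next_action=lambda h: "done" if any(r == "pass" for _, r in h) else "run_tests",
    run_tool=lambda a: "pass" if a == "run_tests" else "fail",
)
print(trace[-1])  # ('run_tests', 'pass')
```

&lt;p&gt;Self-correction lives in that feedback edge: a failing result lands in the history, and the next planning step can react to it instead of responding once and stopping.&lt;/p&gt;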

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza0cklyl5mjrbob72oq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza0cklyl5mjrbob72oq7.png" alt="The agentic loop" width="800" height="1226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where the factory analogy changes completely. You're no longer on the production line tightening bolts and moving parts. Now you &lt;strong&gt;design&lt;/strong&gt; the factory. You program the machines, set up quality controls, oversee production. The agents execute, report problems, and self-correct. Your productivity is no longer limited by the speed of your hands; it depends on the quality of your instructions.&lt;/p&gt;

&lt;p&gt;As Anthropic put it: development shifts "from &lt;em&gt;'write code, run tests, read errors, fix, repeat'&lt;/em&gt; to &lt;em&gt;'set goal, review changes, approve implementation.'&lt;/em&gt;"&lt;/p&gt;

&lt;h3&gt;
  
  
  The full timeline
&lt;/h3&gt;

&lt;p&gt;Five years, accelerating dramatically in 2024-2025:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;2021&lt;/strong&gt;: GitHub Copilot (preview) — code completion with AI is born&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2022&lt;/strong&gt;: LangChain (October), ChatGPT (November) — chat is born&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2023&lt;/strong&gt;: Function calling (June), &lt;a href="https://github.com/Significant-Gravitas/AutoGPT" rel="noopener noreferrer"&gt;AutoGPT&lt;/a&gt; goes viral, Andrew Ng coins "agentic"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2024&lt;/strong&gt;: &lt;a href="https://devin.ai/" rel="noopener noreferrer"&gt;Devin AI&lt;/a&gt; jumps the &lt;a href="https://www.swebench.com/" rel="noopener noreferrer"&gt;SWE-bench&lt;/a&gt; from 1.96% to 13.86%. MCP is launched. "Building Effective Agents" is published&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2025&lt;/strong&gt;: &lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, Copilot Agent Mode, &lt;a href="https://github.com/openai/codex" rel="noopener noreferrer"&gt;Codex CLI&lt;/a&gt;. Agents solve &lt;strong&gt;over 80%&lt;/strong&gt; of SWE-bench, up from 13.86% just 18 months earlier [10]
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  "Vibe coding" vs. the real deal
&lt;/h3&gt;

&lt;p&gt;An important distinction here. Typing vague prompts without checking the output (what the community calls "vibe coding") is not agentic programming. It's just... laziness with a nice interface. It's the difference between turning on a machine without reading the manual and configuring everything properly before pressing the button.&lt;/p&gt;

&lt;p&gt;Professional agentic programming, as Tweag's engineering blog defines it, involves &lt;em&gt;"qualified professionals who write prompts intentionally, validate rigorously, and guide the output within clear architectural boundaries"&lt;/em&gt; [11].&lt;/p&gt;

&lt;p&gt;That's the approach this series teaches. And to use it properly, it helps to know the ecosystem of available tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  The agentic tooling ecosystem
&lt;/h2&gt;

&lt;p&gt;There's no shortage of tools in this space. In 2025, AI coding tools generated &lt;strong&gt;$7.37 billion in revenue&lt;/strong&gt;, accounting for 55% of all enterprise AI investment [12]. Google already attributes 50% of its code to agents [13]. And 84% of devs say they use or plan to use AI tools [14].&lt;/p&gt;

&lt;p&gt;The tools fall into three categories: &lt;strong&gt;CLI&lt;/strong&gt; (terminal: Claude Code, Codex CLI, Cursor CLI, Aider, OpenCode), &lt;strong&gt;IDE&lt;/strong&gt; (editor with built-in AI: Cursor, &lt;a href="https://windsurf.com/" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt;), and &lt;strong&gt;hybrid&lt;/strong&gt; (plugin + cloud: GitHub Copilot).&lt;/p&gt;

&lt;p&gt;Here are the main ones:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/anthropics/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;&lt;/strong&gt; operates in the terminal: reads the entire codebase, edits files, runs commands, creates commits, opens PRs. The extensions and plugins are open source on GitHub (108K+ stars). It reached $2.5 billion in annual revenue in nine months, the fastest growth in enterprise software history [15]. It runs exclusively on Anthropic models (Opus, Sonnet, Haiku).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;&lt;/strong&gt; started as a VS Code fork rebuilt around AI, and now also has a &lt;a href="https://www.cursor.com/cli" rel="noopener noreferrer"&gt;CLI&lt;/a&gt; for those who prefer the terminal. It supports Claude, GPT, and Gemini simultaneously. It went from $1M to $1B+ in annual revenue in under two years, with over 1 million daily active users [5].&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt;&lt;/strong&gt; is the most widely adopted: 20 million users, 90% of Fortune 100 companies. Its Coding Agent generates around 1.2 million PRs per month [16].&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/nichochar/opencode" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;&lt;/strong&gt; (MIT) became the most-starred code agent on GitHub, with ~129K stars. It runs in the terminal, supports 75+ LLM providers, and has native LSP integration. Completely free (you only pay for the model's API) [18].&lt;/p&gt;

&lt;p&gt;On the open source side, two more stand out. &lt;strong&gt;&lt;a href="https://github.com/paul-gauthier/aider" rel="noopener noreferrer"&gt;Aider&lt;/a&gt;&lt;/strong&gt; (Apache 2.0) works with any LLM and is free. Fun fact: Aider writes 70–88% of its own code in each release [17]. &lt;strong&gt;&lt;a href="https://github.com/openai/codex" rel="noopener noreferrer"&gt;Codex CLI&lt;/a&gt;&lt;/strong&gt; from OpenAI (Apache 2.0) has 2M+ weekly users [19].&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Interface&lt;/th&gt;
&lt;th&gt;Models&lt;/th&gt;
&lt;th&gt;Open Source&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;th&gt;Approx. Users&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI + extensions&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Source-available&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;108K+ stars&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cursor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;IDE + CLI&lt;/td&gt;
&lt;td&gt;Multi-model&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;1M+ DAU&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitHub Copilot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;IDE plugin + cloud&lt;/td&gt;
&lt;td&gt;Multi-model&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;$10/mo&lt;/td&gt;
&lt;td&gt;20M+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenCode&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI/TUI&lt;/td&gt;
&lt;td&gt;75+ providers&lt;/td&gt;
&lt;td&gt;Yes (MIT)&lt;/td&gt;
&lt;td&gt;Free (bring your API key)&lt;/td&gt;
&lt;td&gt;129K stars&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Codex CLI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI&lt;/td&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;Yes (Apache 2.0)&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;2M+ weekly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Aider&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI&lt;/td&gt;
&lt;td&gt;Any LLM&lt;/td&gt;
&lt;td&gt;Yes (Apache 2.0)&lt;/td&gt;
&lt;td&gt;Free (bring your API key)&lt;/td&gt;
&lt;td&gt;42.5K stars&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What stands out: these are machines from different manufacturers, but they ultimately do the same thing. Multi-file editing, terminal execution, extensibility via MCP, and an iterative loop. What truly differentiates each one is how the agentic behavior is implemented and orchestrated under the hood: the tool system, the way context is managed, the planning and self-correction loop. Beyond that, what matters is the delivery model (CLI vs IDE vs cloud), the ecosystem, and the price. And the open source alternatives prove that the agentic paradigm doesn't require expensive tools. Just a capable LLM.&lt;/p&gt;

&lt;p&gt;But tools are only half the story. The other half is how &lt;strong&gt;you&lt;/strong&gt; work with them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Agentic programming: the paradigm shift
&lt;/h2&gt;

&lt;p&gt;In traditional development, you are simultaneously the brain and the hands of the operation. You think through the solution, type the code, run the tests, read the errors, fix, repeat. Your productivity has a physical ceiling: typing speed, memory capacity, how many hours you can stay focused.&lt;/p&gt;

&lt;p&gt;In agentic programming, the role changes. You define the goal, provide context, configure the environment. The agent executes. You're at the beginning (definition) and at the end (review). The middle cycle is autonomous.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vbn6lqp5dps9470f0yj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vbn6lqp5dps9470f0yj.png" alt="Before vs After" width="800" height="927"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I know what you might be thinking: "so the AI does the work and I just... do what?"&lt;/p&gt;

&lt;p&gt;The better question might be: do what &lt;em&gt;differently&lt;/em&gt;. When the compiler came along, nobody needed to write Assembly by hand anymore. Programmers didn't lose their purpose. They moved up a level of abstraction. They started thinking about business logic instead of memory registers. They became &lt;em&gt;more&lt;/em&gt; productive, &lt;em&gt;more&lt;/em&gt; strategic, &lt;em&gt;more&lt;/em&gt; valuable.&lt;/p&gt;

&lt;p&gt;The same thing is happening now: you're not being replaced, you're leveling up. Whoever designs the factory isn't less important than whoever operates it — quite the opposite — but the skills are different.&lt;/p&gt;

&lt;p&gt;So what are those skills?&lt;/p&gt;

&lt;h3&gt;
  
  
  The three pillars
&lt;/h3&gt;

&lt;p&gt;If the new role is orchestrating agents, three skills become essential. I call them &lt;strong&gt;the three pillars of agentic engineering&lt;/strong&gt;, and each one will get a dedicated article in this series.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;prompt engineering&lt;/strong&gt;. It's how you communicate intent to the agent. Not just "write a good prompt," but structured communication: clear goals, explicit constraints, examples of what you want and what you don't. Back to the factory: these are the instructions you hand to the machine operator. The more precise they are, the better the output.&lt;/p&gt;
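&lt;p&gt;As a sketch, a structured prompt along those lines might look like this (the task, file names, and signatures are invented for illustration):&lt;/p&gt;

```markdown
## Goal
Migrate `UserRepository` from callbacks to coroutines.

## Constraints
- Do not change the public API of `UserRepository`.
- Keep all existing tests passing; do not delete tests.

## Example of what I want
`getUser(id, callback)` becomes `suspend fun getUser(id): User`.

## What I don't want
- No new third-party dependencies.
- No drive-by refactors outside the repository layer.
```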

&lt;p&gt;The second is &lt;strong&gt;context engineering&lt;/strong&gt;. It's the discipline of curating what the agent knows. Which files are relevant? What documentation should be accessible? How do you structure project rules in the &lt;code&gt;CLAUDE.md&lt;/code&gt; file? Context is the most precious resource in an agentic system, and it's limited and expensive. Think of the documentation you hand to someone new on the team: without quality information, even the best machine produces junk.&lt;/p&gt;
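&lt;p&gt;To make that concrete, a minimal &lt;code&gt;CLAUDE.md&lt;/code&gt; might look like the sketch below. The project details are invented; the point is that the file curates what the agent should always know before touching code.&lt;/p&gt;

```markdown
# CLAUDE.md (illustrative sketch)

## Architecture
- Feature code lives in `features/<name>/`; shared code in `core/`.

## Conventions
- Run tests with `./gradlew test` before declaring a task done.
- Never edit generated files under `build/`.

## Context worth knowing
- API contracts are documented in `docs/api.md`.
```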

&lt;p&gt;The third is &lt;strong&gt;harness engineering&lt;/strong&gt;: the configuration of everything that surrounds and orchestrates the agent. Automation hooks, MCP servers to connect external services, permissions, custom tools. In the factory, it's the infrastructure: the conveyor belts, the sensors, the safety systems. The factory that produces the most isn't the one with the best workforce; it's the one with the best structure around it.&lt;/p&gt;
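&lt;p&gt;A small example of what harness configuration looks like in practice: Claude Code supports hooks in &lt;code&gt;.claude/settings.json&lt;/code&gt; that run shell commands around tool calls. The sketch below follows the documented hooks format (the exact schema may vary by version, and the lint command is invented); it runs a linter after every file edit:&lt;/p&gt;

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./gradlew ktlintCheck" }
        ]
      }
    ]
  }
}
```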

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgtu9rxto5wod9j9ec0k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgtu9rxto5wod9j9ec0k.png" alt="The three pillars" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The numbers: good and bad
&lt;/h3&gt;

&lt;p&gt;It's worth looking at the data honestly, because the landscape has real contradictions.&lt;/p&gt;

&lt;p&gt;On one hand, the growth is undeniable: $7.37 billion in revenue in 2025, 84% of devs adopting, SWE-bench scores jumping from 13.86% to over 80% in 18 months. GitHub Copilot generates 1.2 million PRs per month. In LLM API calls for coding, Claude leads with a 54% share [20].&lt;/p&gt;

&lt;p&gt;On the other hand, independent data is more sobering. The METR study (a rigorous randomized trial with 16 experienced devs) found that AI tools made the team &lt;strong&gt;19% slower&lt;/strong&gt;, even though participants perceived themselves as 20% faster [21]. AI-generated code carries &lt;strong&gt;2.74x more vulnerabilities&lt;/strong&gt; according to Veracode [22]. And Gartner predicts that &lt;strong&gt;40% of agentic AI projects will be canceled&lt;/strong&gt; before reaching production by 2027 [23].&lt;/p&gt;

&lt;p&gt;The most honest take I've read on this came from the DORA 2025 report: &lt;em&gt;"AI amplifies the strengths of high-performing organizations and the dysfunctions of those that are struggling."&lt;/em&gt; [24]&lt;/p&gt;

&lt;p&gt;The automated factory produces more, but without quality control, it produces defects faster too. The outcome depends on who configures and oversees it. That's exactly what the three pillars address.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;You now know what agentic programming is, where it came from, and what tools exist. You know that the role has shifted: from the one who writes every line to the one who designs, guides, and oversees. And you know that three pillars (prompt engineering, context engineering, and harness engineering) separate those who use AI haphazardly from those who use it with consistency.&lt;/p&gt;

&lt;p&gt;But knowing the &lt;em&gt;what&lt;/em&gt; isn't enough. To actually use this, you need to understand the &lt;em&gt;how&lt;/em&gt;. And the how starts with a question few people stop to ask: how does this technology work under the hood?&lt;/p&gt;

&lt;p&gt;What are tokens? What is a context window? Why do models hallucinate with such conviction? Understanding this completely changes how you interact with any agentic tool.&lt;/p&gt;

&lt;p&gt;That's exactly what we'll break down in the next article: &lt;strong&gt;Demystifying Language Models&lt;/strong&gt;.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;🤖 This article was written with assistance from Claude (Anthropic).&lt;/p&gt;

&lt;p&gt;Content researched, verified, and edited by a human.&lt;/p&gt;

&lt;p&gt;Found an error or a missing credit? Send me a message!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://psycnet.apa.org/record/1957-02914-001" rel="noopener noreferrer"&gt;Miller, G.A. — "The Magical Number Seven, Plus or Minus Two" (1956)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dl.acm.org/doi/10.1145/1357054.1357072" rel="noopener noreferrer"&gt;Mark, G. et al. — "The Cost of Interrupted Work: More Speed and Stress" (2008)&lt;/a&gt; — ACM CHI 2008&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt; — documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://openai.com/blog/chatgpt" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt; — OpenAI launch announcement (November 2022)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.cnbc.com/2025/11/13/cursor-ai-startup-funding-round-valuation.html" rel="noopener noreferrer"&gt;Cursor / Anysphere — CNBC (November 2025)&lt;/a&gt; — $1B+ ARR, $29.3B valuation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://openai.com/index/function-calling-and-other-api-updates/" rel="noopener noreferrer"&gt;OpenAI Function Calling&lt;/a&gt; — API update (June 2023)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/" rel="noopener noreferrer"&gt;Andrew Ng — "Agentic Design Patterns" (2024)&lt;/a&gt; — The Batch newsletter; &lt;a href="https://www.youtube.com/watch?v=sal78ACtGTc" rel="noopener noreferrer"&gt;Sequoia AI Ascent talk&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt; — Anthropic (November 2024)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.anthropic.com/engineering/building-effective-agents" rel="noopener noreferrer"&gt;Building Effective Agents&lt;/a&gt; — Anthropic (December 2024)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.swebench.com/" rel="noopener noreferrer"&gt;SWE-bench&lt;/a&gt; — Princeton NLP (ICLR 2024)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.tweag.io/blog/2025-10-23-agentic-coding-intro/" rel="noopener noreferrer"&gt;Tweag — "Introduction to Agentic Coding" (2025)&lt;/a&gt;; &lt;a href="https://tweag.github.io/agentic-coding-handbook/" rel="noopener noreferrer"&gt;Agentic Coding Handbook&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.mordorintelligence.com/industry-reports/artificial-intelligence-code-tools-market" rel="noopener noreferrer"&gt;Mordor Intelligence — AI Code Tools Market Report (2025)&lt;/a&gt; — US$ 7.37B market size&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.fool.com/earnings/call-transcripts/2026/02/04/alphabet-googl-q4-2025-earnings-call-transcript/" rel="noopener noreferrer"&gt;Alphabet Q4 2025 Earnings Call (February 2026)&lt;/a&gt; — CFO Anat Ashkenazi: "about 50% of our code is written by coding agents"&lt;/li&gt;
&lt;li&gt;&lt;a href="https://survey.stackoverflow.co/2025/" rel="noopener noreferrer"&gt;Stack Overflow Developer Survey 2025&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; — Anthropic documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.blog/ai-and-ml/github-copilot/copilot-faster-smarter-and-built-for-how-you-work-now/" rel="noopener noreferrer"&gt;GitHub Blog — "Copilot: Faster, smarter, and built for how you work now" (October 2025)&lt;/a&gt; — 20M+ users, 1.2M PRs/month&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/paul-gauthier/aider" rel="noopener noreferrer"&gt;Aider&lt;/a&gt; — GitHub repository&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/nichochar/opencode" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt; — GitHub repository&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openai/codex" rel="noopener noreferrer"&gt;OpenAI Codex CLI&lt;/a&gt; — GitHub repository&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/" rel="noopener noreferrer"&gt;Menlo Ventures — "2025: The State of Generative AI in the Enterprise" (December 2025)&lt;/a&gt; — Claude: 54% coding market share&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR — "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity" (2025)&lt;/a&gt;; &lt;a href="https://arxiv.org/abs/2507.09089" rel="noopener noreferrer"&gt;arXiv&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/" rel="noopener noreferrer"&gt;Veracode — "GenAI and Code Security: What You Need to Know" (2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027" rel="noopener noreferrer"&gt;Gartner — "Over 40% of Agentic AI Projects Will Be Canceled by End of 2027" (June 2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dora.dev/" rel="noopener noreferrer"&gt;DORA State of DevOps Report 2025&lt;/a&gt; — Google&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>claude</category>
      <category>ai</category>
    </item>
    <item>
      <title>Claude Code 101: Introdução à Programação Agêntica</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Sat, 04 Apr 2026 20:07:58 +0000</pubDate>
      <link>https://forem.com/rsicarelli/claude-code-101-introducao-a-programacao-agentica-4mk1</link>
      <guid>https://forem.com/rsicarelli/claude-code-101-introducao-a-programacao-agentica-4mk1</guid>
      <description>&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;O desenvolvimento de software como conhecemos&lt;/li&gt;
&lt;li&gt;A era da assistência por IA&lt;/li&gt;
&lt;li&gt;De assistente a agente&lt;/li&gt;
&lt;li&gt;O ecossistema de ferramentas agênticas&lt;/li&gt;
&lt;li&gt;Programação agêntica — a mudança de paradigma&lt;/li&gt;
&lt;li&gt;Considerações finais&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;🌐 &lt;a href="https://dev.to/rsicarelli/claude-code-101-introduction-to-agentic-programming-3p83"&gt;Read in English&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Setembro de 2025. Eu estava tocando a atualização de uma dependência crítica num app mobile com milhões de usuários. O tipo de mudança que quebra testes em cascata. O prazo era outubro: se não ficasse pronto, o app não subia pra loja.&lt;/p&gt;

&lt;p&gt;O problema: quase 10.000 testes precisavam ser adaptados pra nova versão. Código de mais de 20 times, espalhado por centenas de módulos. &lt;/p&gt;

&lt;p&gt;Pensei "e se eu der uma chance pra essas ferramentas de IA que todo mundo fala?" Depois de alguns vídeos e documentação, coloquei quatro terminais rodando em paralelo com Claude Code, cada um migrando uma fatia dos testes. Em uma semana: 2.000+ arquivos alterados, 50 mil linhas de código, 85% migrado de primeira. Na semana seguinte, pente fino no restante.&lt;/p&gt;

&lt;p&gt;Quando mergiei tudo com aprovação dos times responsáveis, uma coisa ficou clara: eu &lt;strong&gt;nunca&lt;/strong&gt; teria feito aquilo sozinho. Não em duas semanas, talvez nem em dois meses. Era cognitivamente impossível.&lt;/p&gt;

&lt;p&gt;Foi aí que eu entendi que alguma coisa tinha mudado de verdade: meu papel tinha mudado, e eu precisava entender como isso funciona por dentro.&lt;/p&gt;

&lt;p&gt;Este é o primeiro artigo da série &lt;strong&gt;Claude Code 101&lt;/strong&gt;. Ao longo dela, vamos desmontar o que está por trás da &lt;strong&gt;programação agêntica&lt;/strong&gt;: de onde veio, como funciona, quais ferramentas existem e o que você precisa aprender pra usar isso de verdade. Vamos construir juntos um modelo mental (a &lt;strong&gt;analogia da fábrica&lt;/strong&gt;) e conhecer os três pilares que sustentam tudo: prompt engineering, context engineering e harness engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  O desenvolvimento de software como conhecemos
&lt;/h2&gt;

&lt;p&gt;O fluxo que todo dev conhece: ler requisito, projetar solução, escrever código, rodar testes, corrigir bugs, fazer deploy. Nada acontece sem que você esteja ativamente envolvido. É você quem lê a documentação, quem busca no (finado 😅) Stack Overflow, quem mantém o contexto do projeto na cabeça, quem digita cada letra.&lt;/p&gt;

&lt;p&gt;O software só avança quando você está sentado na frente da tela, trabalhando nele. Cada linha de código é um produto montado à mão, e o ritmo de produção depende da velocidade dos seus dedos e da capacidade da sua memória.&lt;/p&gt;

&lt;h3&gt;
  
  
  O gargalo somos nós
&lt;/h3&gt;

&lt;p&gt;A ciência cognitiva tem dados desconfortáveis pra gente. Nossa memória de trabalho retém, em média, 7 itens ao mesmo tempo [1]. Depois de uma interrupção (aquele tapinha no ombro, aquela mensagem no Slack) levamos em média 23 minutos pra retomar o foco [2]. E se você parar pra medir, vai perceber que passa apenas uns 30% do tempo efetivamente escrevendo código. O resto é leitura, navegação, debugging e reuniões.&lt;/p&gt;

&lt;p&gt;O gargalo não é só o compilador. Não é o CI. Não é o servidor. O gargalo é a gente.&lt;/p&gt;

&lt;p&gt;Por décadas, aceitamos isso como o custo natural de construir software. A fábrica produz na velocidade de quem opera. E ponto. Até que a inteligência artificial começou a mudar essa equação.&lt;/p&gt;




&lt;h2&gt;
  
  
  A era da assistência por IA
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Autocomplete Inteligente (2021-2022)
&lt;/h3&gt;

&lt;p&gt;Em junho de 2021, o GitHub lançou a preview técnica do &lt;strong&gt;Copilot&lt;/strong&gt;, que sugeria linhas de código em tempo real dentro do editor [3]. Junto com o &lt;a href="https://www.tabnine.com/" rel="noopener noreferrer"&gt;TabNine&lt;/a&gt; e outras ferramentas similares, nascia o paradigma do code completion: sugestões de uma ou mais linhas baseadas no contexto do arquivo atual.&lt;/p&gt;

&lt;p&gt;Mudou algo? Claro. Menos digitação, boilerplate preenchido automaticamente, sugestões que muitas vezes acertavam o que você ia escrever. Mas o papel de quem desenvolve não mudou em nada. Você ainda decide o que escrever, onde escrever, quando rodar testes, como corrigir erros. A IA sugere; você executa.&lt;/p&gt;

&lt;p&gt;Pense na fábrica: a linha de produção ganhou uma esteira pra mover peças mais rápido. Útil? Sem dúvida. Mas você continua montando tudo à mão.&lt;/p&gt;

&lt;h3&gt;
  
  
  O paradigma do chat (2022-2023)
&lt;/h3&gt;

&lt;p&gt;Novembro de 2022: &lt;a href="https://openai.com/index/chatgpt/" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt; é lançado e muda a forma como devs buscam ajuda [4]. O padrão se tornou familiar pra qualquer um: copiar um trecho de código, colar no chat, descrever o problema, receber uma sugestão, copiar de volta pro editor. O &lt;a href="https://www.langchain.com/" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;, lançado um mês antes, já ensaiava conectar LLMs a ferramentas externas. Um precursor do que viria depois.&lt;/p&gt;

&lt;p&gt;O chat era mais versátil que o autocomplete: explicava conceitos, sugeria refatorações, gerava testes. Mas as limitações eram fundamentais. Sem acesso aos arquivos do projeto, sem contexto da estrutura de pastas, sem capacidade de executar comandos. Você virava um mensageiro entre a IA e o código.&lt;/p&gt;

&lt;p&gt;É como ter um consultor muito bom sentado ao seu lado. Alguém que lê qualquer manual em segundos e tem resposta pra quase tudo. Só que esse consultor não pode tocar nas máquinas. O trabalho manual continua sendo seu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6pfo6fz7wbb74fuxfcy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6pfo6fz7wbb74fuxfcy.png" alt="Autocomplete vs Chat" width="800" height="715"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Por que assistência não bastava
&lt;/h3&gt;

&lt;p&gt;No fim das contas, os dois paradigmas esbarram no mesmo problema: &lt;strong&gt;a IA não é agêntica&lt;/strong&gt; (ou seja, não tem capacidade de agir por conta própria). Não lê seu projeto. Não roda comandos. Não executa testes. Não itera sobre erros.&lt;/p&gt;

&lt;p&gt;A esteira e o consultor ajudam, mas ninguém opera as máquinas por você. Quando você vai embora, a produção para.&lt;/p&gt;

&lt;p&gt;Em 2024, a pergunta natural veio: &lt;strong&gt;e se a IA pudesse fazer mais do que sugerir?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  De Assistente a Agente
&lt;/h2&gt;

&lt;p&gt;Olhando pra trás, a evolução das ferramentas de IA para código segue quatro fases bem definidas. Cada fase é como um upgrade na fábrica: primeiro vem a esteira, depois o consultor, depois máquinas melhores, até chegar nas que operam sozinhas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3sdmt1zghzbqyqjgith.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3sdmt1zghzbqyqjgith.png" alt="A evolução da codificação com IA" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As fases 1 e 2 já vimos. O salto real começa na fase 3.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fase 3 — Edição multi-arquivo (2024)
&lt;/h3&gt;

&lt;p&gt;Em 2024, o &lt;strong&gt;&lt;a href="https://www.cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;&lt;/strong&gt; (um editor construído como fork do VS Code) popularizou uma nova categoria: edição multi-arquivo via linguagem natural. Você descreve o que quer, a IA propõe mudanças coordenadas em vários arquivos de uma vez.&lt;/p&gt;

&lt;p&gt;O crescimento fala por si: de &lt;strong&gt;US$ 1 milhão de receita anual em janeiro de 2024&lt;/strong&gt; para &lt;strong&gt;mais de US$ 1 bilhão em novembro de 2025&lt;/strong&gt; [5]. Devs queriam mais do que sugestões linha a linha.&lt;/p&gt;

&lt;p&gt;Mas o modelo fundamental não mudou: o humano orquestra, a IA executa. Você diz o que mudar, a IA muda, você confere. As máquinas ficaram mais sofisticadas, sim — mas você ainda opera cada painel de controle, passo a passo.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fase 4 — A codificação agêntica (2024-presente)
&lt;/h3&gt;

&lt;p&gt;Aqui o paradigma inverte. Em vez de você orquestrar a IA, &lt;strong&gt;você define o objetivo e a IA se orquestra sozinha&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Essa virada não aconteceu do nada. Teve uma sequência de avanços que a tornaram possível:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Function calling&lt;/strong&gt; (OpenAI, junho de 2023). Pela primeira vez, modelos podiam invocar ferramentas externas. É o pré-requisito técnico pra qualquer comportamento agêntico [6].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Andrew Ng cunha "agentic"&lt;/strong&gt; (final de 2023). Ele escolheu um adjetivo de propósito, não um substantivo: &lt;em&gt;"Diferente do substantivo 'agent', o adjetivo 'agentic' nos permite pensar em sistemas como sendo mais ou menos parecidos com agentes, em diferentes graus."&lt;/em&gt; [7]
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Context Protocol&lt;/strong&gt; (Anthropic, novembro de 2024). O padrão aberto pra conectar agentes a ferramentas externas, adotado rapidamente por toda a indústria [8].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Building Effective Agents"&lt;/strong&gt; (Anthropic, dezembro de 2024). O paper que virou a referência mais citada do campo, diferenciando &lt;strong&gt;workflows&lt;/strong&gt; (caminhos predefinidos) de &lt;strong&gt;agents&lt;/strong&gt; (processos dinâmicos, autodirigidos) [9].&lt;/li&gt;
&lt;/ul&gt;
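&lt;p&gt;Pra deixar o function calling concreto: você descreve uma ferramenta num schema JSON, o modelo responde indicando qual função chamar e com quais argumentos, e quem de fato executa é o seu código. Um esboço no formato de definição de tools da API da OpenAI (o nome &lt;code&gt;run_tests&lt;/code&gt; e os parâmetros são hipotéticos):&lt;/p&gt;

```json
{
  "type": "function",
  "function": {
    "name": "run_tests",
    "description": "Executa a suíte de testes e retorna o resultado",
    "parameters": {
      "type": "object",
      "properties": {
        "path": { "type": "string", "description": "Diretório dos testes" }
      },
      "required": ["path"]
    }
  }
}
```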

&lt;p&gt;O que faz algo ser "agêntico"? Cinco coisas:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Autonomia&lt;/strong&gt;: decide o que fazer sem guia passo a passo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool use&lt;/strong&gt;: lê e escreve arquivos, executa comandos, roda testes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planejamento&lt;/strong&gt;: quebra objetivos em subtarefas&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Raciocínio em loop&lt;/strong&gt;: itera, não responde uma vez só&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autocorreção&lt;/strong&gt;: erra, percebe, ajusta e tenta de novo&lt;/li&gt;
&lt;/ol&gt;
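&lt;p&gt;Essas cinco características se combinam num ciclo simples. O esboço abaixo é Python ilustrativo, não a API de nenhuma ferramenta real; &lt;code&gt;modelo_decide&lt;/code&gt; é um substituto hipotético da chamada real ao LLM, e as &amp;quot;ferramentas&amp;quot; são stubs.&lt;/p&gt;

```python
# Esboço do loop agêntico: planejar -> agir (tool) -> observar -> autocorrigir.
# `modelo_decide` é um substituto hipotético da chamada real a um LLM.

def modelo_decide(historico):
    if not historico:
        return "rodar_testes"            # começa observando a realidade
    if historico[-1] == "testes falharam":
        return "editar_arquivo"          # autocorreção
    return "rodar_testes"                # reconfere depois da edição

def executar(acao, estado):
    # Substitutos de ferramentas reais (edição de arquivo, execução de testes).
    if acao == "editar_arquivo":
        estado["corrigido"] = True
        return "arquivo editado"
    return "testes passaram" if estado.get("corrigido") else "testes falharam"

def agente(max_passos=10):
    estado, historico = {}, []
    for _ in range(max_passos):
        historico.append(executar(modelo_decide(historico), estado))
        if historico[-1] == "testes passaram":   # objetivo atingido
            return historico
    return historico
```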

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo9dlid9cq127r7h6xn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo9dlid9cq127r7h6xn9.png" alt="O loop agêntico" width="800" height="1221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aqui é onde a analogia da fábrica muda completamente. Você não está mais na linha de produção apertando parafuso e movendo peça. Agora você &lt;strong&gt;projeta&lt;/strong&gt; a fábrica. Programa as máquinas, configura os controles de qualidade, supervisiona a produção. Os agentes executam, reportam problemas e se autocorrigem. Sua produtividade deixa de ser limitada pela velocidade das suas mãos e passa a depender da qualidade das suas instruções.&lt;/p&gt;

&lt;p&gt;Como a Anthropic resumiu: o desenvolvimento vai "de &lt;em&gt;'escrever código, rodar testes, ler erros, corrigir, repetir'&lt;/em&gt; para &lt;em&gt;'definir objetivo, revisar mudanças, aprovar implementação.'&lt;/em&gt;"&lt;/p&gt;

&lt;h3&gt;
  
  
  A linha do tempo completa
&lt;/h3&gt;

&lt;p&gt;Cinco anos, acelerando absurdamente em 2024-2025:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;2021&lt;/strong&gt;: GitHub Copilot (preview) — nasce o code completion com IA&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2022&lt;/strong&gt;: LangChain (outubro), ChatGPT (novembro) — nasce o chat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2023&lt;/strong&gt;: Function calling (junho), &lt;a href="https://github.com/Significant-Gravitas/AutoGPT" rel="noopener noreferrer"&gt;AutoGPT&lt;/a&gt; viraliza, Andrew Ng cunha "agentic"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2024&lt;/strong&gt;: &lt;a href="https://devin.ai/" rel="noopener noreferrer"&gt;Devin AI&lt;/a&gt; salta o &lt;a href="https://www.swebench.com/" rel="noopener noreferrer"&gt;SWE-bench&lt;/a&gt; de 1,96% pra 13,86%. MCP é lançado. "Building Effective Agents" é publicado&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2025&lt;/strong&gt;: &lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, Copilot Agent Mode, &lt;a href="https://github.com/openai/codex" rel="noopener noreferrer"&gt;Codex CLI&lt;/a&gt;. Agentes resolvem &lt;strong&gt;mais de 80%&lt;/strong&gt; do SWE-bench — contra 13,86% apenas 18 meses antes [10]
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  "Vibe coding" vs. o negócio sério
&lt;/h3&gt;

&lt;p&gt;Vale uma distinção importante aqui. Digitar prompts vagos sem conferir o resultado (o que a comunidade chama de "vibe coding") não é programação agêntica. É só... preguiça com interface bonita. É a diferença entre ligar a máquina sem ler o manual e configurar tudo direitinho antes de apertar o botão.&lt;/p&gt;

&lt;p&gt;Programação agêntica profissional, como o blog de engenharia da Tweag define, envolve &lt;em&gt;"profissionais qualificados que escrevem prompts intencionalmente, validam rigorosamente e guiam a saída dentro de limites arquiteturais claros"&lt;/em&gt; [11].&lt;/p&gt;

&lt;p&gt;É essa abordagem que esta série ensina. E pra usá-la direito, vale conhecer o ecossistema de ferramentas disponíveis.&lt;/p&gt;




&lt;h2&gt;
  
  
  O ecossistema de ferramentas agênticas
&lt;/h2&gt;

&lt;p&gt;Não falta ferramenta nesse espaço. Em 2025, ferramentas de IA pra código geraram &lt;strong&gt;US$ 7,37 bilhões em receita&lt;/strong&gt;, o equivalente a 55% de todo o investimento empresarial em IA [12]. O Google já atribui 50% do seu código a agentes [13]. E 84% dos devs dizem que usam ou planejam usar ferramentas de IA [14].&lt;/p&gt;

&lt;p&gt;As ferramentas se dividem em três categorias: &lt;strong&gt;CLI&lt;/strong&gt; (terminal: Claude Code, Codex CLI, Cursor CLI, Aider, OpenCode), &lt;strong&gt;IDE&lt;/strong&gt; (editor com IA integrada: Cursor, &lt;a href="https://windsurf.com/" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt;) e &lt;strong&gt;híbrido&lt;/strong&gt; (plugin + nuvem: GitHub Copilot).&lt;/p&gt;

&lt;p&gt;Eis as principais:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/anthropics/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;&lt;/strong&gt; opera no terminal: lê o codebase inteiro, edita arquivos, roda comandos, cria commits, abre PRs. As extensões e plugins são open source no GitHub (108K+ stars). Atingiu US$ 2,5 bilhões de receita anual em nove meses, o crescimento mais rápido da história de software empresarial [15]. Roda exclusivamente com modelos da Anthropic (Opus, Sonnet, Haiku).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;&lt;/strong&gt; começou como um fork do VS Code reconstruído em torno de IA e hoje também tem uma &lt;a href="https://www.cursor.com/cli" rel="noopener noreferrer"&gt;CLI&lt;/a&gt; pra quem prefere o terminal. Suporta Claude, GPT e Gemini ao mesmo tempo. Passou de US$ 1M pra US$ 1B+ de receita anual em menos de dois anos, com mais de 1 milhão de usuários ativos por dia [5].&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt;&lt;/strong&gt; é o mais amplamente adotado: 20 milhões de usuários, 90% das empresas Fortune 100. Seu Coding Agent gera cerca de 1,2 milhão de PRs por mês [16].&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/nichochar/opencode" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;&lt;/strong&gt; (MIT) virou o agente de código mais estrelado do GitHub com ~129K stars. Roda no terminal, suporta 75+ provedores de LLM e tem integração nativa de LSP. Totalmente gratuito (você só paga a API do modelo) [18].&lt;/p&gt;

&lt;p&gt;No lado open source, mais dois se destacam. &lt;strong&gt;&lt;a href="https://github.com/paul-gauthier/aider" rel="noopener noreferrer"&gt;Aider&lt;/a&gt;&lt;/strong&gt; (Apache 2.0) funciona com qualquer LLM e é gratuito. Detalhe curioso: o Aider escreve 70-88% do seu próprio código em cada release [17]. &lt;strong&gt;&lt;a href="https://github.com/openai/codex" rel="noopener noreferrer"&gt;Codex CLI&lt;/a&gt;&lt;/strong&gt; da OpenAI (Apache 2.0) tem 2M+ de usuários semanais [19].&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Ferramenta&lt;/th&gt;
&lt;th&gt;Interface&lt;/th&gt;
&lt;th&gt;Modelos&lt;/th&gt;
&lt;th&gt;Open Source&lt;/th&gt;
&lt;th&gt;Preço&lt;/th&gt;
&lt;th&gt;Usuários aprox.&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI + extensões&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Source-available&lt;/td&gt;
&lt;td&gt;$20/mês&lt;/td&gt;
&lt;td&gt;108K+ stars&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cursor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;IDE + CLI&lt;/td&gt;
&lt;td&gt;Multi-modelo&lt;/td&gt;
&lt;td&gt;Não&lt;/td&gt;
&lt;td&gt;$20/mês&lt;/td&gt;
&lt;td&gt;1M+ DAU&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitHub Copilot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Plugin IDE + cloud&lt;/td&gt;
&lt;td&gt;Multi-modelo&lt;/td&gt;
&lt;td&gt;Parcial&lt;/td&gt;
&lt;td&gt;$10/mês&lt;/td&gt;
&lt;td&gt;20M+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenCode&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI/TUI&lt;/td&gt;
&lt;td&gt;75+ provedores&lt;/td&gt;
&lt;td&gt;Sim (MIT)&lt;/td&gt;
&lt;td&gt;Grátis (usa sua API key)&lt;/td&gt;
&lt;td&gt;129K stars&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Codex CLI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI&lt;/td&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;Sim (Apache 2.0)&lt;/td&gt;
&lt;td&gt;$20/mês&lt;/td&gt;
&lt;td&gt;2M+ semanais&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Aider&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CLI&lt;/td&gt;
&lt;td&gt;Qualquer LLM&lt;/td&gt;
&lt;td&gt;Sim (Apache 2.0)&lt;/td&gt;
&lt;td&gt;Grátis (usa sua API key)&lt;/td&gt;
&lt;td&gt;42,5K stars&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;O que chama atenção: são máquinas de fabricantes diferentes, mas no fim fazem a mesma coisa. Edição multi-arquivo, execução no terminal, extensibilidade via MCP e loop iterativo. O que realmente diferencia cada uma é como o comportamento agêntico é implementado e orquestrado por baixo dos panos: o sistema de tools, a forma de gerenciar contexto, o loop de planejamento e autocorreção. Fora isso, pesam o modelo de entrega (CLI vs IDE vs nuvem), o ecossistema e o preço. E as alternativas open source provam que o paradigma agêntico não exige ferramenta cara. Só um LLM capaz.&lt;/p&gt;

&lt;p&gt;But tools are only half the story. The other half is how &lt;strong&gt;you&lt;/strong&gt; work with them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Agentic Programming — the paradigm shift
&lt;/h2&gt;

&lt;p&gt;In traditional development, you are simultaneously the brain and the hands of the operation. You think up the solution, type the code, run the tests, read the errors, fix, repeat. Your productivity has a physical ceiling: typing speed, memory capacity, how many hours you can stay focused.&lt;/p&gt;

&lt;p&gt;In agentic programming, the role changes. You define the goal, provide context, and configure the environment. The agent executes. You sit at the start (definition) and at the end (review). The loop in the middle is autonomous.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F771hov82vjbzqvtuzesm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F771hov82vjbzqvtuzesm.png" alt="Antes vs Depois" width="800" height="1045"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I know what you might be thinking: "so the AI does the work and I'm left... doing what?"&lt;/p&gt;

&lt;p&gt;A better question might be: doing what that's &lt;em&gt;different&lt;/em&gt;. When the compiler arrived, no one had to write Assembly by hand anymore. Programmers didn't lose their purpose. They moved up a level of abstraction. They started thinking about business logic instead of memory registers. They became &lt;em&gt;more&lt;/em&gt; productive, &lt;em&gt;more&lt;/em&gt; strategic, &lt;em&gt;more&lt;/em&gt; valuable.&lt;/p&gt;

&lt;p&gt;The same thing is happening now: you're not being replaced, you're moving up a level. Whoever designs the factory is no less important than whoever operates it, quite the opposite, but the skills are different.&lt;/p&gt;

&lt;p&gt;And what are those skills?&lt;/p&gt;

&lt;h3&gt;
  
  
  The three pillars
&lt;/h3&gt;

&lt;p&gt;If the new role is orchestrating agents, three skills become essential. I call them &lt;strong&gt;the three pillars of agentic engineering&lt;/strong&gt;, and each will get a dedicated article in this series.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;prompt engineering&lt;/strong&gt;. It's how you communicate intent to the agent. Not just "write a good prompt," but structured communication: clear objectives, explicit constraints, examples of what you want and what you don't. Back at the factory, these are the instructions you hand the machine operator. The more precise they are, the better the result.&lt;/p&gt;
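
&lt;p&gt;As a rough sketch of what that structured communication can look like (the task, class, and function below are hypothetical, invented purely for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Goal: add retry with exponential backoff to ApiClient.fetch().

Constraints:
- Kotlin only; no new third-party dependencies.
- At most 3 attempts; never retry on HTTP 4xx responses.

Good example: fetch() retries transparently and logs each attempt.
Bad example: a retry loop copy-pasted at every call site.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;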

&lt;p&gt;The second is &lt;strong&gt;context engineering&lt;/strong&gt;. It's the discipline of curating what the agent knows. Which files are relevant? What documentation should be accessible? How do you structure the project rules in the &lt;code&gt;CLAUDE.md&lt;/code&gt; file? Context is the most precious resource in an agentic system, and it's limited and expensive. Think of the documentation you hand to someone new on the team: without quality information, even the best machine produces garbage.&lt;/p&gt;
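
&lt;p&gt;As an illustration, a minimal &lt;code&gt;CLAUDE.md&lt;/code&gt; might look like the hypothetical sketch below (the commands and paths are invented for the example, not from a real project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CLAUDE.md (hypothetical example)

## Build and test
- Run ./gradlew test before proposing any change.

## Conventions
- Kotlin code follows the official style guide.
- Never edit files under generated/.

## Where to look
- Domain models live in core/model; start there.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;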

&lt;p&gt;The third is &lt;strong&gt;harness engineering&lt;/strong&gt;, the configuration of everything that surrounds and orchestrates the agent: automation hooks, MCP servers to connect external services, permissions, custom tools. In the factory, this is the infrastructure: the conveyor belts, the sensors, the safety system. The factory that produces the most isn't the one with the best workforce; it's the one with the best structure around it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwodlxdccovcxcog6dxq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwodlxdccovcxcog6dxq3.png" alt="Os três pilares" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The numbers — good and bad
&lt;/h3&gt;

&lt;p&gt;It's worth looking at the data honestly, because the landscape has real contradictions.&lt;/p&gt;

&lt;p&gt;On one hand, the growth is undeniable: US$ 7.37 billion in revenue in 2025, 84% of devs adopting it, SWE-bench jumping from 1.96% to over 80% in 18 months. GitHub Copilot generates 1.2 million PRs per month. Among LLM API calls for code, Claude leads with 54% [20].&lt;/p&gt;

&lt;p&gt;On the other hand, the independent data is more sobering. The METR study (a rigorous randomized trial with 16 experienced devs) found that AI tools made the team &lt;strong&gt;19% slower&lt;/strong&gt;, even though participants felt they were 20% faster [21]. AI-generated code carries &lt;strong&gt;2.74× more vulnerabilities&lt;/strong&gt; according to Veracode [22]. And Gartner predicts that &lt;strong&gt;40% of agentic projects will be canceled&lt;/strong&gt; before reaching production by 2027 [23].&lt;/p&gt;

&lt;p&gt;The most honest sentence I've read on the subject came from the DORA 2025 report: &lt;em&gt;"AI amplifies the strengths of high-performing organizations and the dysfunctions of struggling ones."&lt;/em&gt; [24]&lt;/p&gt;

&lt;p&gt;The automated factory produces more, but without quality control it also produces defects faster. The outcome depends on whoever configures and supervises it. That is exactly what the three pillars address.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;You now know what agentic programming is, where it came from, and which tools exist. You know the role has changed: from writing every line to designing, guiding, and supervising. And you know that three pillars (prompt engineering, context engineering, and harness engineering) separate those who use AI haphazardly from those who use it with consistency.&lt;/p&gt;

&lt;p&gt;But knowing the &lt;em&gt;what&lt;/em&gt; isn't enough. To actually put this to use, you need to understand the &lt;em&gt;how&lt;/em&gt;. And the how starts with a question few people stop to ask: how does this technology work on the inside?&lt;/p&gt;

&lt;p&gt;What are tokens? What is a context window? Why do models get things wrong with such conviction? Understanding this completely changes how you interact with any agentic tool.&lt;/p&gt;

&lt;p&gt;That's exactly what we'll take apart in the next article: &lt;strong&gt;Demystifying Language Models&lt;/strong&gt;.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;🤖 This article was written with assistance from Claude (Anthropic).&lt;/p&gt;

&lt;p&gt;Content researched, verified, and edited by a human.&lt;/p&gt;

&lt;p&gt;Found an error or a missing credit? Send me a message!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://psycnet.apa.org/record/1957-02914-001" rel="noopener noreferrer"&gt;Miller, G.A. — "The Magical Number Seven, Plus or Minus Two" (1956)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dl.acm.org/doi/10.1145/1357054.1357072" rel="noopener noreferrer"&gt;Mark, G. et al. — "The Cost of Interrupted Work: More Speed and Stress" (2008)&lt;/a&gt; — ACM CHI 2008&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt; — documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://openai.com/blog/chatgpt" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt; — OpenAI launch announcement (November 2022)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.cnbc.com/2025/11/13/cursor-ai-startup-funding-round-valuation.html" rel="noopener noreferrer"&gt;Cursor / Anysphere — CNBC (November 2025)&lt;/a&gt; — $1B+ ARR, $29.3B valuation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://openai.com/index/function-calling-and-other-api-updates/" rel="noopener noreferrer"&gt;OpenAI Function Calling&lt;/a&gt; — API update (June 2023)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/" rel="noopener noreferrer"&gt;Andrew Ng — "Agentic Design Patterns" (2024)&lt;/a&gt; — The Batch newsletter; &lt;a href="https://www.youtube.com/watch?v=sal78ACtGTc" rel="noopener noreferrer"&gt;Sequoia AI Ascent talk&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt; — Anthropic (November 2024)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.anthropic.com/engineering/building-effective-agents" rel="noopener noreferrer"&gt;Building Effective Agents&lt;/a&gt; — Anthropic (December 2024)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.swebench.com/" rel="noopener noreferrer"&gt;SWE-bench&lt;/a&gt; — Princeton NLP (ICLR 2024)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.tweag.io/blog/2025-10-23-agentic-coding-intro/" rel="noopener noreferrer"&gt;Tweag — "Introduction to Agentic Coding" (2025)&lt;/a&gt;; &lt;a href="https://tweag.github.io/agentic-coding-handbook/" rel="noopener noreferrer"&gt;Agentic Coding Handbook&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.mordorintelligence.com/industry-reports/artificial-intelligence-code-tools-market" rel="noopener noreferrer"&gt;Mordor Intelligence — AI Code Tools Market Report (2025)&lt;/a&gt; — US$ 7.37B market size&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.fool.com/earnings/call-transcripts/2026/02/04/alphabet-googl-q4-2025-earnings-call-transcript/" rel="noopener noreferrer"&gt;Alphabet Q4 2025 Earnings Call (February 2026)&lt;/a&gt; — CFO Anat Ashkenazi: "about 50% of our code is written by coding agents"&lt;/li&gt;
&lt;li&gt;&lt;a href="https://survey.stackoverflow.co/2025/" rel="noopener noreferrer"&gt;Stack Overflow Developer Survey 2025&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; — Anthropic documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.blog/ai-and-ml/github-copilot/copilot-faster-smarter-and-built-for-how-you-work-now/" rel="noopener noreferrer"&gt;GitHub Blog — "Copilot: Faster, smarter, and built for how you work now" (October 2025)&lt;/a&gt; — 20M+ users, 1.2M PRs/month&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/paul-gauthier/aider" rel="noopener noreferrer"&gt;Aider&lt;/a&gt; — GitHub repository&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/nichochar/opencode" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt; — GitHub repository&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openai/codex" rel="noopener noreferrer"&gt;OpenAI Codex CLI&lt;/a&gt; — GitHub repository&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/" rel="noopener noreferrer"&gt;Menlo Ventures — "2025: The State of Generative AI in the Enterprise" (December 2025)&lt;/a&gt; — Claude: 54% coding market share&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR — "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity" (2025)&lt;/a&gt;; &lt;a href="https://arxiv.org/abs/2507.09089" rel="noopener noreferrer"&gt;arXiv&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/" rel="noopener noreferrer"&gt;Veracode — "GenAI and Code Security: What You Need to Know" (2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027" rel="noopener noreferrer"&gt;Gartner — "Over 40% of Agentic AI Projects Will Be Canceled by End of 2027" (June 2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dora.dev/" rel="noopener noreferrer"&gt;DORA State of DevOps Report 2025&lt;/a&gt; — Google&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>braziliandevs</category>
    </item>
    <item>
      <title>Fakt: Automating the Fake-over-mock pattern</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Wed, 25 Feb 2026 12:40:19 +0000</pubDate>
      <link>https://forem.com/rsicarelli/fakt-automating-the-fake-over-mock-pattern-amh</link>
      <guid>https://forem.com/rsicarelli/fakt-automating-the-fake-over-mock-pattern-amh</guid>
      <description>&lt;p&gt;Kotlin testing has a problem that gets worse the more successful your project becomes.&lt;/p&gt;

&lt;p&gt;Manual test fakes don't scale—each interface requires 60-80 lines of boilerplate that silently drifts from reality during refactoring. Runtime mocking frameworks (MockK, Mockito) solve the boilerplate but introduce severe performance penalties and don't work on Kotlin/Native or WebAssembly. KSP-based tools promised compile-time generation, but Kotlin 2.0 broke them all.&lt;/p&gt;

&lt;p&gt;Fakt is a compiler plugin that generates production-quality fakes through deep integration with Kotlin's FIR and IR compilation phases—the same extension points used by &lt;a href="https://github.com/ZacSweers/metro" rel="noopener noreferrer"&gt;Metro&lt;/a&gt;, a production DI framework from Zac Sweers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Fakt Does
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/rsicarelli/fakt" rel="noopener noreferrer"&gt;https://github.com/rsicarelli/fakt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fakt reduces fake boilerplate to an annotation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Fake&lt;/span&gt;
&lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;AnalyticsService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;track&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;flush&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At compile time, Fakt generates a complete fake implementation. You use it through a type-safe factory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;fake&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;fakeAnalyticsService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;track&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Tracked: $event"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;flush&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;success&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Use in tests&lt;/span&gt;
&lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;track&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"user_signup"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;flush&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;// Verify interactions (thread-safe StateFlow)&lt;/span&gt;
&lt;span class="nf"&gt;assertEquals&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trackCalls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;assertEquals&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;flushCalls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it ✨&lt;/p&gt;

&lt;h2&gt;
  
  
  The Testing Problem
&lt;/h2&gt;

&lt;p&gt;Consider a simple interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;AnalyticsService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;track&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;flush&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A proper, production-quality fake requires ~40-60 lines of boilerplate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Typical handwritten fake — error-prone, tedious&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FakeAnalyticsService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;trackBehavior&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;)?&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;flushBehavior&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;)?&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;AnalyticsService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="py"&gt;_trackCalls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mutableListOf&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;()&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;trackCalls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_trackCalls&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="py"&gt;_flushCalls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mutableListOf&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;()&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;flushCalls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_flushCalls&lt;/span&gt;

    &lt;span class="c1"&gt;// Interface implementation&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;track&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_trackCalls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;trackBehavior&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;flush&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_flushCalls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;flushBehavior&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="nc"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;success&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problems: N methods require ~10N lines. Interface changes don't break unused fakes—they silently drift. For 50 interfaces, this means thousands of lines of brittle boilerplate.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Mock Tax
&lt;/h3&gt;

&lt;p&gt;Runtime mocking frameworks solve the boilerplate but pay a different cost. Kotlin classes are &lt;code&gt;final&lt;/code&gt; by default, so MockK and Mockito resort to bytecode instrumentation. Independent benchmarks&lt;sup id="fnref1"&gt;1&lt;/sup&gt; quantify the penalty:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mocking Pattern&lt;/th&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Comparison&lt;/th&gt;
&lt;th&gt;Verified Penalty&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;mockkObject&lt;/code&gt; (Singletons)&lt;/td&gt;
&lt;td&gt;MockK&lt;/td&gt;
&lt;td&gt;vs. Dependency Injection&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1,391x slower&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;mockkStatic&lt;/code&gt; (Top-level functions)&lt;/td&gt;
&lt;td&gt;MockK&lt;/td&gt;
&lt;td&gt;vs. Interface-based DI&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;146x slower&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;verify { ... }&lt;/code&gt; (Interaction verification)&lt;/td&gt;
&lt;td&gt;MockK&lt;/td&gt;
&lt;td&gt;vs. State-based testing&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;47x slower&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;relaxed&lt;/code&gt; mocks (Unstubbed calls)&lt;/td&gt;
&lt;td&gt;MockK&lt;/td&gt;
&lt;td&gt;vs. Strict mocks&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.7x slower&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mock-maker-inline&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Mockito&lt;/td&gt;
&lt;td&gt;vs. &lt;code&gt;all-open&lt;/code&gt; plugin&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;2.7-3x slower&lt;/strong&gt;&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;sup id="fnref3"&gt;3&lt;/sup&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A production test suite with 2,668 tests experienced a 2.7x slowdown (7.3s → 20.0s) when using &lt;code&gt;mock-maker-inline&lt;/code&gt;&lt;sup id="fnref3"&gt;3&lt;/sup&gt;. For large projects, the mock tax accumulates to 40% slower test suites&lt;sup id="fnref1"&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The KMP Dead End
&lt;/h3&gt;

&lt;p&gt;Runtime mocking relies on JVM-specific features: reflection, bytecode instrumentation, dynamic proxies. Kotlin/Native compiles to machine code and Kotlin/Wasm to WebAssembly bytecode; in either case there is no JVM. MockK and Mockito cannot run in &lt;code&gt;commonTest&lt;/code&gt; source sets targeting Native or Wasm&lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;sup id="fnref5"&gt;5&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The community attempted KSP-based solutions, but Kotlin 2.0's K2 compiler broke them. The StreetComplete app (10,000+ tests) was forced to migrate mid-project&lt;sup id="fnref6"&gt;6&lt;/sup&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Compiler Plugins Work
&lt;/h2&gt;

&lt;p&gt;KSP-based tools (Mockative, MocKMP) operated at the symbol level—after type resolution, with limited access to the type system. When K2 landed, they broke. Compiler plugins operate during compilation, with full access to FIR and IR. They survive Kotlin version updates.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;KSP&lt;/th&gt;
&lt;th&gt;Compiler Plugin&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Access&lt;/td&gt;
&lt;td&gt;After type resolution&lt;/td&gt;
&lt;td&gt;During compilation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Type System&lt;/td&gt;
&lt;td&gt;Read-only symbols&lt;/td&gt;
&lt;td&gt;Full manipulation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Fakt uses a two-phase FIR → IR architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌──────────────────────────────────────────────────────┐
│  PHASE 1: FIR (Frontend IR)                          │
│  • Detects @Fake annotations                         │
│  • Validates interface structure                     │
│  • Full type system access                           │
└──────────────────────────────────────────────────────┘
                         ↓
┌──────────────────────────────────────────────────────┐
│  PHASE 2: IR (Intermediate Representation)           │
│  • Analyzes interface methods and properties         │
│  • Generates readable .kt source files               │
│  • Thread-safe StateFlow call history                │
└──────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the same pattern used by &lt;a href="https://github.com/ZacSweers/metro" rel="noopener noreferrer"&gt;Metro&lt;/a&gt;, Zac Sweers' DI compiler plugin. Metro's architecture has proven stable across Kotlin 1.9, 2.0, and 2.1.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Fakes Over Mocks
&lt;/h2&gt;

&lt;p&gt;Beyond performance, fakes represent a different testing philosophy. Martin Fowler's "Mocks Aren't Stubs"&lt;sup id="fnref7"&gt;7&lt;/sup&gt; describes two schools: state-based testing (verify outcomes) and interaction-based testing (verify method calls).&lt;/p&gt;

&lt;p&gt;The problem with interaction-based tests: they couple to implementation details&lt;sup id="fnref8"&gt;8&lt;/sup&gt;. Refactor a method signature without changing behavior, and mock-based tests break. Google's Testing Blog defines resilience as a critical test quality—"a test shouldn't fail if the code under test isn't defective"&lt;sup id="fnref9"&gt;9&lt;/sup&gt;. Mock-based tests often violate this.&lt;/p&gt;

&lt;p&gt;Google's "Now in Android" app makes this explicit&lt;sup id="fnref10"&gt;10&lt;/sup&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"Don't use mocking frameworks. Instead, use fakes."&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The goal: "less brittle tests that may exercise more production code, instead of just verifying specific calls against mocks"&lt;sup id="fnref11"&gt;11&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Kotlin's async testing stack—&lt;code&gt;runTest&lt;/code&gt;, &lt;code&gt;TestDispatcher&lt;/code&gt;, Turbine&lt;sup id="fnref12"&gt;12&lt;/sup&gt;—is inherently state-based. Turbine's &lt;code&gt;awaitItem()&lt;/code&gt; verifies emitted values, not method calls. The natural data source for this stack is a fake with &lt;code&gt;MutableStateFlow&lt;/code&gt; backing. Fakt automates this pattern.&lt;/p&gt;
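
&lt;p&gt;A minimal sketch of that pattern, with a hypothetical &lt;code&gt;UserRepository&lt;/code&gt; and a hand-written fake backed by &lt;code&gt;MutableStateFlow&lt;/code&gt; (this is not Fakt-generated code; it assumes &lt;code&gt;kotlinx-coroutines-test&lt;/code&gt; and Turbine on the test classpath):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical example: state-based testing with a StateFlow-backed fake
data class User(val name: String)

interface UserRepository {
    val users: Flow&amp;lt;List&amp;lt;User&amp;gt;&amp;gt;
}

class FakeUserRepository : UserRepository {
    private val state = MutableStateFlow(emptyList&amp;lt;User&amp;gt;())
    override val users: Flow&amp;lt;List&amp;lt;User&amp;gt;&amp;gt; = state

    fun emit(users: List&amp;lt;User&amp;gt;) { state.value = users }
}

@Test
fun emitsUsersAsStateChanges() = runTest {
    val fake = FakeUserRepository()
    fake.users.test {                        // Turbine
        assertEquals(emptyList&amp;lt;User&amp;gt;(), awaitItem())
        fake.emit(listOf(User("Ana")))
        assertEquals(1, awaitItem().size)    // verify the outcome, not method calls
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The test asserts on emitted values; refactoring the repository's internals never breaks it, only changed behavior does.&lt;/p&gt;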

&lt;h2&gt;
  
  
  Practical Guidance
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Fakes vs. Mocks: Quick Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;MockK/Mockito&lt;/th&gt;
&lt;th&gt;Fakt&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;KMP Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Limited (JVM only)&lt;/td&gt;
&lt;td&gt;Universal (all targets)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compile-time Safety&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Runtime Overhead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Heavy (reflection)&lt;/td&gt;
&lt;td&gt;Zero&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Type Safety&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Partial (&lt;code&gt;any()&lt;/code&gt; matchers)&lt;/td&gt;
&lt;td&gt;Complete&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning Curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Steep (complex DSL)&lt;/td&gt;
&lt;td&gt;Gentle (typed functions)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Call History&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manual (&lt;code&gt;verify { }&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Built-in (StateFlow)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Thread Safety&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not guaranteed&lt;/td&gt;
&lt;td&gt;StateFlow-based&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Debuggability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Reflection (opaque)&lt;/td&gt;
&lt;td&gt;Generated &lt;code&gt;.kt&lt;/code&gt; files&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Choosing the Right Tool
&lt;/h3&gt;

&lt;p&gt;Fakt and mocking libraries solve overlapping but distinct problems. Choosing between them depends on your constraints and testing needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fakt works best when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You've already chosen fakes over mocks. If you understand the state-based testing philosophy and prefer testing outcomes over verifying interactions, Fakt automates what you'd otherwise write by hand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You only use mocks for convenience. Many developers reach for mocking frameworks not for &lt;code&gt;verify { }&lt;/code&gt; features, but simply because writing manual fakes is tedious. Fakt gives you the factory convenience without the mock overhead—generated fakes are plain Kotlin classes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You're building for Kotlin Multiplatform. Fakt generates plain Kotlin that compiles on JVM, Native, and WebAssembly—no reflection required. This applies to any source set, not just &lt;code&gt;commonTest&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You value exercising production code in tests. Fakt-generated fakes are real implementations your tests compile against, catching interface drift at build time rather than runtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tests run concurrently. Fakt tracks call history with StateFlow, which is thread-safe by design. Manual fakes with &lt;code&gt;var count = 0&lt;/code&gt; break under parallel execution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Mocking libraries (Mokkery, MockK) work best when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need spy behavior. Partial mocking of real implementations—calling real methods while intercepting others—is something only mocking frameworks can do. Fakt generates new implementations; it doesn't wrap existing ones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You're mocking third-party classes without interfaces. If a library exposes final classes with no interface to program against, mocking frameworks can instrument the bytecode. Fakt requires an interface to annotate.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Neither tool replaces contract testing.&lt;/strong&gt; For third-party HTTP APIs, use WireMock or Pact. Hand-written fakes for external services drift from reality without contract validation—they create dangerous illusions of fidelity that break in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Works Cited
&lt;/h2&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Benchmarking Mockk — Avoid these patterns for fast unit tests. Kevin Block. &lt;a href="https://medium.com/@_kevinb/benchmarking-mockk-avoid-these-patterns-for-fast-unit-tests-220fc225da55" rel="noopener noreferrer"&gt;https://medium.com/@_kevinb/benchmarking-mockk-avoid-these-patterns-for-fast-unit-tests-220fc225da55&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Effective migration to Kotlin on Android. Aris Papadopoulos. &lt;a href="https://medium.com/android-news/effective-migration-to-kotlin-on-android-cfb92bfaa49b" rel="noopener noreferrer"&gt;https://medium.com/android-news/effective-migration-to-kotlin-on-android-cfb92bfaa49b&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Mocking Kotlin classes with Mockito — the fast way. Brais Gabín Moreira. &lt;a href="https://medium.com/21buttons-tech/mocking-kotlin-classes-with-mockito-the-fast-way-631824edd5ba" rel="noopener noreferrer"&gt;https://medium.com/21buttons-tech/mocking-kotlin-classes-with-mockito-the-fast-way-631824edd5ba&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Did someone try to use Mockk on KMM project. Kotlin Slack. &lt;a href="https://slack-chats.kotlinlang.org/t/10131532/did-someone-try-to-use-mockk-on-kmm-project" rel="noopener noreferrer"&gt;https://slack-chats.kotlinlang.org/t/10131532/did-someone-try-to-use-mockk-on-kmm-project&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Mock common tests in kotlin using multiplatform. Stack Overflow. &lt;a href="https://stackoverflow.com/questions/65491916/mock-common-tests-in-kotlin-using-multiplatform" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/65491916/mock-common-tests-in-kotlin-using-multiplatform&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;Mocking in Kotlin Multiplatform: KSP vs Compiler Plugins. Martin Hristev. &lt;a href="https://medium.com/@mhristev/mocking-in-kotlin-multiplatform-ksp-vs-compiler-plugins-4424751b83d7" rel="noopener noreferrer"&gt;https://medium.com/@mhristev/mocking-in-kotlin-multiplatform-ksp-vs-compiler-plugins-4424751b83d7&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;Mocks Aren't Stubs. Martin Fowler. &lt;a href="https://martinfowler.com/articles/mocksArentStubs.html" rel="noopener noreferrer"&gt;https://martinfowler.com/articles/mocksArentStubs.html&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;Unit Testing — Why must you mock me? Craig Walker. &lt;a href="https://medium.com/@walkercp/unit-testing-why-must-you-mock-me-69293508dd13" rel="noopener noreferrer"&gt;https://medium.com/@walkercp/unit-testing-why-must-you-mock-me-69293508dd13&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;Testing on the Toilet: Effective Testing. Google Testing Blog. &lt;a href="https://testing.googleblog.com/2014/05/testing-on-toilet-effective-testing.html" rel="noopener noreferrer"&gt;https://testing.googleblog.com/2014/05/testing-on-toilet-effective-testing.html&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn10"&gt;
&lt;p&gt;Testing strategy and how to test. Now in Android Wiki. &lt;a href="https://github.com/android/nowinandroid/wiki/Testing-strategy-and-how-to-test" rel="noopener noreferrer"&gt;https://github.com/android/nowinandroid/wiki/Testing-strategy-and-how-to-test&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn11"&gt;
&lt;p&gt;android/nowinandroid: A fully functional Android app built entirely with Kotlin and Jetpack Compose. GitHub. &lt;a href="https://github.com/android/nowinandroid" rel="noopener noreferrer"&gt;https://github.com/android/nowinandroid&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn12"&gt;
&lt;p&gt;Flow testing with Turbine. Cash App Code Blog. &lt;a href="https://code.cash.app/flow-testing-with-turbine" rel="noopener noreferrer"&gt;https://code.cash.app/flow-testing-with-turbine&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>kotlin</category>
      <category>testing</category>
      <category>automation</category>
      <category>kmp</category>
    </item>
    <item>
      <title>The Hidden Cost of Default Hierarchy Template in Kotlin Multiplatform</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Sun, 02 Nov 2025 18:43:50 +0000</pubDate>
      <link>https://forem.com/rsicarelli/the-hidden-cost-of-default-hierarchy-templates-in-kotlin-multiplatform-256a</link>
      <guid>https://forem.com/rsicarelli/the-hidden-cost-of-default-hierarchy-templates-in-kotlin-multiplatform-256a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The Default Hierarchy Template in KMP projects is a great way to reduce boilerplate code and start working quickly. However, it carried an unexpected cost in our large-scale codebases. A project with 70+ KMP modules targeting Android, iOS, and JVM saw sync times balloon from 15 minutes to over an hour. More critically, an enterprise project with 180+ modules became completely unusable, crashing after 10+ hours of attempting to sync.&lt;/p&gt;

&lt;p&gt;This wasn't a misconfiguration or a rogue plugin. The culprit? A single, seemingly innocent line of code introduced with Kotlin 1.9.20:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;applyDefaultHierarchyTemplate&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we dive into the solution, let's understand what's happening under the hood. What are hierarchy templates, and why does the default one create such a performance bottleneck?&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Hierarchy Templates in Kotlin Multiplatform?
&lt;/h2&gt;

&lt;p&gt;At its core, Kotlin Multiplatform is built on an elegant but complex system of &lt;strong&gt;source sets&lt;/strong&gt;—logical collections of code that share common dependencies and compilation settings.&lt;/p&gt;

&lt;p&gt;When you create a KMP project, you declare &lt;strong&gt;targets&lt;/strong&gt; (the platforms you're compiling for) and &lt;strong&gt;source sets&lt;/strong&gt; (where your code lives):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;androidTarget&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;jvm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosX64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosSimulatorArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each target automatically gets its own source set (&lt;code&gt;androidMain&lt;/code&gt;, &lt;code&gt;jvmMain&lt;/code&gt;, &lt;code&gt;iosArm64Main&lt;/code&gt;), where you can write platform-specific code with access to platform APIs. But the real power of KMP lies in &lt;code&gt;commonMain&lt;/code&gt;—code written here is shared across &lt;em&gt;all&lt;/em&gt; your targets.&lt;/p&gt;

&lt;h3&gt;
  
  
  The dependsOn Relationship: Connecting the Dots
&lt;/h3&gt;

&lt;p&gt;Source sets form a hierarchy through the &lt;code&gt;dependsOn&lt;/code&gt; relationship. When &lt;code&gt;iosArm64Main&lt;/code&gt; depends on &lt;code&gt;commonMain&lt;/code&gt;, it can access all the code written in the common source set. This relationship creates a directed graph that determines:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Code visibility&lt;/strong&gt; - Which declarations are accessible where&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency propagation&lt;/strong&gt; - Libraries added to &lt;code&gt;commonMain&lt;/code&gt; flow down to all dependent source sets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API safety&lt;/strong&gt; - The compiler ensures you only use APIs available on all platforms a source set compiles to&lt;/li&gt;
&lt;/ol&gt;
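
&lt;p&gt;In practice, this means a dependency declared once in &lt;code&gt;commonMain&lt;/code&gt; becomes visible everywhere below it in the graph. A hedged Gradle sketch (the coordinates are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;kotlin {
    sourceSets {
        commonMain.dependencies {
            // Propagates to androidMain, jvmMain, iosMain, and every leaf source set
            implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.8.1")
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;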

&lt;h3&gt;
  
  
  Intermediate Source Sets: The Middle Ground
&lt;/h3&gt;

&lt;p&gt;Here's where it gets interesting. What if you want to share code between &lt;em&gt;some&lt;/em&gt; platforms, but not all?&lt;/p&gt;

&lt;p&gt;Imagine you have iOS-specific logic that works across all iOS variants (arm64 for devices, x64 for Intel simulators, simulatorArm64 for Apple Silicon simulators). You don't want to duplicate this code in three places, but you also can't put it in &lt;code&gt;commonMain&lt;/code&gt; because it uses iOS-specific APIs.&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;intermediate source sets&lt;/strong&gt;. An &lt;code&gt;iosMain&lt;/code&gt; source set sits between &lt;code&gt;commonMain&lt;/code&gt; and your platform-specific iOS source sets, allowing you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access iOS-specific APIs (like Foundation framework)&lt;/li&gt;
&lt;li&gt;Share that code across all iOS targets&lt;/li&gt;
&lt;li&gt;Keep it separate from Android and JVM code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This hierarchy might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commonMain
├── androidMain
├── jvmMain
└── iosMain (intermediate)
    ├── iosArm64Main
    ├── iosX64Main
    └── iosSimulatorArm64Main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What Hierarchy Templates Do
&lt;/h3&gt;

&lt;p&gt;Manually creating intermediate source sets and wiring up all the &lt;code&gt;dependsOn&lt;/code&gt; relationships was tedious and error-prone. You'd write something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;iosMain&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;creating&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;dependsOn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;commonMain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;iosArm64Main&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;getting&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;dependsOn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;iosMain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// ... repeat for each iOS target&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Hierarchy templates&lt;/strong&gt; automate this boilerplate. They're predefined blueprints that analyze your declared targets and automatically create the appropriate intermediate source sets with the correct dependency relationships.&lt;/p&gt;

&lt;p&gt;Starting with Kotlin 1.9.20, the default hierarchy template became active automatically, eliminating the need to manually configure iOS source sets. Sounds great, right?&lt;/p&gt;

&lt;p&gt;It is—until it isn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Default Hierarchy Template in Action
&lt;/h2&gt;

&lt;p&gt;To understand the performance problem, we need to see what the default template actually &lt;em&gt;does&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When you call &lt;code&gt;applyDefaultHierarchyTemplate()&lt;/code&gt; (or let it apply automatically), the Kotlin Gradle Plugin analyzes your targets and creates intermediate source sets based on a comprehensive, predefined structure designed to support &lt;em&gt;all possible&lt;/em&gt; Kotlin Multiplatform targets.&lt;/p&gt;

&lt;p&gt;Let's consider a common real-world scenario. Your project targets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;applyDefaultHierarchyTemplate&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="nf"&gt;androidTarget&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;jvm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosX64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosSimulatorArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might expect a simple hierarchy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commonMain
├── androidMain
├── jvmMain
└── iosMain
    ├── iosArm64Main
    ├── iosX64Main
    └── iosSimulatorArm64Main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But here's what the default template &lt;em&gt;actually&lt;/em&gt; creates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commonMain
├── androidMain
├── jvmMain
└── nativeMain (shared by ALL native targets)
    └── appleMain (shared by ALL Apple targets)
        └── iosMain (shared by iOS targets)
            ├── iosArm64Main
            ├── iosX64Main
            └── iosSimulatorArm64Main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the extra layers: &lt;code&gt;nativeMain&lt;/code&gt; and &lt;code&gt;appleMain&lt;/code&gt;. The template creates these intermediate source sets (and their corresponding &lt;code&gt;src/nativeMain&lt;/code&gt; and &lt;code&gt;src/appleMain&lt;/code&gt; directories) to enable code sharing in scenarios like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;nativeMain&lt;/code&gt;: Share code across &lt;em&gt;all&lt;/em&gt; Kotlin/Native targets (iOS, macOS, Linux, Windows Native, watchOS, tvOS, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;appleMain&lt;/code&gt;: Share code across &lt;em&gt;all&lt;/em&gt; Apple platforms (iOS, macOS, watchOS, tvOS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The design philosophy is sound. The default template optimizes for the most comprehensive code-sharing scenario. If you later add &lt;code&gt;macosArm64()&lt;/code&gt; to your targets, it will automatically slot into the existing hierarchy under &lt;code&gt;appleMain&lt;/code&gt;, and any code you've written there will just work.&lt;/p&gt;

&lt;p&gt;This is "convention over configuration" at its finest—the template handles the complexity for you.&lt;/p&gt;

&lt;p&gt;But here's the critical question: What if you're never going to target macOS, Linux, or tvOS? What if your "native" targets are only iOS?&lt;/p&gt;

&lt;p&gt;In an iOS-only project, you likely have no code in &lt;code&gt;nativeMain&lt;/code&gt; or &lt;code&gt;appleMain&lt;/code&gt;; these directories sit empty in your project structure. Yet they still generate build tasks and configuration overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost: A Task Explosion
&lt;/h2&gt;

&lt;p&gt;Source sets aren't just a conceptual model—they have real, tangible consequences in your build system. Every source set in your hierarchy triggers the creation of multiple Gradle tasks.&lt;/p&gt;

&lt;p&gt;When the Kotlin Gradle Plugin processes your source set hierarchy, it generates tasks for each source set. The pattern is predictable and measurable.&lt;/p&gt;
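
&lt;p&gt;You can inspect this yourself with standard Gradle commands (a hedged sketch; the &lt;code&gt;:shared&lt;/code&gt; module path is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List every task the module exposes, then count them
./gradlew :shared:tasks --all | wc -l

# Spot the tasks created for the template's intermediate source sets
./gradlew :shared:tasks --all | grep -iE "nativeMain|appleMain"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;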

&lt;p&gt;The results were striking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimized template&lt;/strong&gt;: 158 tasks per module&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default template&lt;/strong&gt;: 166 tasks per module&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Difference&lt;/strong&gt;: &lt;strong&gt;8 extra tasks per module&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Extrapolate to our production codebase with 70 modules, and you're looking at &lt;strong&gt;560 wasteful tasks&lt;/strong&gt;. In our enterprise codebase with 180+ modules, we have "only" &lt;strong&gt;1,440 wasteful tasks&lt;/strong&gt; 🫣.&lt;/p&gt;

&lt;p&gt;For every intermediate source set (&lt;code&gt;nativeMain&lt;/code&gt;, &lt;code&gt;appleMain&lt;/code&gt;), Gradle creates a family of tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;compile&amp;lt;SourceSet&amp;gt;KotlinMetadata&lt;/code&gt; - Compiles the source set into platform-agnostic Kotlin IR (Intermediate Representation) stored in a &lt;code&gt;.klib&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata&amp;lt;SourceSet&amp;gt;Classes&lt;/code&gt; - Assembles compilation outputs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata&amp;lt;SourceSet&amp;gt;ProcessResources&lt;/code&gt; - Processes resources for the source set&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;transform&amp;lt;SourceSet&amp;gt;DependenciesMetadata&lt;/code&gt; - Generates serialized dependency metadata for IDE tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Task Deep Dive: The Metadata Compilation Tasks
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;compileNativeMainKotlinMetadata&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;compileAppleMainKotlinMetadata&lt;/code&gt;&lt;/strong&gt; are responsible for compiling the (conceptual) &lt;code&gt;nativeMain&lt;/code&gt; and &lt;code&gt;appleMain&lt;/code&gt; source sets into Kotlin metadata.&lt;/p&gt;

&lt;p&gt;Here's the problem: &lt;strong&gt;These source sets have no code.&lt;/strong&gt; The &lt;code&gt;src/nativeMain/kotlin&lt;/code&gt; and &lt;code&gt;src/appleMain/kotlin&lt;/code&gt; directories exist but sit empty because we're not sharing any code at those levels. Yet the Kotlin compiler still runs, processing an empty source set, generating an (essentially empty) &lt;code&gt;.klib&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;The source sets exist in the dependency graph because the template created them. The &lt;code&gt;iosArm64Main&lt;/code&gt; compilation needs to know what APIs are available from &lt;code&gt;appleMain&lt;/code&gt;, which needs to know what's available from &lt;code&gt;nativeMain&lt;/code&gt;. Even if those source sets are empty, the metadata must be compiled to satisfy the dependency chain.&lt;/p&gt;

&lt;p&gt;Think of it like compiling an empty &lt;code&gt;.kt&lt;/code&gt; file—the compiler still has to initialize, parse (nothing), run analysis passes, and write output. The overhead isn't zero.&lt;/p&gt;

&lt;h3&gt;
  
  
  Task Deep Dive: The IDE Transform Tasks
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;transformNativeMainCInteropDependenciesMetadataForIde&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;transformAppleMainCInteropDependenciesMetadataForIde&lt;/code&gt;&lt;/strong&gt; are even more insidious.&lt;/p&gt;

&lt;p&gt;If you have tests under &lt;code&gt;iosTest&lt;/code&gt; you will get an extra &lt;strong&gt;&lt;code&gt;transformNativeTestCInteropDependenciesMetadataForIde&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;transformAppleTestCInteropDependenciesMetadataForIde&lt;/code&gt;&lt;/strong&gt; as well.&lt;/p&gt;

&lt;p&gt;These tasks exist specifically for IDE support. When you sync your project in Android Studio or IntelliJ IDEA, these tasks run to process C-interop dependencies (Kotlin/Native bindings to C/Objective-C libraries) and make them understandable to the IDE's code analysis engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The irony?&lt;/strong&gt; Our project has no C-interop dependencies in &lt;code&gt;nativeMain&lt;/code&gt; or &lt;code&gt;appleMain&lt;/code&gt; because we never put any code in those source sets. We're transforming... nothing.&lt;/p&gt;

&lt;p&gt;But the task still runs. It still needs to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Resolve the dependency graph for the source set&lt;/li&gt;
&lt;li&gt;Check for C-interop &lt;code&gt;.klib&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;Process (empty) results&lt;/li&gt;
&lt;li&gt;Write metadata for the IDE&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These tasks created real bottlenecks in our workflow. The 70-module project went from 15-minute syncs to over an hour and twenty minutes. The 180-module project became completely unusable, with syncs crashing consistently after 10+ hours.&lt;/p&gt;

&lt;p&gt;After implementing the fix, we couldn't reproduce the exact conditions to capture detailed metrics—Gradle's caching and environmental factors made this difficult. But the aggregate impact was consistent across our entire team, and the theoretical analysis aligned with reality: eliminating 1,440 wasteful tasks restored functionality to the broken project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Custom Optimized Hierarchy
&lt;/h2&gt;

&lt;p&gt;Once we understood the problem, the solution became clear: &lt;strong&gt;build exactly the hierarchy we need, no more, no less.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kotlin provides the &lt;code&gt;applyHierarchyTemplate()&lt;/code&gt; DSL for precisely this purpose—defining custom hierarchies that match your project's actual structure.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Optimized Hierarchy
&lt;/h3&gt;

&lt;p&gt;Instead of the default template's deep, general-purpose hierarchy, we created a minimal, flat structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;applyHierarchyTemplate&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;common&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nf"&gt;withAndroidTarget&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;withJvm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ios"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nf"&gt;withIosArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="nf"&gt;withIosX64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="nf"&gt;withIosSimulatorArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;androidTarget&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;jvm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosX64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;iosSimulatorArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the hierarchy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commonMain
├── androidMain
├── jvmMain
└── iosMain
    ├── iosArm64Main
    ├── iosX64Main
    └── iosSimulatorArm64Main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice what's missing: &lt;code&gt;nativeMain&lt;/code&gt; and &lt;code&gt;appleMain&lt;/code&gt;. We've collapsed the hierarchy to only include the intermediate source sets we actually use.&lt;/p&gt;

&lt;p&gt;This configuration change transformed our development experience. The 70-module project saw sync times improve from roughly an hour and twenty minutes to about 14 minutes. The 180-module project went from completely broken to functional. The improvement was universal across our team ✨.&lt;/p&gt;

&lt;p&gt;By eliminating unused intermediate source sets, we removed the overhead that had been silently compounding across our codebase.&lt;/p&gt;
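
&lt;p&gt;The retained &lt;code&gt;iosMain&lt;/code&gt; source set still does its job: code that needs Apple APIs but is shared across all three iOS targets lives there. A hedged sketch, assuming the default Foundation bindings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;// src/iosMain/kotlin/Uuid.kt
import platform.Foundation.NSUUID

// Compiled once as shared metadata, available to iosArm64,
// iosX64, and iosSimulatorArm64 without duplication.
fun randomUuid(): String = NSUUID().UUIDString
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;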

&lt;h2&gt;
  
  
  A Note on Reproducing This Issue
&lt;/h2&gt;

&lt;p&gt;After implementing the fix, I attempted to reproduce the original problem to capture more detailed metrics. Surprisingly, the severe degradation didn't reoccur—likely due to Gradle's aggressive caching and configuration state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're considering this optimization:&lt;/strong&gt; You may not see dramatic improvements immediately after switching, especially if Gradle has already cached artifacts from your current configuration. The benefits become most apparent on clean syncs or when onboarding new team members. The task count reduction is objective—whether it becomes a bottleneck depends on your specific project context and scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use Default vs Custom Hierarchy
&lt;/h2&gt;

&lt;p&gt;The default hierarchy template isn't inherently bad—it's solving for a different use case than ours. Understanding when to use each approach is critical.&lt;/p&gt;

&lt;p&gt;If your project genuinely targets macOS, Linux, Windows, iOS, and watchOS, the &lt;code&gt;nativeMain&lt;/code&gt; source set becomes valuable. You &lt;em&gt;want&lt;/em&gt; to share native-specific code across all these platforms, so the Default Hierarchy is gold here.&lt;/p&gt;

&lt;p&gt;On the other hand, if you're starting a new project and not sure if you'll add macOS support in six months, the default template provides a stable foundation that scales as you add targets.&lt;/p&gt;

&lt;p&gt;However, if "native" means exclusively iOS in your project, &lt;code&gt;nativeMain&lt;/code&gt; and &lt;code&gt;appleMain&lt;/code&gt; are dead weight. The task multiplication effect becomes severe at scale, as it adds 8-10 tasks per module.&lt;/p&gt;

&lt;p&gt;So, when should you use the Default Hierarchy Template? Sorry, but "it depends" 🫠.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The default hierarchy template in Kotlin Multiplatform is a powerful tool that embodies the "convention over configuration" philosophy. For many projects, it's the right choice—it simplifies setup, reduces boilerplate, and scales effortlessly as you add targets.&lt;/p&gt;

&lt;p&gt;But as our experience demonstrates, &lt;strong&gt;the default optimizes for maximum flexibility, not maximum performance.&lt;/strong&gt; When you know your platform constraints (iOS-only native targets) and operate at scale (70+ modules), that flexibility becomes a liability. You're paying the build-time cost of supporting platforms you'll never target.&lt;/p&gt;

&lt;p&gt;The transformation we experienced—from unusable to functional, from frustrating to manageable—came from a simple realization: &lt;strong&gt;we don't need a hierarchy designed for the entire Kotlin Multiplatform universe. We need one designed for our project.&lt;/strong&gt; The &lt;code&gt;applyHierarchyTemplate()&lt;/code&gt; DSL gave us the precision to define exactly that, eliminating hundreds of wasteful tasks and restoring our development velocity.&lt;/p&gt;

&lt;p&gt;That's it! ✌️ I hope you can apply this to your project today and give your day a performance boost!&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>kmp</category>
      <category>mobile</category>
    </item>
    <item>
      <title>KMP-102 - Modularization in KMP</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Fri, 07 Mar 2025 14:07:13 +0000</pubDate>
      <link>https://forem.com/rsicarelli/kmp-102-modularizacao-no-kmp-4oe5</link>
      <guid>https://forem.com/rsicarelli/kmp-102-modularizacao-no-kmp-4oe5</guid>
      <description>&lt;p&gt;No último artigo, entramos em detalhes e aprendemos sobre as peculiaridades do código exportado nos headers do Objective-C, assim como as boas práticas quanto ao que exportar.&lt;/p&gt;

&lt;p&gt;In this article, we'll take a closer look at how modularization behaves in KMP projects, and how to do it efficiently and in an organized way.&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;What is modularization?&lt;/li&gt;
&lt;li&gt;Modularization in KMP&lt;/li&gt;
&lt;li&gt;Paving the way for UI flexibility&lt;/li&gt;
&lt;li&gt;
Exporting to the XCFramework

&lt;ul&gt;
&lt;li&gt;Scenario 1: shared KMP "backend", flexible "frontend"&lt;/li&gt;
&lt;li&gt;Scenario 2: hybrid, migrating to Compose Multiplatform&lt;/li&gt;
&lt;li&gt;Scenario 3: 100% Compose Multiplatform&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Exploring the benefits of modularization in KMP&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  What is modularization?
&lt;/h2&gt;

&lt;p&gt;I won't dwell on this topic, since we already covered it in &lt;a href="https://dev.to/rsicarelli/android-plataforma-parte-1-modularizacao-2016"&gt;Android Plataforma - Parte 1: Modularização&lt;/a&gt;. If you're not sure what modularization means in Gradle projects, I recommend pausing to read that article first.&lt;/p&gt;

&lt;p&gt;Em resumo, modularização é a prática de dividir um projeto em módulos menores e independentes, que podem ser desenvolvidos, testados e mantidos separadamente.&lt;/p&gt;

&lt;p&gt;Essa prática é crucial para escalar projetos KMP, já que a modularização impacta diretamente na autonomia e independência dos times, evitando que um time dependa do outro para realizar suas tarefas.&lt;/p&gt;

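&lt;p&gt;As a minimal sketch (module names are hypothetical), splitting a Gradle project into modules starts in &lt;code&gt;settings.gradle.kts&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;// settings.gradle.kts — hypothetical module layout
rootProject.name = "my-kmp-app"

// Each domain lives in its own module, built and tested independently
include(":core:designsystem")
include(":login")
include(":profile")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;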
&lt;h2&gt;
  
  
  Modularization in KMP
&lt;/h2&gt;

&lt;p&gt;In KMP, modularization is achieved through shared modules, which are responsible for sharing code across platforms.&lt;/p&gt;

&lt;p&gt;Let's design a module structure that respects separation of concerns and enables efficient code reuse across modules. Our context here is an application that will scale, in the sense of gaining more features and more platforms:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-pt1.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-pt1.png%3Fraw%3Dtrue" width="2112" height="1920"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This structure follows some ideas from Domain-Driven Design (DDD), where each module represents an independent, isolated domain of the application. I won't go into much detail about DDD, but I recommend reading the book &lt;a href="https://www.amazon.com.br/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/ref=sr_1_1?dib=eyJ2IjoiMSJ9.Lo7-Md3VvIV38Rzn-ytmnX1FyJz_hHxG_c3ocyge7LEEkMf9J0QQUC_vNRqM-bly1FEW6JDWiQjxRiR4Ip4uOSi5BDadwwQLRq-qGmgXmoG36NnUp66mVBVEOL-xFpHChmTWdyWDB5EZGboxu2dOIVTrzRS54KI4S6rDRsLLLoSAkU9bCl81j0cePEicQvqB.QPWgwg7lUfTottKjOov5grb2CciIICVV12MWxs8bueA&amp;amp;dib_tag=se&amp;amp;keywords=Domain-Driven-Design-Tackling-Complexity-Software&amp;amp;qid=1739362218&amp;amp;sr=8-1&amp;amp;ufe=app_do%3Aamzn1.fos.4bddec23-2dcf-4403-8597-e1a02442043d" rel="noopener noreferrer"&gt;Domain-Driven Design: Tackling Complexity in the Heart of Software&lt;/a&gt; to learn more about the subject.&lt;/p&gt;

&lt;p&gt;With this structure, we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scale efficiently without duplicating code. When creating a new feature, just create a new module and add the necessary dependencies.&lt;/li&gt;
&lt;li&gt;Control, with fine granularity, what gets exported to the other platforms, especially to the XCFramework.&lt;/li&gt;
&lt;li&gt;Give specific teams domain independence, avoiding code and ownership conflicts. For example, teams can define a &lt;code&gt;CODEOWNER&lt;/code&gt; for a specific module and be responsible for maintaining and evolving it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Paving the way for UI flexibility
&lt;/h2&gt;

&lt;p&gt;One of KMP's superpowers is letting you share a lot of code or very little. This means we can choose which UI to use on each platform. Depending on your UI strategy, you'll need a specific module approach to create that flexibility.&lt;/p&gt;

&lt;p&gt;Think of each feature as split into a "frontend" and a "backend". Following the MVVM architecture pattern, the "frontend" would be our UI (Compose, SwiftUI) and the "backend" our business logic (ViewModel/UiModel + Domain + Data). In other words, parts of the presentation layer can be shared, while each platform remains free to choose its own UI.&lt;/p&gt;

&lt;p&gt;With that in mind, one possible approach is the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-pt2.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-pt2.png%3Fraw%3Dtrue" width="1984" height="1728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we split each feature that has a screen into 3 modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;common&lt;/code&gt;, our "backend", containing the feature's business logic.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;android-ui&lt;/code&gt;, our Android-only "frontend", containing the feature's UI.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;common-ui&lt;/code&gt;, our multiplatform "frontend", containing the feature's UI shared across platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this approach, it's possible to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start migrating SwiftUI screens gradually, without having to migrate the whole feature at once.&lt;/li&gt;
&lt;li&gt;Migrate Jetpack Compose (Android-only) features while sharing the "backend" with other platforms.&lt;/li&gt;
&lt;li&gt;Start screens in Compose Multiplatform (Android, iOS, Desktop, ...) while sharing the "backend" with other platforms.&lt;/li&gt;
&lt;/ul&gt;

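&lt;p&gt;As a sketch, this per-feature three-module split could be declared like this (module names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;// settings.gradle.kts — one feature, three modules
include(":login:common")      // "backend": ViewModel + Domain + Data
include(":login:android-ui")  // Android-only "frontend" (Jetpack Compose)
include(":login:common-ui")   // multiplatform "frontend" (Compose Multiplatform)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;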
&lt;h2&gt;
  
  
  Exporting to the XCFramework
&lt;/h2&gt;

&lt;p&gt;Now that we've explored a modularization model that allows flexibility in the choice of UI, we can move on and export our Kotlin code to the XCFramework.&lt;/p&gt;

&lt;p&gt;To use our Kotlin code on iOS, we need a module that represents our XCFramework. This is a "glue" module, i.e., a module that aggregates the modules to be exported to the XCFramework.&lt;/p&gt;

&lt;p&gt;This module won't be used directly by the Android app or other platforms; it only represents our export to iOS. It is commonly named &lt;code&gt;ios-interop&lt;/code&gt;.&lt;/p&gt;

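&lt;p&gt;A minimal sketch of what this "glue" module's build script could look like, assuming the standard Kotlin Multiplatform Gradle plugin and hypothetical module paths:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;// ios-interop/build.gradle.kts — aggregates the modules exported to iOS
import org.jetbrains.kotlin.gradle.plugin.mpp.apple.XCFramework

plugins {
    kotlin("multiplatform")
}

kotlin {
    val xcf = XCFramework("KotlinShared")

    listOf(iosArm64(), iosSimulatorArm64()).forEach { target -&amp;gt;
        target.binaries.framework {
            baseName = "KotlinShared"
            // Only explicitly exported modules appear in the Obj-C headers
            export(project(":login:common"))
            xcf.add(this)
        }
    }

    sourceSets {
        commonMain.dependencies {
            // export() requires the same dependency to be declared as api()
            api(project(":login:common"))
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running the generated &lt;code&gt;assembleKotlinSharedXCFramework&lt;/code&gt; task would then produce the artifact consumed by Xcode.&lt;/p&gt;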
&lt;p&gt;To illustrate the power of modularization and KMP's flexibility, let's explore a few sharing scenarios:&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 1: shared KMP "backend", flexible "frontend"
&lt;/h3&gt;

&lt;p&gt;In this scenario, a &lt;code&gt;common&lt;/code&gt; module contains the feature's business logic. The &lt;code&gt;android-ui&lt;/code&gt; module contains the feature's Android-only UI and is used by the Android app.&lt;/p&gt;

&lt;p&gt;Characteristics of this model:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Business logic is shared across platforms&lt;/li&gt;
&lt;li&gt;The UI is Android-specific, using Jetpack Compose&lt;/li&gt;
&lt;li&gt;The UI is not shared across platforms&lt;/li&gt;
&lt;li&gt;On iOS, the business logic is reused, but the UI is iOS-specific, built with SwiftUI&lt;/li&gt;
&lt;li&gt;Ideal for projects migrating to Compose gradually, or that intend to keep the UI platform-specific&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-scenario-1.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-scenario-1.png%3Fraw%3Dtrue" width="1792" height="2514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 2: hybrid, migrating to Compose Multiplatform
&lt;/h3&gt;

&lt;p&gt;In this scenario, a &lt;code&gt;common&lt;/code&gt; module contains the feature's business logic. The &lt;code&gt;common-ui&lt;/code&gt; module contains the feature's UI shared across platforms.&lt;/p&gt;

&lt;p&gt;Here, the migration to Compose Multiplatform begins, while the feature's &lt;code&gt;android-ui&lt;/code&gt; module remains Android-specific.&lt;/p&gt;

&lt;p&gt;Characteristics of this model:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Business logic shared across platforms&lt;/li&gt;
&lt;li&gt;Part of the UI shared across platforms&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;android-ui&lt;/code&gt;, Android-specific UI components using Jetpack Compose&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;common-ui&lt;/code&gt;, shared UI components using Compose Multiplatform&lt;/li&gt;
&lt;li&gt;Ideal for starting a gradual UI migration to Compose Multiplatform&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-scenario-2.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-scenario-2.png%3Fraw%3Dtrue" width="1842" height="2514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 3: 100% Compose Multiplatform
&lt;/h3&gt;

&lt;p&gt;In this scenario, a &lt;code&gt;common&lt;/code&gt; module contains the feature's business logic. The &lt;code&gt;common-ui&lt;/code&gt; module contains the feature's UI shared across platforms.&lt;/p&gt;

&lt;p&gt;Here, there is no per-platform distinction - the entire UI is shared using Compose Multiplatform.&lt;/p&gt;

&lt;p&gt;Characteristics of this model:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Business logic shared across platforms&lt;/li&gt;
&lt;li&gt;UI fully shared via Compose Multiplatform&lt;/li&gt;
&lt;li&gt;Ideal for projects with a unified UI across all platforms&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-scenario-3.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkmp-modularization-scenario-3.png%3Fraw%3Dtrue" width="1842" height="2514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring the benefits of modularization in KMP
&lt;/h2&gt;

&lt;p&gt;As you can see, modularization in KMP is an essential practice for scaling projects efficiently and in an organized way.&lt;/p&gt;

&lt;p&gt;But there's a crucial point I want to highlight: modularization gives us granularity over what we export to the XCFramework - more specifically, to the Objective-C headers.&lt;/p&gt;

&lt;p&gt;As we saw in the last post, &lt;a href="https://dev.to/rsicarelli/kmp-102-otimizando-a-exportacao-do-kotlin-para-o-obj-cswift-358p"&gt;KMP-102 - Otimizando a Exportação do Kotlin para o Obj-c/Swift&lt;/a&gt;, being selective about the code we export to the Objective-C headers is directly tied to build-time efficiency (that is, faster XCFramework compilations).&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In &lt;strong&gt;Scenario 1&lt;/strong&gt;, we ensure that only &lt;code&gt;login:common&lt;/code&gt; is exposed in the Objective-C headers, while nothing from &lt;code&gt;android-ui&lt;/code&gt; leaks into them.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Scenario 3&lt;/strong&gt;, we ensure that none of the journey's "backend" is exposed in the headers - only the multiplatform "frontend".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This strategy is fundamental to the health and evolution of the repository, and ensures KMP devs can consume the XCFramework efficiently and without dependency conflicts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we explored modularization in KMP and how it can be done efficiently and in an organized way. We learned how this practice helps projects scale, and got a preview of how it directly impacts team autonomy and independence.&lt;/p&gt;

&lt;p&gt;Basic KMP examples usually have a single &lt;code&gt;shared&lt;/code&gt; module. In real-world scenarios, however - where projects need to scale and adopt flexible UI strategies - the complexity is much greater.&lt;/p&gt;

&lt;p&gt;Modularization is a key piece in the success of KMP projects, and it's crucial that it be implemented in a structured, organized way!&lt;/p&gt;

&lt;p&gt;In the next article, we'll explore strategies for building the XCFramework in existing projects, ensuring autonomy and independence for teams.&lt;/p&gt;

&lt;p&gt;See you next time!&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>kmp</category>
      <category>braziliandevs</category>
      <category>mobile</category>
    </item>
    <item>
      <title>KMP-102 - Optimizing Kotlin for Obj-C/Swift</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Sat, 18 Jan 2025 11:37:03 +0000</pubDate>
      <link>https://forem.com/rsicarelli/kmp-102-otimizando-a-exportacao-do-kotlin-para-o-obj-cswift-358p</link>
      <guid>https://forem.com/rsicarelli/kmp-102-otimizando-a-exportacao-do-kotlin-para-o-obj-cswift-358p</guid>
      <description>&lt;p&gt;In the last post, we learned how to use Kotlin code from Swift.&lt;br&gt;
We covered some techniques to improve the code exported to Swift,&lt;br&gt;
and how annotations like &lt;code&gt;@HiddenFromObjC&lt;/code&gt; and &lt;code&gt;@HidesFromObjC&lt;/code&gt; control the code's visibility in Swift.&lt;/p&gt;

&lt;p&gt;In this post, we'll dig deeper into how this export works and its impact on the generated code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How Kotlin/Native exports code to Swift&lt;/li&gt;
&lt;li&gt;
Recapping code export

&lt;ul&gt;
&lt;li&gt;💡 In short&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;How does Kotlin/Native resolve Kotlin types for Objective-C?&lt;/li&gt;
&lt;li&gt;
Controlling what is exported to the headers

&lt;ul&gt;
&lt;li&gt;🤔 But why should I care about this?&lt;/li&gt;
&lt;li&gt;Recommended export paradigm&lt;/li&gt;
&lt;li&gt;
Ways to hide Kotlin code from Objective-C

&lt;ul&gt;
&lt;li&gt;1. Using the &lt;code&gt;internal&lt;/code&gt; modifier
&lt;/li&gt;
&lt;li&gt;2. Using the &lt;code&gt;@HiddenFromObjC&lt;/code&gt; and &lt;code&gt;@HidesFromObjC&lt;/code&gt; annotations
&lt;/li&gt;
&lt;li&gt;2.1 @HiddenFromObjC&lt;/li&gt;
&lt;li&gt;2.2 @HidesFromObjC&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Impact of &lt;code&gt;internal&lt;/code&gt;, &lt;code&gt;@HiddenFromObjC&lt;/code&gt;, and &lt;code&gt;@HidesFromObjC&lt;/code&gt; on the codebase&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Recapping code export
&lt;/h2&gt;

&lt;p&gt;When compiling a &lt;code&gt;.framework&lt;/code&gt; with Kotlin/Native, the compiler generates a set of files:&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkotlin-native-xcframework-expanded.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkotlin-native-xcframework-expanded.png%3Fraw%3Dtrue" width="316" height="269"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Headers/KotlinShared.h&lt;/code&gt;: the interface generated by KMP that exposes Kotlin functions and classes to Objective-C/Swift.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;KotlinShared.c&lt;/code&gt; (or without an extension): the compiled binary containing the native implementations of the Kotlin code, lowered to &lt;a href="https://mcyoung.xyz/2023/08/01/llvm-ir/" rel="noopener noreferrer"&gt;LLVM IR&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Other components (such as &lt;code&gt;.plist&lt;/code&gt; files and &lt;code&gt;bundles&lt;/code&gt;): additional information the framework needs to work on iOS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💡 In short
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;KotlinShared.h&lt;/code&gt;: what is visible and usable from Obj-C/Swift&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;KotlinShared.c&lt;/code&gt;: the internal compiled output, which is not exposed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How does Kotlin/Native resolve Kotlin types for Objective-C?
&lt;/h2&gt;

&lt;p&gt;When compiling code with Kotlin/Native, the compiler goes through a series of steps to translate Kotlin types and structures into something Objective-C (and, consequently, Swift) can understand. The result of this translation is the &lt;code&gt;KotlinShared.h&lt;/code&gt; file, which maps Kotlin types to their native equivalents.&lt;/p&gt;

&lt;p&gt;For example, a Kotlin &lt;code&gt;String&lt;/code&gt; becomes an &lt;code&gt;NSString&lt;/code&gt;, while collections like &lt;code&gt;List&lt;/code&gt; and &lt;code&gt;Map&lt;/code&gt; are translated to &lt;code&gt;NSArray&lt;/code&gt; and &lt;code&gt;NSDictionary&lt;/code&gt;. The compiler also preserves important information, such as nullability, ensuring that nullable and non-nullable values are represented correctly in Objective-C.&lt;/p&gt;

&lt;p&gt;Below, the Kotlin class &lt;code&gt;Person&lt;/code&gt; is mapped directly to an Objective-C class, with properties like &lt;code&gt;name&lt;/code&gt; translated to &lt;code&gt;NSString&lt;/code&gt; and &lt;code&gt;parents&lt;/code&gt; to &lt;code&gt;NSArray&amp;lt;Person *&amp;gt;&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;age&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;parents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Person&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#import &amp;lt;Foundation/Foundation.h&amp;gt;

NS_SWIFT_NAME(Person)
@interface Person : NSObject

@property (readonly) NSString * _Nonnull name;
@property (readonly) NSInteger age;
@property (readonly) NSArray&amp;lt;Person *&amp;gt; * _Nonnull parents;

- (instancetype _Nonnull)initWithName:(NSString * _Nonnull)name 
                                  age:(NSInteger)age 
                              parents:(NSArray&amp;lt;Person *&amp;gt; * _Nonnull)parents;

@end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Controlling what is exported to the headers
&lt;/h2&gt;

&lt;p&gt;This concept is crucial, especially if you want to scale KMP in your project.&lt;/p&gt;

&lt;p&gt;By default, everything that is &lt;strong&gt;public in Kotlin is exported to Objective-C&lt;/strong&gt;, which is not ideal in large projects. As the code grows, the &lt;code&gt;KotlinShared.h&lt;/code&gt; file can become huge, hurting build performance and making maintenance harder.&lt;/p&gt;

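&lt;p&gt;In practice, this control also happens at the framework level. A sketch, assuming a hypothetical &lt;code&gt;:login:common&lt;/code&gt; module and the standard Kotlin Multiplatform Gradle DSL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;// build.gradle.kts — limiting what reaches KotlinShared.h
kotlin {
    iosArm64().binaries.framework {
        baseName = "KotlinShared"
        // Only the API of explicitly exported modules lands in the header;
        // everything else is compiled in, but stays out of KotlinShared.h
        export(project(":login:common"))
        transitiveExport = false // don't export dependencies of dependencies
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;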
&lt;h3&gt;
  
  
  🤔 But why should I care about this?
&lt;/h3&gt;

&lt;p&gt;As your project grows, more and more Kotlin code gets processed and exported to the headers. &lt;/p&gt;

&lt;p&gt;This can (and will) result in &lt;strong&gt;a giant &lt;code&gt;KotlinShared.h&lt;/code&gt;&lt;/strong&gt;, with hundreds of lines of code.&lt;/p&gt;

&lt;p&gt;With a large &lt;code&gt;KotlinShared.h&lt;/code&gt;, compiling your XCFramework gets slower, since the compiler has to process every Kotlin declaration to generate the headers.&lt;/p&gt;

&lt;p&gt;A large &lt;code&gt;KotlinShared.h&lt;/code&gt; can also lead to &lt;strong&gt;more compilation errors&lt;/strong&gt; in Xcode, since the Swift compiler has to process every Kotlin declaration to produce the final binary.&lt;/p&gt;

&lt;p&gt;Finally, the developer experience deteriorates: every time you need to inspect &lt;code&gt;KotlinShared.h&lt;/code&gt; in Xcode, you'll be dealing with a huge file that is hard to navigate and slow to open.&lt;/p&gt;

&lt;h3&gt;
  
  
  💡 In short
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If your team wants to scale KMP, it's important to control what gets exported to Objective-C.&lt;/li&gt;
&lt;li&gt;This keeps &lt;code&gt;KotlinShared.h&lt;/code&gt; lean and easy to navigate, speeding up XCFramework builds and improving the developer experience (we'll dig deeper into this in a future post).&lt;/li&gt;
&lt;li&gt;It's strongly recommended that your team builds a culture of controlling what is exported to Objective-C from day one, to avoid scalability problems later.&lt;/li&gt;
&lt;li&gt;Hiding Kotlin code from Objective-C is &lt;strong&gt;considered good practice&lt;/strong&gt;. Agreeing on this up front costs little and saves a lot 😅.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Recommended export paradigm
&lt;/h3&gt;

&lt;p&gt;We have a lot to learn from open source libraries here. When consuming an open source library, you usually only get access to a well-defined interface, with few implementation details.&lt;/p&gt;

&lt;p&gt;This helps us (the consumers) understand what the library does without having to understand how it does it. That's what we call &lt;strong&gt;encapsulation&lt;/strong&gt;. On top of that, the IDE experience improves, since auto-complete and file navigation become faster and more precise.&lt;/p&gt;

&lt;p&gt;With that in mind, the recommendation is to &lt;strong&gt;hide as much Kotlin code from Objective-C as possible&lt;/strong&gt;. In other words, export only what Swift needs to consume, and hide the rest.&lt;/p&gt;

&lt;p&gt;The mindset is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ Hidden by default. &lt;/p&gt;

&lt;p&gt;⚠️ Expose only what's necessary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Ways to hide Kotlin code from Objective-C
&lt;/h3&gt;

&lt;p&gt;There are three ways to hide Kotlin code from Objective-C:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Using the &lt;code&gt;internal&lt;/code&gt; modifier
&lt;/h4&gt;

&lt;p&gt;This approach is the most recommended, since it also has a positive impact on the Kotlin code consumed by other source sets (Android, Desktop, Common, etc.).&lt;/p&gt;

&lt;p&gt;By default, the &lt;code&gt;internal&lt;/code&gt; modifier makes a declaration visible only within the module where it is declared. This means Kotlin code marked as &lt;code&gt;internal&lt;/code&gt; will not be exported to Objective-C.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;internal&lt;/span&gt; &lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;age&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;parents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Person&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Using the &lt;code&gt;@HiddenFromObjC&lt;/code&gt; and &lt;code&gt;@HidesFromObjC&lt;/code&gt; annotations
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;@HiddenFromObjC&lt;/code&gt; and &lt;code&gt;@HidesFromObjC&lt;/code&gt; annotations are specific to Kotlin/Native and control the visibility of methods, properties, or classes in the Objective-C/Swift interop. They influence how Kotlin elements are exposed in the framework Kotlin/Native generates for use in iOS projects.&lt;/p&gt;

&lt;h5&gt;
  
  
  2.1 @HiddenFromObjC
&lt;/h5&gt;

&lt;p&gt;This annotation is used to &lt;strong&gt;completely hide a Kotlin element from the API exposed to Objective-C/Swift&lt;/strong&gt;. Any method, property, or class annotated with &lt;code&gt;@HiddenFromObjC&lt;/code&gt; will not be generated in the resulting framework and therefore won't be visible in Swift/Objective-C projects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nd"&gt;@HiddenFromObjC&lt;/span&gt;
&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;internalUtilityFunction&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Esta função não será exposta para Objective-C/Swift&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nd"&gt;@HiddenFromObjC&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;InternalHelper&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Esta classe inteira será invisível no framework gerado&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  2.2 @HidesFromObjC
&lt;/h5&gt;

&lt;p&gt;This is a &lt;strong&gt;meta-annotation&lt;/strong&gt;, i.e., it is used to mark other annotations that will then be applied to Kotlin code elements.&lt;/p&gt;

&lt;p&gt;When an annotation is marked with &lt;code&gt;@HidesFromObjC&lt;/code&gt;, any element annotated with it is automatically removed from the generated public Objective-C API.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;@HidesFromObjC&lt;/code&gt; allows greater flexibility, since you can create your own annotations with this behavior.&lt;/p&gt;

&lt;p&gt;Use cases include creating custom annotations that hide parts of the code from the Objective-C API while keeping the element available in Kotlin.&lt;/p&gt;

&lt;p&gt;Below, the custom annotation &lt;code&gt;@InternalUseOnly&lt;/code&gt; uses &lt;code&gt;@HidesFromObjC&lt;/code&gt;, which automatically removes any function or class annotated with it from the Objective-C API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nd"&gt;@HidesFromObjC&lt;/span&gt;
&lt;span class="nd"&gt;@Target&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AnnotationTarget&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;CLASS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;AnnotationTarget&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;FUNCTION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;annotation&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;InternalUseOnly&lt;/span&gt;

&lt;span class="nd"&gt;@InternalUseOnly&lt;/span&gt;
&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;internalFunction&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Esta função não será exposta ao Objective-C"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Impact of &lt;code&gt;internal&lt;/code&gt;, &lt;code&gt;@HiddenFromObjC&lt;/code&gt;, and &lt;code&gt;@HidesFromObjC&lt;/code&gt; on the codebase
&lt;/h2&gt;

&lt;p&gt;By controlling what is exported:&lt;br&gt;
• You reduce the public API surface, avoiding confusion and mistakes.&lt;br&gt;
• The generated framework gets smaller, improving build performance.&lt;br&gt;
• Safety improves, since internal classes and methods are not accessible from iOS.&lt;br&gt;
• Maintenance becomes simpler, with a cleaner, more focused API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Controlling what is exported to Objective-C is an essential practice for keeping your KMP project healthy and scalable.&lt;/p&gt;

&lt;p&gt;By hiding Kotlin code from Objective-C, you ensure that only what's necessary is exposed to Swift, keeping the API lean and easy to navigate.&lt;/p&gt;

&lt;p&gt;You also avoid performance, safety, and maintenance problems, keeping your KMP project scalable and easy to maintain.&lt;/p&gt;

&lt;p&gt;👍 It's extremely important that you and your team adopt this practice from the very start of the project, to avoid scalability problems down the road.&lt;/p&gt;

&lt;p&gt;With this concept firmly in place, we can move on to the next post, where we'll explore a strategy that unlocks KMP at scale in your project (spoiler: using &lt;code&gt;.klib&lt;/code&gt;s).&lt;/p&gt;

&lt;p&gt;See you next time ✌️&lt;/p&gt;

</description>
      <category>mobile</category>
      <category>kmp</category>
      <category>kotlin</category>
      <category>braziliandevs</category>
    </item>
    <item>
      <title>KMP-102 - Using Kotlin Code in Swift</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Fri, 11 Oct 2024 11:20:34 +0000</pubDate>
      <link>https://forem.com/rsicarelli/kmp-102-utilizando-codigo-kotlin-no-swift-2ice</link>
      <guid>https://forem.com/rsicarelli/kmp-102-utilizando-codigo-kotlin-no-swift-2ice</guid>
      <description>&lt;p&gt;In the last post, we learned how to create an &lt;code&gt;XCFramework&lt;/code&gt; from Kotlin code and explored some characteristics of the generated build types.&lt;/p&gt;

&lt;p&gt;With that, we can move on and learn how Kotlin code compiled to Objective-C works and how to consume it on iOS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Exporting a 'Hello world' in Kotlin to iOS

&lt;ul&gt;
&lt;li&gt;What's happening here?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Understanding the code generated by Kotlin/Native&lt;/li&gt;

&lt;li&gt;

Improving interoperability with Swift

&lt;ul&gt;
&lt;li&gt;What about the &lt;code&gt;@HiddenFromObjC&lt;/code&gt; annotation?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Other ways to improve interoperability

&lt;ul&gt;
&lt;li&gt;Using SKIE to improve interoperability&lt;/li&gt;
&lt;li&gt;
Considerations about SKIE

&lt;ul&gt;
&lt;li&gt;Reducing SKIE build time with annotations&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Final thoughts&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Exporting a Kotlin 'Hello world' to iOS
&lt;/h2&gt;

&lt;p&gt;To start, let's understand a few important points about how Kotlin code is converted to Objective-C and, consequently, how to use it on iOS.&lt;/p&gt;

&lt;p&gt;Let's create a simple &lt;code&gt;HelloWorld&lt;/code&gt; in Kotlin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;//HelloWorld.kt commonMain&lt;/span&gt;
&lt;span class="n"&gt;expect&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;

&lt;span class="c1"&gt;//HelloWorld.apple.kt appleMain&lt;/span&gt;
&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Olá mundo Apple Main"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to compile an &lt;code&gt;XCFramework&lt;/code&gt; and integrate it into Xcode. There are plenty of tutorials online on how to do this; for this demonstration, I followed the guide "&lt;a href="https://jyotibhambhu.medium.com/part-3-how-to-integrate-kotlin-multiplatform-kmp-into-your-ios-project-7dc4016f7fb5" rel="noopener noreferrer"&gt;How to Integrate Kotlin Multiplatform (KMP) into Your iOS Project&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;The basic steps are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compile the &lt;code&gt;XCFramework&lt;/code&gt; with &lt;code&gt;./gradlew assembleKotlinSharedXCFramework&lt;/code&gt;. &lt;strong&gt;NOTE:&lt;/strong&gt; replace "KotlinShared" with the name of your &lt;code&gt;XCFramework&lt;/code&gt;. We covered this in the previous articles.&lt;/li&gt;
&lt;li&gt;Configure the Xcode project to consume the generated &lt;code&gt;XCFramework&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use the Kotlin code on iOS.&lt;/li&gt;
&lt;/ol&gt;
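Step 1 above assumes that the Gradle module already declares an `XCFramework` binary named `KotlinShared`. As a minimal sketch (the names and target list follow this series' examples; adjust them to your project):

```kotlin
// build.gradle.kts — sketch of the declaration behind assembleKotlinSharedXCFramework
import org.jetbrains.kotlin.gradle.plugin.mpp.apple.XCFramework

kotlin {
    // The name here produces the assembleKotlinSharedXCFramework task
    val xcFramework = XCFramework("KotlinShared")

    listOf(iosArm64(), iosSimulatorArm64(), iosX64()).forEach { iosTarget ->
        iosTarget.binaries.framework {
            baseName = "KotlinShared"
            xcFramework.add(this)
        }
    }
}
```

The assembled bundle then lands under the module's `build/XCFrameworks` directory, which is what step 2 points the Xcode project at.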

&lt;p&gt;Once all the setup is done, we can move on and create a very simple SwiftUI screen to consume the Kotlin code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;SwiftUI&lt;/span&gt;
&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;KotlinShared&lt;/span&gt;

&lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="kt"&gt;ContentView&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;@State&lt;/span&gt; &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;showText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kd"&gt;some&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Show Text"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;showText&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;showText&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kt"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;HelloWorld_appleKt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkotlin-shared-hello-world-ios.gif%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkotlin-shared-hello-world-ios.gif%3Fraw%3Dtrue" width="640" height="1389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is happening here?
&lt;/h3&gt;

&lt;p&gt;Let's look at what is happening behind the scenes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Kotlin code is compiled to Objective-C and packaged into an &lt;code&gt;XCFramework&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;XCFramework&lt;/code&gt; is integrated into the Xcode project.&lt;/li&gt;
&lt;li&gt;With the &lt;code&gt;XCFramework&lt;/code&gt; integrated, we can import the Kotlin code on iOS using &lt;code&gt;import KotlinShared&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Inside &lt;code&gt;KotlinShared&lt;/code&gt; (the name of the &lt;code&gt;XCFramework&lt;/code&gt;), we have access to the Kotlin code compiled to Objective-C.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;HelloWorld_appleKt&lt;/code&gt; class is generated automatically by Kotlin/Native, exposing the &lt;code&gt;helloWorld()&lt;/code&gt; method.&lt;/li&gt;
&lt;li&gt;And with that, we can use Kotlin code on iOS!
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;KotlinShared&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;helloWorld&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;HelloWorld_appleKt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice, however, that the syntax for accessing the Kotlin code on iOS is rather odd: &lt;code&gt;HelloWorld_appleKt.helloWorld()&lt;/code&gt; is anything but idiomatic Swift.&lt;/p&gt;

&lt;p&gt;Let's dig into why.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the code generated by Kotlin/Native
&lt;/h2&gt;

&lt;p&gt;The biggest limitation in Kotlin/Native today is interoperability with Objective-C: Kotlin/Native cannot generate code that is 100% compatible with Swift.&lt;/p&gt;

&lt;p&gt;That is because Kotlin/Native is a compiler that emits Objective-C, not Swift. The generated code is compatible with Objective-C, not with Swift.&lt;/p&gt;

&lt;p&gt;In other words, many Kotlin features would translate naturally into Swift (such as &lt;strong&gt;higher-order functions&lt;/strong&gt;, &lt;strong&gt;enums&lt;/strong&gt;, etc.), but they have no direct Kotlin --&amp;gt; Objective-C translation, so they lose expressiveness along the way.&lt;/p&gt;

&lt;p&gt;To investigate how the Kotlin code is translated to Objective-C, we can inspect the code generated by Kotlin/Native: just &lt;code&gt;cmd + click&lt;/code&gt; on our &lt;code&gt;HelloWorld_appleKt&lt;/code&gt; class:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld53epb613oxyhmis630.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld53epb613oxyhmis630.png" alt="Hello world em Obj-c" width="699" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To improve the experience of using Kotlin code on iOS, we can write our Kotlin differently, in a way that is more idiomatic from the Swift side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improving interoperability with Swift
&lt;/h2&gt;

&lt;p&gt;We have seen that we can't simply write Kotlin code and expect it to be idiomatic Swift, because Kotlin/Native only generates Objective-C.&lt;/p&gt;

&lt;p&gt;Instead, we have to write our Kotlin in a way that is friendlier to Swift. Let's refactor the &lt;code&gt;HelloWorld&lt;/code&gt; code to be more Swift-idiomatic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// HelloWorld.apple.kt appleMain&lt;/span&gt;
&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;br.com.rsicarelli.example&lt;/span&gt;

&lt;span class="nd"&gt;@HiddenFromObjC&lt;/span&gt;
&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Olá mundo Apple Main"&lt;/span&gt;

&lt;span class="kd"&gt;object&lt;/span&gt; &lt;span class="nc"&gt;HelloWorld&lt;/span&gt;

&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nc"&gt;HelloWorld&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we repeat the same steps to use it in Xcode:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compile the XCFramework with &lt;code&gt;./gradlew assembleKotlinSharedXCFramework&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;In Xcode, &lt;code&gt;Products&lt;/code&gt; &amp;gt; &lt;code&gt;Build for ...&lt;/code&gt; &amp;gt; &lt;code&gt;Running&lt;/code&gt;, or simply &lt;code&gt;cmd + shift + r&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Right after the build, we notice that our previous &lt;code&gt;HelloWorld_appleKt&lt;/code&gt; class is no longer available.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupux2whv1n5vvwovcjtf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupux2whv1n5vvwovcjtf.png" alt="Broken Hello world in Xcode" width="592" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we get into why, let's integrate our KMP code using the new approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;KotlinShared&lt;/span&gt;

&lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="kt"&gt;ContentView&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;@State&lt;/span&gt; &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;showText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kd"&gt;some&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Show Text"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;showText&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;showText&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kt"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;HelloWorld&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shared&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success! This code is more idiomatic Swift, and we can now consume the Kotlin code on iOS in a much friendlier way.&lt;/p&gt;

&lt;p&gt;If we open the Objective-C code generated by Kotlin/Native, we notice a few differences:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furdr7zy4z3lkb4d38p7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furdr7zy4z3lkb4d38p7c.png" alt="Swift-idiomatic Hello world" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, our &lt;code&gt;HelloWorld&lt;/code&gt; class is now generated as a singleton, and the &lt;code&gt;get&lt;/code&gt; method is generated as an extension!&lt;/p&gt;
&lt;h3&gt;
  
  
  What about the &lt;code&gt;@HiddenFromObjC&lt;/code&gt; annotation?
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;@HiddenFromObjC&lt;/code&gt; is a Kotlin/Native annotation indicating that a declaration should not be exposed to Objective-C. This is useful for declarations that should not be accessed directly from Objective-C, such as extension functions.&lt;/p&gt;

&lt;p&gt;The reasoning for using it here is as follows: we have two ways of reaching the &lt;code&gt;helloWorld()&lt;/code&gt; method:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Through the top-level function&lt;/li&gt;
&lt;li&gt;Through the extension on the &lt;code&gt;HelloWorld&lt;/code&gt; object
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Exposing both to Objective-C makes no sense, since the top-level function merely delegates to the extension on the &lt;code&gt;HelloWorld&lt;/code&gt; object. Having both would be confusing for whoever consumes the Kotlin code on iOS.&lt;/p&gt;

&lt;p&gt;So we use the &lt;code&gt;@HiddenFromObjC&lt;/code&gt; annotation to hide the top-level function from Objective-C and expose only the extension on the &lt;code&gt;HelloWorld&lt;/code&gt; object!&lt;/p&gt;

&lt;p&gt;Important notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;@HiddenFromObjC&lt;/code&gt; is a Kotlin/Native annotation, meaning it cannot be used in any other KMP source set.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;@HiddenFromObjC&lt;/code&gt; can be applied to functions, classes, properties, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Full documentation on the interoperability between Kotlin and Objective-C can be found at &lt;a href="https://kotlinlang.org/docs/native-objc-interop.html" rel="noopener noreferrer"&gt;Interoperability with Swift/Objective-C&lt;/a&gt;.&lt;/p&gt;
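One detail the earlier snippet glosses over: `@HiddenFromObjC` sits behind an experimental opt-in in the Kotlin standard library, so the declaration needs the refinement opt-in to compile. A minimal sketch (opt-in and annotation names from the Kotlin stdlib):

```kotlin
// HelloWorld.apple.kt appleMain — sketch with the required opt-in
import kotlin.experimental.ExperimentalObjCRefinement
import kotlin.native.HiddenFromObjC

@OptIn(ExperimentalObjCRefinement::class)
@HiddenFromObjC
actual fun helloWorld(): String = "Olá mundo Apple Main"
```

Alternatively, the opt-in can be granted module-wide through the compiler's `-opt-in` option instead of annotating each declaration.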
&lt;h2&gt;
  
  
  Other ways to improve interoperability
&lt;/h2&gt;

&lt;p&gt;This approach already works very well; however, it can get quite tedious to create an extension for every function we want to expose to iOS.&lt;/p&gt;

&lt;p&gt;In the end, what we want is Kotlin code that is idiomatic to Swift while still writing Kotlin to its full potential.&lt;/p&gt;

&lt;p&gt;For that, we have three options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the &lt;a href="https://github.com/touchlab/SKIE" rel="noopener noreferrer"&gt;SKIE (Swift Kotlin Interface Enhancer)&lt;/a&gt; plugin
&lt;/li&gt;
&lt;li&gt;Upgrade to Kotlin 2.1 and use the new Kotlin --&amp;gt; Swift export.&lt;/li&gt;
&lt;li&gt;Manually write extensions, in Swift, for each entry point we want to expose to iOS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first option is the most robust and the most recommended, since SKIE ships a series of features that ease the interoperability between Kotlin and Swift.&lt;/p&gt;

&lt;p&gt;The second option, exporting Swift code with Kotlin 2.1, is still experimental and not recommended for production.&lt;/p&gt;

&lt;p&gt;The third option is very manual and can be quite tedious, but it is valid for anyone who does not want to adopt SKIE. As KMP devs, we want to write as little code as possible, so this approach is costly to scale.&lt;/p&gt;

&lt;p&gt;For this article, we will use SKIE to improve the interoperability between Kotlin and Swift!&lt;/p&gt;
&lt;h3&gt;
  
  
  Using SKIE to improve interoperability
&lt;/h3&gt;

&lt;p&gt;Integrating SKIE into a KMP module is quite straightforward, and the project provides detailed installation docs: &lt;a href="https://skie.touchlab.co/Installation" rel="noopener noreferrer"&gt;SKIE &amp;gt; Installation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Apply the &lt;code&gt;co.touchlab.skie&lt;/code&gt; plugin in the KMP project's &lt;code&gt;build.gradle.kts&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The plugin must be applied only to the module that generates the XCFramework.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is basically it: apply the plugin and sync.&lt;/p&gt;
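As a sketch, those two steps amount to a single plugins block in the module that assembles the XCFramework (the version number here is illustrative; check the SKIE installation docs for the current release):

```kotlin
// build.gradle.kts of the XCFramework module — version is an assumption
plugins {
    kotlin("multiplatform")
    id("co.touchlab.skie") version "0.9.5"
}
```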

&lt;p&gt;Now let's go back to our earlier approach and export only the &lt;code&gt;helloWorld()&lt;/code&gt; function (without the &lt;code&gt;@HiddenFromObjC&lt;/code&gt; annotation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// HelloWorld.apple.kt appleMain&lt;/span&gt;

&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Olá mundo Apple Main"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We follow the usual steps to use it in Xcode:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compile the XCFramework with &lt;code&gt;./gradlew assembleKotlinSharedXCFramework&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;On my machine I needed a clean build in Xcode, so &lt;code&gt;Products&lt;/code&gt; &amp;gt; &lt;code&gt;Clean Build Folder&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;In Xcode, &lt;code&gt;Products&lt;/code&gt; &amp;gt; &lt;code&gt;Build for ...&lt;/code&gt; &amp;gt; &lt;code&gt;Running&lt;/code&gt;, or simply &lt;code&gt;cmd + shift + r&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now we can use the Kotlin code on iOS in a way that is far more idiomatic to Swift:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;SwiftUI&lt;/span&gt;
&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;KotlinShared&lt;/span&gt;

&lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="kt"&gt;ContentView&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;@State&lt;/span&gt; &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;showText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kd"&gt;some&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Show Text"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;showText&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;showText&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kt"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;helloWorld&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at the &lt;code&gt;helloWorld()&lt;/code&gt; function, we can see that SKIE generates a global function that is directly accessible from Swift. This global function calls the Kotlin &lt;code&gt;helloWorld()&lt;/code&gt; (in its "ugly" form) and exposes it to Swift.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkotlin-shared-hello-world-skie.gif%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fkotlin-shared-hello-world-skie.gif%3Fraw%3Dtrue" width="640" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Much better, right? We can now use Kotlin code on iOS in idiomatic Swift!&lt;/p&gt;

&lt;h3&gt;
  
  
  Considerations about SKIE
&lt;/h3&gt;

&lt;p&gt;SKIE is extremely powerful and greatly eases the interoperability between Kotlin and Swift.&lt;/p&gt;

&lt;p&gt;That said, keep in mind that SKIE is an experimental plugin, subject to changes and deprecations.&lt;/p&gt;

&lt;p&gt;Also, since it adds an extra conversion layer, assembling the XCFramework gets slower, and build times can increase considerably.&lt;/p&gt;

&lt;p&gt;That is because SKIE walks through all your Kotlin code and generates a Swift counterpart for it, which can be an expensive process. SKIE does this not only for your own Kotlin code, but also for every dependency you export as "api" into &lt;code&gt;KotlinShared&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reducing SKIE build time with annotations
&lt;/h4&gt;

&lt;p&gt;A very nice SKIE feature is the ability to choose exactly which of its capabilities you want to use.&lt;/p&gt;

&lt;p&gt;For that, SKIE provides a series of &lt;a href="https://github.com/touchlab/SKIE/tree/main/SKIE/common/configuration/annotations/impl/src/commonMain/kotlin/co/touchlab/skie/configuration/annotations" rel="noopener noreferrer"&gt;annotations&lt;/a&gt; that let you customize how Kotlin code is exported to Swift. This allows us to hand-pick which code we want to export to Swift, reducing SKIE's build time.&lt;/p&gt;
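As a sketch of what that hand-picking can look like, the annotations switch individual SKIE features on or off per declaration (annotation names taken from the configuration-annotations package linked above; treat the exact set as subject to change between SKIE releases):

```kotlin
import co.touchlab.skie.configuration.annotations.EnumInterop
import co.touchlab.skie.configuration.annotations.SuspendInterop

// Skip SKIE's exhaustive-enum bridging for this type…
@EnumInterop.Disabled
enum class Theme { LIGHT, DARK }

// …and skip the suspend-function bridging for this call,
// shrinking the amount of Swift SKIE has to generate at build time.
@SuspendInterop.Disabled
suspend fun refresh() { }
```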

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;In this article, we learned how to use Kotlin code from Swift, its characteristics and limitations, and how to improve the interoperability between Kotlin and Swift, either by writing Kotlin in an alternative style or by using SKIE.&lt;/p&gt;

&lt;p&gt;KMP excels at exporting Objective-C code, but we are currently limited when it comes to exporting Swift. With SKIE, we can soften that limitation and expose Kotlin code in a way that is more idiomatic to Swift. And in upcoming Kotlin versions, the interoperability between Kotlin and Swift will become even more robust and native.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed the article! 🚀&lt;/p&gt;

&lt;p&gt;See you next time 🤙&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>kmp</category>
      <category>braziliandevs</category>
      <category>mobile</category>
    </item>
    <item>
      <title>KMP-102 - XCFramework Characteristics in KMP</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Sun, 21 Jul 2024 11:17:30 +0000</pubDate>
      <link>https://forem.com/rsicarelli/kmp-102-caracteristicas-do-xcframework-no-kmp-3162</link>
      <guid>https://forem.com/rsicarelli/kmp-102-caracteristicas-do-xcframework-no-kmp-3162</guid>
      <description>&lt;p&gt;In the previous post, we learned how Kotlin/Native exports a collection of &lt;code&gt;.frameworks&lt;/code&gt; in the XCFramework format.&lt;/p&gt;

&lt;p&gt;Now, let's understand the characteristics of that XCFramework.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use an XCFramework on iOS
&lt;/h2&gt;

&lt;p&gt;The XCFramework bundle provides one &lt;code&gt;.framework&lt;/code&gt; per Kotlin/Native target. Inside it, you'll find targets such as the physical device (&lt;code&gt;iosArm64&lt;/code&gt;), the simulator (&lt;code&gt;iosSimulatorArm64&lt;/code&gt;), and simulators for Intel processors (&lt;code&gt;iosX64&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;How you consume a &lt;code&gt;.framework&lt;/code&gt; varies with the environment and the existing codebase, but in general, adding a &lt;em&gt;build phase&lt;/em&gt; to the Xcode project is enough to import the classes exported by Kotlin/Native.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔗 &lt;a href="https://kotlinlang.org/docs/native-spm.html" rel="noopener noreferrer"&gt;Utilizando o Swift Package Manager&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🔗 &lt;a href="https://kotlinlang.org/docs/native-cocoapods.html" rel="noopener noreferrer"&gt;CocoaPods overview and setup&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🔗 &lt;a href="https://kotlinlang.org/docs/apple-framework.html" rel="noopener noreferrer"&gt;Kotlin/Native as an Apple framework – tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are several approaches we can use to import it into the project.&lt;/p&gt;

&lt;p&gt;Each of these models has important characteristics worth exploring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding how the XCFramework is generated
&lt;/h2&gt;

&lt;p&gt;In KMP, the &lt;code&gt;.framework&lt;/code&gt; is of the "Fat" kind. That means it includes not only your code but also every dependency it needs. This differs from other kinds, which may include less content:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Skinny&lt;/strong&gt;: Contains only your code, with no external dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thin&lt;/strong&gt;: Includes your code and its direct dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hollow&lt;/strong&gt;: The opposite of Thin, containing only the dependencies, without your code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fat&lt;/strong&gt;: Includes everything: your code, direct dependencies, and everything needed to run on its own.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This "Fat" approach has important implications for modularization and dependency management, as we'll discuss next.&lt;/p&gt;

&lt;p&gt;The "Fat" nature of KMP frameworks creates a technical challenge for modularizing our distributions. Because all dependencies are packaged together, we are forced to consolidate all KMP code into a single export. This model can lead to duplicated dependencies and a larger final bundle, complicating project management, especially in collaborative development environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background on Kotlin applications
&lt;/h2&gt;

&lt;p&gt;Kotlin projects are multi-module by nature, for cache reuse and build performance. Modularizing a project positively influences the development experience in Kotlin projects that use Gradle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/rsicarelli/android-plataforma-parte-1-modularizacao-2016"&gt;In this article&lt;/a&gt; I explore modularization in Android projects in more depth; the same ideas also apply to KMP projects.&lt;/p&gt;

&lt;p&gt;Kotlin projects usually have multiple modules, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- legado
- core/design-system
- core/logging
- core/analytics
- feature1
- feature2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These modules can be consumed individually in Kotlin projects, but that doesn't mean we can have a corresponding &lt;code&gt;.framework&lt;/code&gt; for each one.&lt;/p&gt;

&lt;p&gt;Well, we can, but there is a catch to be aware of.&lt;/p&gt;

&lt;p&gt;Suppose &lt;code&gt;feature1&lt;/code&gt; and &lt;code&gt;feature2&lt;/code&gt; use the following KMP dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// feature1&lt;/span&gt;
&lt;span class="n"&gt;kotlinx-serialization&lt;/span&gt;
&lt;span class="n"&gt;kotlinx-coroutines&lt;/span&gt;

&lt;span class="c1"&gt;// feature2&lt;/span&gt;
&lt;span class="n"&gt;kotlinx-serialization&lt;/span&gt;
&lt;span class="n"&gt;kotlinx-coroutines&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When exporting the XCFramework, the &lt;code&gt;kotlinx-serialization&lt;/code&gt; and &lt;code&gt;kotlinx-coroutines&lt;/code&gt; dependencies &lt;strong&gt;would be duplicated in each &lt;code&gt;.framework&lt;/code&gt;&lt;/strong&gt;, causing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A larger final bundle (&lt;code&gt;.ipa&lt;/code&gt;);&lt;/li&gt;
&lt;li&gt;Longer build times, at module scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This happens because of a constraint imposed by the &lt;code&gt;.framework&lt;/code&gt; on iOS: one &lt;code&gt;.framework&lt;/code&gt; cannot communicate with another. &lt;/p&gt;

&lt;p&gt;In an ideal scenario, &lt;code&gt;kotlinx-serialization&lt;/code&gt; would be an isolated &lt;code&gt;.framework&lt;/code&gt;, and our &lt;code&gt;.framework&lt;/code&gt; would talk to it.&lt;/p&gt;

&lt;p&gt;So the "fat" model ends up being the convention adopted in KMP projects, as a way to optimize usage and reduce the final app size.&lt;/p&gt;

&lt;p&gt;With that, let's move on and better understand the challenges this model imposes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using a "fat" KMP on iOS
&lt;/h2&gt;

&lt;p&gt;Consider a scenario where we have an existing iOS project and want to integrate KMP code. To illustrate, suppose we changed a module, say, by adding a new parameter to a function. Although it looks simple, this change can break the iOS code, since the iOS project expects the previous version of the function. Here is a step-by-step example:&lt;/p&gt;

&lt;p&gt;First, let's assume the following &lt;code&gt;build.gradle.kts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;xcFramework&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;XCFramework&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;xcFrameworkName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"KotlinShared"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;exportedDependencies&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;feature1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;feature2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nf"&gt;iosX64&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nf"&gt;iosArm64&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nf"&gt;iosSimulatorArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;iosTarget&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="n"&gt;iosTarget&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;binaries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;framework&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;baseName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"KotlinShared"&lt;/span&gt;

            &lt;span class="n"&gt;exportedDependencies&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;dependency&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
                &lt;span class="nf"&gt;export&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dependency&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="n"&gt;xcFramework&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we run the &lt;code&gt;assembleKotlinSharedXCFramework&lt;/code&gt; task, we get a single large package with all the exported modules.&lt;/p&gt;

&lt;p&gt;For KMP projects, it is essential to have a central module, often called &lt;code&gt;ios-interop&lt;/code&gt;. This module acts as an integration point that groups and exports all the dependencies that need to be consumed from Xcode. This approach centralizes dependency management and makes the project easier to maintain and update.&lt;/p&gt;
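&lt;p&gt;A minimal sketch of what such an umbrella module's build script could look like. The module names (&lt;code&gt;feature1&lt;/code&gt;, &lt;code&gt;feature2&lt;/code&gt;, &lt;code&gt;core&lt;/code&gt;) are the hypothetical ones used throughout this article; declaring them with &lt;code&gt;api&lt;/code&gt; keeps their public types visible to consumers of the umbrella module, so a single framework can re-export all of them:&lt;/p&gt;

```kotlin
// ios-interop/build.gradle.kts (hypothetical umbrella module)
kotlin {
    sourceSets {
        val commonMain by getting {
            dependencies {
                // api (not implementation) so these modules' types
                // remain visible to the exported framework
                api(project(":feature1"))
                api(project(":feature2"))
                api(project(":core"))
            }
        }
    }
}
```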

&lt;h2&gt;
  
  
  Challenges of modularizing KMP
&lt;/h2&gt;

&lt;p&gt;As discussed earlier, the "fat" nature of XCFrameworks in KMP means that every exported module includes all of its dependencies. This results in common dependencies being duplicated across modules and an overall increase in the final package size. Beyond that, this approach creates significant modularization challenges, which are especially visible in projects that use SwiftUI as the iOS user interface. Let's look at these challenges in more detail.&lt;/p&gt;

&lt;p&gt;Let's assume that &lt;code&gt;feature1&lt;/code&gt; and &lt;code&gt;feature2&lt;/code&gt; expose the following Kotlin classes to be consumed on iOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Feature1ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Feature1Repository&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Feature2ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Feature2Repository&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When exporting the XCFramework, all the classes from &lt;code&gt;feature1&lt;/code&gt; and &lt;code&gt;feature2&lt;/code&gt; will be present in the &lt;code&gt;.framework&lt;/code&gt;, meaning we can use both &lt;code&gt;Feature1ViewModel&lt;/code&gt; and &lt;code&gt;Feature2ViewModel&lt;/code&gt; on iOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;KotlinShared&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="kt"&gt;Feature1ViewModelWrapper&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;viewModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature1ViewModel&lt;/span&gt;

    &lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Feature1Repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;viewModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature1ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;viewModel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="kt"&gt;Feature2ViewModelWrapper&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;viewModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature2ViewModel&lt;/span&gt;

    &lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Feature2Repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;viewModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature2ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;viewModel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far, so good. Our KMP code has been successfully integrated into iOS, and let's assume this code is already in production. Now, let's add a new parameter to &lt;code&gt;Feature1ViewModel&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Feature1ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Feature1Repository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repository2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Feature1Repository2&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;fetchRepository2&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When exporting the XCFramework, &lt;strong&gt;the iOS code will break&lt;/strong&gt;, because the &lt;code&gt;Feature1ViewModelWrapper&lt;/code&gt; class does not pass the new &lt;code&gt;repository2&lt;/code&gt; parameter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="kt"&gt;Feature1ViewModelWrapper&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;viewModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature1ViewModel&lt;/span&gt;

    &lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Feature1Repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;//irá quebrar, `repository2` não está sendo enviado&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;viewModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature1ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's assume this XCFramework has already been generated and published, but not yet integrated into the iOS repository. The team responsible for &lt;code&gt;feature2&lt;/code&gt; needs a new capability and also has to make a change to &lt;code&gt;Feature2ViewModel&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Feature2ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Feature2Repository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repository2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Feature2Repository2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When exporting the XCFramework, &lt;strong&gt;the iOS code will break&lt;/strong&gt; for the same reason as above, since the &lt;code&gt;Feature2ViewModelWrapper&lt;/code&gt; class does not pass the new &lt;code&gt;repository2&lt;/code&gt; parameter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="kt"&gt;Feature2ViewModelWrapper&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;viewModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature2ViewModel&lt;/span&gt;

    &lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Feature2Repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;viewModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;KotlinSharedFeature2ViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;//irá quebrar, `repository2` não foi passado como parametro&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;viewModel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Putting the scenario above together, we have the following timeline:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;Feature1ViewModel&lt;/code&gt; and &lt;code&gt;Feature2ViewModel&lt;/code&gt; are integrated into the iOS project.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Feature1ViewModel&lt;/code&gt; is updated to include a new parameter, causing a break on iOS.&lt;/li&gt;
&lt;li&gt;After the changes are merged, a new version of the &lt;code&gt;XCFramework&lt;/code&gt; is generated and published through tools such as Swift Package Manager, CocoaPods, version control, etc.&lt;/li&gt;
&lt;li&gt;This version, containing the changes to &lt;code&gt;Feature1ViewModel&lt;/code&gt;, results in breaks on iOS.&lt;/li&gt;
&lt;li&gt;Before that version is integrated into the iOS project (fixing the break), the &lt;code&gt;feature2&lt;/code&gt; team makes changes to &lt;code&gt;Feature2ViewModel&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A subsequent version of the &lt;code&gt;XCFramework&lt;/code&gt; is generated and published, including the new changes to &lt;code&gt;Feature2ViewModel&lt;/code&gt;, which also break iOS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;In this complex scenario:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The team responsible for &lt;code&gt;feature2&lt;/code&gt; has to wait for the &lt;code&gt;feature1&lt;/code&gt; team to fix the iOS breaks before it can integrate the &lt;code&gt;feature2&lt;/code&gt; fix. This can create a wait-and-fix cycle that slows down the delivery of new features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;To summarize and simplify:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Version 1.0.0 of the XCFramework, already integrated into iOS, works without issues.&lt;/li&gt;
&lt;li&gt;Version 1.1.0 introduces a &lt;code&gt;breaking change&lt;/code&gt; in &lt;code&gt;feature1&lt;/code&gt;, causing problems.&lt;/li&gt;
&lt;li&gt;Version 1.2.0 brings a breaking change in &lt;code&gt;feature2&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Version 1.2.0 can only be integrated into iOS after the &lt;code&gt;feature1&lt;/code&gt; fixes from version 1.1.0 have been integrated and validated.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqvgn3e4hbxd476z6p70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqvgn3e4hbxd476z6p70.png" alt="Timeline of KMP breaking changes" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pain points of KMP development
&lt;/h2&gt;

&lt;p&gt;Integrating KMP code into existing iOS projects, especially those built with SwiftUI, presents unique challenges due to the need for direct communication between modules. This challenge is less intense in projects that use Compose Multiplatform (CMP), where communication between modules happens in a more indirect, decoupled way.&lt;/p&gt;

&lt;p&gt;The "fat" framework model imposes several complications on KMP development, among them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dependency management:&lt;/strong&gt; a specific timeline must be followed to incorporate KMP code changes into the iOS repository, ensuring that all dependencies stay in sync.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sensitivity to changes:&lt;/strong&gt; any change to attributes, parameters, or functions can break the iOS project, requiring immediate fixes to keep the project stable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency between teams&lt;/strong&gt;: developers often have to wait for other teams to fix iOS breaks before they can move forward with integrating new KMP functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Impact on the day-to-day development cycle
&lt;/h2&gt;

&lt;p&gt;On a day-to-day basis, these challenges become even more evident. For example, when we integrate new functionality into the KMP project's main branch (&lt;code&gt;main&lt;/code&gt;), usually tied to Android development, and then try to test it on iOS, we frequently run into breaks caused by changes that have not yet been integrated into the iOS project.&lt;/p&gt;

&lt;p&gt;To mitigate this, we usually generate an XCFramework locally for iOS testing. However, this approach still risks breaking if the main branch contains changes not yet synchronized with iOS, creating a continuous cycle of identifying and fixing breaks that significantly delays development.&lt;/p&gt;
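&lt;p&gt;That local loop usually boils down to one Gradle invocation, then pointing Xcode at the generated artifact. A sketch, reusing the &lt;code&gt;KotlinShared&lt;/code&gt; name from the earlier examples (the exact output path depends on your module layout):&lt;/p&gt;

```
# Build the XCFramework locally for iOS testing
./gradlew assembleKotlinSharedXCFramework

# The artifact lands under the module's build directory, e.g.:
#   build/XCFrameworks/debug/KotlinShared.xcframework
#   build/XCFrameworks/release/KotlinShared.xcframework
```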

&lt;p&gt;This creates a huge day-to-day bottleneck: it is genuinely hard to identify which team is responsible for a break and, consequently, we have to wait for the fix before integrating the KMP code into iOS.&lt;/p&gt;

&lt;p&gt;In small teams or personal projects this is not a problem, but at scale it is currently the single biggest bottleneck in KMP development.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to work around the problem
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Better communication&lt;/strong&gt;: strengthening communication between development teams to plan and synchronize changes can reduce the frequency of unexpected breaks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test automation&lt;/strong&gt;: implementing automated tests and continuous integration processes to detect and fix breaks before they impact other developers or the main project.&lt;/li&gt;
&lt;/ul&gt;
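&lt;p&gt;Another habit that softens these breaks is evolving exported Kotlin APIs in a backward-compatible way. A minimal sketch using the same hypothetical classes as before: keeping the old constructor around as a secondary constructor means the previously exported Objective-C signature should keep working while the iOS side catches up.&lt;/p&gt;

```kotlin
class Feature1Repository
class Feature1Repository2

class Feature1ViewModel(
    val repository: Feature1Repository,
    val repository2: Feature1Repository2
) {
    // Secondary constructor preserving the old one-argument signature;
    // public constructors are exported to Objective-C, so existing
    // Swift call sites keep compiling until they migrate
    constructor(repository: Feature1Repository) :
            this(repository, Feature1Repository2())

    fun fetch() = Unit
    fun fetchRepository2() = Unit
}
```

&lt;p&gt;This only helps when a reasonable default for the new dependency exists; when it does not, coordinating the release timeline remains necessary.&lt;/p&gt;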

&lt;p&gt;There is a broader strategy we can adopt, but it will have to wait for a future article. First, we need to climb a few more rungs of the KMP knowledge ladder so we can properly understand that alternative strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It is important to be realistic and face a technology's real problems. Sometimes, in the heat of a new technology's "boom", we overlook aspects that are crucial for scaling a solution; if we don't address those problems, we can (and will!) end up with a huge development bottleneck. That can create internal noise in your team, such as people not adopting the technology because of a poor developer experience, and constant code breaks caused by other teams in other contexts.&lt;/p&gt;

&lt;p&gt;Understanding the nature of the XCFramework is crucial to having a scalable, healthy project with an end-to-end development experience free of bottlenecks.&lt;/p&gt;

&lt;p&gt;In the next articles, we will dig into the code that gets exported to iOS, some characteristics and limitations of the Kotlin &amp;gt; Objective-C and Objective-C &amp;gt; Swift bridges, how to write our Kotlin code so it reads idiomatically in Swift, and some approaches to improve the Kotlin &amp;lt;--&amp;gt; Swift integration.&lt;/p&gt;

&lt;p&gt;See you next time, bye! &lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://dzone.com/articles/the-skinny-on-fat-thin-hollow-and-uber" rel="noopener noreferrer"&gt;https://dzone.com/articles/the-skinny-on-fat-thin-hollow-and-uber&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kotlin</category>
      <category>kmp</category>
      <category>mobile</category>
      <category>braziliandevs</category>
    </item>
    <item>
      <title>KMP-102 - XCFramework for KMP Devs</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Wed, 29 May 2024 11:30:23 +0000</pubDate>
      <link>https://forem.com/rsicarelli/kmp-102-xcframework-para-devs-kmp-4a4b</link>
      <guid>https://forem.com/rsicarelli/kmp-102-xcframework-para-devs-kmp-4a4b</guid>
      <description>&lt;h2&gt;
  
  
  KMP102 - XCFramework for Kotlin Multiplatform Devs
&lt;/h2&gt;

&lt;p&gt;Hello! Welcome to the KMP-102 series. We will dive deeper into Kotlin Multiplatform concepts, learning more about how to integrate our Kotlin code on iOS and other platforms.&lt;/p&gt;

&lt;p&gt;To kick off this series, let's learn more about a special file format for sharing code with the Apple family: the &lt;code&gt;XCFramework&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to Apple's &lt;code&gt;.framework&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;a href="https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPFrameworks/Concepts/WhatAreFrameworks.html" rel="noopener noreferrer"&gt;framework&lt;/a&gt; is a bundle containing a set of resources and source code meant to be used in projects for the Apple family. In the JVM world, the equivalent is a &lt;code&gt;.jar&lt;/code&gt; or, in the Android case, an &lt;code&gt;.aar&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It is a pre-compiled format that can be used freely across Xcode projects. This file format makes it easy to build libraries for Apple devices, allowing them to be distributed and consumed through package managers such as CocoaPods or the Swift Package Manager.&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdeveloper.apple.com%2Flibrary%2Farchive%2Fdocumentation%2FGeneral%2FConceptual%2FDevPedia-CocoaCore%2FArt%2Fframework_2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdeveloper.apple.com%2Flibrary%2Farchive%2Fdocumentation%2FGeneral%2FConceptual%2FDevPedia-CocoaCore%2FArt%2Fframework_2x.png" alt="AppKit.framework" width="800" height="261"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to the XCFramework
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://developer.apple.com/documentation/xcode/creating-a-multi-platform-binary-framework-bundle" rel="noopener noreferrer"&gt;XCFramework&lt;/a&gt; is a type of bundle, or artifact, that makes it easier to distribute libraries for the Apple family. Basically, instead of shipping several &lt;code&gt;.frameworks&lt;/code&gt;, one per platform, we have a single &lt;code&gt;.xcframework&lt;/code&gt; containing multiple &lt;code&gt;.frameworks&lt;/code&gt;, each representing a specific platform supported by the library.&lt;/p&gt;

&lt;p&gt;Kotlin Multiplatform, more specifically Kotlin/Native, uses this artifact to pre-compile Kotlin code to Objective-C, guaranteeing full interoperability with Swift. With it, our Kotlin code is easily shared across all of the project's supported targets, greatly simplifying development: instead of compiling separate &lt;code&gt;.frameworks&lt;/code&gt; for each supported KMP target, we compile a single &lt;code&gt;.xcframework&lt;/code&gt; bundling every target and processor architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Generating an XCFramework in KMP
&lt;/h3&gt;

&lt;p&gt;Under the hood, the KGP (Kotlin Gradle Plugin) uses the Xcode toolchain and offers an API that lets us create an &lt;code&gt;XCFramework&lt;/code&gt; from our &lt;code&gt;build.gradle.kts&lt;/code&gt; files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;xcFramework&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;XCFramework&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;xcFrameworkName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"KotlinShared"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nf"&gt;iosX64&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nf"&gt;iosArm64&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nf"&gt;iosSimulatorArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;iosTarget&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="n"&gt;iosTarget&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;binaries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;framework&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;baseName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"KotlinShared"&lt;/span&gt;
            &lt;span class="n"&gt;isStatic&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;

            &lt;span class="n"&gt;xcFramework&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After syncing the project, we can see that the &lt;code&gt;assembleKotlinSharedXCFramework&lt;/code&gt; task has been registered. Note that the middle of the task name is &lt;code&gt;KotlinShared&lt;/code&gt;, which matches the &lt;code&gt;xcFrameworkName&lt;/code&gt; parameter of the &lt;code&gt;XCFramework&lt;/code&gt; class:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb92ucpkgjxoqjwp0lhnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb92ucpkgjxoqjwp0lhnr.png" alt="XCFramework registered task" width="730" height="221"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Analyzing the output of the assemble...XCFramework task
&lt;/h3&gt;

&lt;p&gt;When we run the &lt;code&gt;assembleKotlinSharedXCFramework&lt;/code&gt; task, Kotlin/Native generates the &lt;code&gt;.xcframeworks&lt;/code&gt; for all the targets we defined in &lt;code&gt;build.gradle.kts&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This artifact is exactly the file we need to link into the Xcode project in order to consume our KMP code compiled to Objective-C!&lt;/p&gt;
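&lt;p&gt;For reference, the generated bundle typically has a layout like the following (a sketch; the exact slice names depend on the targets declared in Gradle):&lt;/p&gt;

```
KotlinShared.xcframework/
  Info.plist
  ios-arm64/
    KotlinShared.framework/
  ios-arm64_x86_64-simulator/
    KotlinShared.framework/
```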

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Be careful with the project name! Special characters, such as "-", can result in an error, even though the XCFramework is still generated.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;
  &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fxcframework-task-result.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Frsicarelli%2FKMP-101%2Fblob%2Fmain%2Fposts%2Fassets%2Fxcframework-task-result.png%3Fraw%3Dtrue" alt="AppKit.framework" width="644" height="624"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  NativeBuildTypes: debug and release
&lt;/h2&gt;

&lt;p&gt;Observe que temos dois frameworks gerados: a versão &lt;code&gt;debug&lt;/code&gt; e a versão &lt;code&gt;release&lt;/code&gt;. Esses dois tipos possuem características especiais, provenientes da classe &lt;a href="https://github.com/JetBrains/kotlin/blob/master/libraries/tools/kotlin-gradle-plugin-api/src/common/kotlin/org/jetbrains/kotlin/gradle/plugin/mpp/NativeBinaryTypes.kt" rel="noopener noreferrer"&gt;NativeBinaryType&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;Looking at this enum, we can see that the &lt;code&gt;release&lt;/code&gt; version has &lt;code&gt;optimized = true&lt;/code&gt; and &lt;code&gt;debuggable = false&lt;/code&gt;, while the &lt;code&gt;debug&lt;/code&gt; version has &lt;code&gt;optimized = false&lt;/code&gt; and &lt;code&gt;debuggable = true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As you might imagine, we need to be careful about which &lt;code&gt;XCFramework&lt;/code&gt; we use in each workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For local development, the &lt;code&gt;debug&lt;/code&gt; version is the ideal choice, since it lets us debug our KMP code.&lt;/li&gt;
&lt;li&gt;For production, the &lt;code&gt;release&lt;/code&gt; version is the right choice, since the binary is optimized and no debug information ships in the final product.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// kotlin/libraries/tools/kotlin-gradle-plugin-api/src/common/kotlin/org/jetbrains/kotlin/gradle/plugin/mpp/NativeBinaryTypes.kt&lt;/span&gt;

&lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;NativeBuildType&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;optimized&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;debuggable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Named&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;RELEASE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;DEBUG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Controlling which build type gets generated
&lt;/h2&gt;

&lt;p&gt;The configuration that produces the binary types comes from the &lt;code&gt;iosTarget.binaries.framework()&lt;/code&gt; function. Looking at the &lt;a href="https://github.com/JetBrains/kotlin/blob/master/libraries/tools/kotlin-gradle-plugin/src/common/kotlin/org/jetbrains/kotlin/gradle/dsl/AbstractKotlinNativeBinaryContainer.kt" rel="noopener noreferrer"&gt;AbstractKotlinNativeBinaryContainer&lt;/a&gt; class, we can see that &lt;code&gt;framework()&lt;/code&gt; has a &lt;code&gt;buildTypes&lt;/code&gt; argument with a default value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// kotlin/libraries/tools/kotlin-gradle-plugin/src/common/kotlin/org/jetbrains/kotlin/gradle/dsl/AbstractKotlinNativeBinaryContainer.kt&lt;/span&gt;

&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;framework&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;namePrefix&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;buildTypes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;NativeBuildType&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;NativeBuildType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DEFAULT_BUILD_TYPES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Framework&lt;/span&gt;&lt;span class="p"&gt;.()&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createBinaries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;namePrefix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;namePrefix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;NativeOutputKind&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;FRAMEWORK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buildTypes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nc"&gt;Framework&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// kotlin/libraries/tools/kotlin-gradle-plugin-api/src/common/kotlin/org/jetbrains/kotlin/gradle/plugin/mpp/NativeBinaryTypes.kt&lt;/span&gt;
&lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;NativeBuildType&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="p"&gt;.)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Named&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="k"&gt;companion&lt;/span&gt; &lt;span class="k"&gt;object&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;DEFAULT_BUILD_TYPES&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DEBUG&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;RELEASE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During development, you may want to avoid compiling both versions, since doing so increases build times. To achieve that, we just need to adapt our &lt;code&gt;build.gradle.kts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;compileOnlyDebug&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;// some gradle.properties flag will help you here!&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;buildType&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compileOnlyDebug&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nc"&gt;NativeBuildType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DEBUG&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="nc"&gt;NativeBuildType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;RELEASE&lt;/span&gt;

    &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nf"&gt;iosX64&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nf"&gt;iosArm64&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nf"&gt;iosSimulatorArm64&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;iosTarget&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="n"&gt;iosTarget&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;binaries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;framework&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;buildTypes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buildType&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;baseName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"KotlinShared"&lt;/span&gt;
            &lt;span class="n"&gt;isStatic&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;

            &lt;span class="n"&gt;xcFramework&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
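&lt;p&gt;As the comment in the snippet above hints, the &lt;code&gt;compileOnlyDebug&lt;/code&gt; value can come from &lt;code&gt;gradle.properties&lt;/code&gt; instead of being hard-coded. A minimal sketch, assuming a hypothetical property named &lt;code&gt;onlyDebugXCFramework&lt;/code&gt;:&lt;/p&gt;

```kotlin
// gradle.properties (hypothetical flag name):
// onlyDebugXCFramework=true

// build.gradle.kts
val compileOnlyDebug: Boolean =
    (project.findProperty("onlyDebugXCFramework") as? String)
        ?.toBoolean()
        ?: true // default to the faster debug-only build locally
```

This way, CI can pass &lt;code&gt;-PonlyDebugXCFramework=false&lt;/code&gt; to build the &lt;code&gt;release&lt;/code&gt; framework while local builds stay fast.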



&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;The XCFramework is a central topic in the Kotlin Multiplatform (KMP) universe. Understanding what it is, how it works, and how to generate it gives us greater control over, and insight into, what KMP does behind the scenes.&lt;/p&gt;

&lt;p&gt;In the next article, we'll dig deeper into the &lt;code&gt;framework()&lt;/code&gt; function!&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kotlinlang.org/docs/multiplatform-build-native-binaries.html" rel="noopener noreferrer"&gt;KotlinLang | Build final native binaries&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@mihail_salari/embracing-the-power-of-xcframeworks-a-comprehensive-guide-for-ios-developers-77fe192d47fe" rel="noopener noreferrer"&gt;Embracing the Power of XCFrameworks: A Comprehensive Guide for iOS Developers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kotlin</category>
      <category>kmp</category>
      <category>ios</category>
      <category>braziliandevs</category>
    </item>
    <item>
      <title>Kotlin Koans BR: Extension functions and properties</title>
      <dc:creator>Rodrigo Sicarelli</dc:creator>
      <pubDate>Sat, 06 Apr 2024 17:18:50 +0000</pubDate>
      <link>https://forem.com/rsicarelli/kotlin-koans-br-extension-functions-e-properties-funcoes-e-propriedades-estendidas-e39</link>
      <guid>https://forem.com/rsicarelli/kotlin-koans-br-extension-functions-e-properties-funcoes-e-propriedades-estendidas-e39</guid>
      <description>&lt;h2&gt;
  
  
  🔗 &lt;a href="https://play.kotlinlang.org/koans/Classes/Extension%20functions/Task.kt" rel="noopener noreferrer"&gt;Task&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Implement the extension functions &lt;code&gt;Int.r()&lt;/code&gt; and &lt;code&gt;Pair.r()&lt;/code&gt; so that they convert an &lt;code&gt;Int&lt;/code&gt; and a &lt;code&gt;Pair&lt;/code&gt; into a &lt;code&gt;RationalNumber&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to extension functions in Kotlin
&lt;/h2&gt;

&lt;p&gt;In Kotlin, &lt;a href="https://kotlinlang.org/docs/extensions.html#extension-functions" rel="noopener noreferrer"&gt;extension functions&lt;/a&gt; are a powerful tool that lets you add new functionality to a class without having to modify or inherit from it: you "extend" it.&lt;/p&gt;

&lt;p&gt;This tool helps us better isolate our code, reuse it, and give it context depending on where it's used.&lt;/p&gt;

&lt;p&gt;Suppose you have the following hypothetical class that calculates shipping costs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DeliveryCalculator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateDefaultDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;10.50&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"${value * 100}%"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateFastDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;22.90&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"${value * 100}%"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateScheduledDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;15.50&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"${value * 100}%"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we're repeating the percentage-formatting logic three times: &lt;code&gt;"${value * 100}%"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To avoid this repetition, we can extract just this calculation into a function that receives the &lt;code&gt;value&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DeliveryCalculator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateDefaultDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;10.50&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateFastDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;22.90&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateScheduledDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;15.50&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"${value * 100}%"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This option is already great and helps us reuse our code. However, with Kotlin's extension functions there is a more idiomatic and elegant way to solve the same problem.&lt;/p&gt;

&lt;p&gt;When you create an extension, the function acts as if it were a member of that class, but internally the compiler treats it as just a regular function that takes an instance of that class as its first parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DeliveryCalculator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateDefaultDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;10.50&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateFastDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;22.90&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateScheduledDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;15.50&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"${this * 100}%"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, more concisely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DeliveryCalculator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateDefaultDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;10.50&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateFastDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;22.90&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateScheduledDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;15.50&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"${this * 100}%"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The biggest advantage is that we're giving the function context and extending the &lt;code&gt;Double&lt;/code&gt; class (which is final), adapting it to our specific use case only.&lt;/p&gt;

&lt;p&gt;It's also possible to declare these extensions at the top level and reuse them across the entire repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DeliveryCalculator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateDefaultDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;10.50&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateFastDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;22.90&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateScheduledDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;15.50&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Pública para todo o repositório&lt;/span&gt;
&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nc"&gt;Double&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formatAsPercentage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"${this * 100}%"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Extension properties
&lt;/h3&gt;

&lt;p&gt;In the case above, a function can feel redundant, since &lt;code&gt;formatAsPercentage()&lt;/code&gt; takes no parameters.&lt;/p&gt;

&lt;p&gt;To address this, Kotlin also lets us declare extension properties on a class, making the code even cleaner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DeliveryCalculator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateDefaultDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;10.50&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;asPercentage&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateFastDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;22.90&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;asPercentage&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;calculateScheduledDelivery&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;15.50&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;asPercentage&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;Double&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;asPercentage&lt;/span&gt;
        &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"${this * 100}%"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How do they work?
&lt;/h3&gt;

&lt;p&gt;Under the hood, an extension is just a static function that receives the object being "extended" (the receiver object) as its first argument.&lt;/p&gt;

&lt;p&gt;Because of this, there is no performance overhead in using extension functions compared to regular functions.&lt;/p&gt;
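&lt;p&gt;A minimal sketch of this equivalence (the plain-function name below is illustrative, not what the compiler actually emits):&lt;/p&gt;

```kotlin
// The extension we write:
fun Double.formatAsPercentage(): String = "${this * 100}%"

// Conceptually, the compiler turns it into a regular function whose
// first parameter is the receiver (hypothetical name for illustration):
fun formatAsPercentageStatic(receiver: Double): String = "${receiver * 100}%"

fun main() {
    // Both call shapes produce the same result
    check(0.5.formatAsPercentage() == formatAsPercentageStatic(0.5))
    println(0.5.formatAsPercentage()) // prints "50.0%"
}
```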

&lt;h3&gt;
  
  
  Advantages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Better code readability&lt;/strong&gt;: Calling a method on an object is often more intuitive than passing the object as an argument to a function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoids namespace pollution&lt;/strong&gt;: Instead of creating generic utility functions, you can create your own private extensions scoped to the context where they're used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoids unnecessary subclasses&lt;/strong&gt;: Instead of creating a subclass just to add some functionality, you can create extensions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Disadvantages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;They don't override original methods&lt;/strong&gt;: If the original class has a method with the same signature as the extension function, the original method is the one that gets called.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited access&lt;/strong&gt;: Extension functions cannot access a class's protected or private members.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;They can cause confusion&lt;/strong&gt;: Overusing them without proper organization can make the code hard to understand.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Testability
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolation and purity&lt;/strong&gt;: Ideally, extension functions should behave as pure functions, which makes tests more predictable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access restriction&lt;/strong&gt;: Their inability to access private members makes extension functions easier to test.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt;: Extension functions should have a single responsibility, which makes them easier to test.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pair
&lt;/h3&gt;

&lt;p&gt;In this exercise, we come across a Kotlin-specific class.&lt;/p&gt;

&lt;p&gt;In Kotlin, &lt;a href="https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-pair/" rel="noopener noreferrer"&gt;Pair&lt;/a&gt; is a class that represents a value made up of two elements, a 'couple'. It's a simple way to store two related values together, without any particular semantics.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Pair&lt;/code&gt; is a class defined in the &lt;code&gt;stdlib&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;Pair&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;out&lt;/span&gt; &lt;span class="nc"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;out&lt;/span&gt; &lt;span class="nc"&gt;B&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;first&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;second&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;B&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
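&lt;p&gt;With &lt;code&gt;Pair&lt;/code&gt; in hand, the ideas in this article combine into one possible solution to the koan. A sketch, reproducing the koan's &lt;code&gt;RationalNumber&lt;/code&gt; so it's self-contained:&lt;/p&gt;

```kotlin
// The koan's RationalNumber, reproduced here so the sketch compiles on its own
data class RationalNumber(val numerator: Int, val denominator: Int)

// An Int becomes a rational number with denominator 1
fun Int.r(): RationalNumber = RationalNumber(this, 1)

// A Pair maps its two elements to numerator and denominator
fun Pair<Int, Int>.r(): RationalNumber = RationalNumber(first, second)

fun main() {
    check(4.r() == RationalNumber(4, 1))
    check(Pair(2, 3).r() == RationalNumber(2, 3))
    println("All checks passed")
}
```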



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kotlin's extension functions and properties are tools that will accompany you throughout your entire journey as a Kotlin developer.&lt;/p&gt;

&lt;p&gt;They help us organize and reuse our code, adding context and encouraging pure, isolated functions that make the source code easier to understand.&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>braziliandevs</category>
      <category>mobile</category>
      <category>kmp</category>
    </item>
  </channel>
</rss>
