<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Aditya</title>
    <description>The latest articles on Forem by Aditya (@aditya172926).</description>
    <link>https://forem.com/aditya172926</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F736487%2Fe1366569-4787-45ad-9be1-db17da0384a8.png</url>
      <title>Forem: Aditya</title>
      <link>https://forem.com/aditya172926</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aditya172926"/>
    <language>en</language>
    <item>
      <title>How I worked with Docker profiles?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Wed, 04 Feb 2026 06:33:06 +0000</pubDate>
      <link>https://forem.com/aditya172926/how-i-worked-with-docker-profiles-4lb</link>
      <guid>https://forem.com/aditya172926/how-i-worked-with-docker-profiles-4lb</guid>
      <description>&lt;p&gt;I was working on a script that is supposed to be executed on Ethereum mainnet and do some operations like Deposit some tokens in a Defi vault and then withdraw them too, get some on-chain data and show it works successfully.&lt;/p&gt;

&lt;p&gt;I also wanted it to work for anyone who clones my repo and tries to run the script, so naturally I used Docker to make that easy.&lt;/p&gt;

&lt;p&gt;Doing write operations on Ethereum mainnet costs real money, of which I have only so much. Instead, you can create a fork of the actual network and start a local node that simulates it, and have the script perform its write operations against the fork.&lt;/p&gt;

&lt;p&gt;The fork doesn’t cost you real money, and you can do a lot of stuff there. In short, it lets you simulate transactions the way the real network would, without spending anything.&lt;/p&gt;

&lt;p&gt;Naturally, I wanted all of this to happen with a single command and different execution profiles: the script’s code stays the same, but I can run it in fork mode or mainnet mode whenever I want.&lt;/p&gt;

&lt;p&gt;To work with fork mode I initially used two terminal sessions: one ran the forked network node, the other executed the script commands. So normally this is a two-step process, but with Docker profiles I was able to do it in a single command.&lt;/p&gt;
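&lt;p&gt;The manual two-terminal version looks roughly like this (a sketch: &lt;code&gt;anvil&lt;/code&gt; is Foundry’s local node, and &lt;code&gt;run_script.sh&lt;/code&gt; is a hypothetical stand-in for however your script is launched):&lt;/p&gt;

```shell
# Terminal 1: start a local node that forks mainnet state (needs a real RPC_URL)
anvil --host 0.0.0.0 --port 8545 --fork-url "$RPC_URL"

# Terminal 2: point the script at the local fork instead of mainnet
RPC_URL=http://127.0.0.1:8545 ./run_script.sh "$TOKEN_ADDRESS" "$TOKEN_AMOUNT"
```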

&lt;p&gt;This is where Docker profiles were really helpful: they let me switch the environment in which the script executes, mainnet or fork.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggtae1wo3dbdazo2wzkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggtae1wo3dbdazo2wzkb.png" width="800" height="917"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;em&gt;Docker profiles execution flow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here is the docker-compose.yml that shows how I used profiles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  anvil:
    build:
      context: .
      dockerfile: Dockerfile.anvil
    container_name: lombard-anvil
    command:
      - --host
      - "0.0.0.0"
      - --port
      - "8545"
      - --fork-url
      - ${RPC_URL}
      - --silent
    ports:
      - "8545:8545"
    healthcheck:
      test:
        - CMD
        - cast
        - block-number
        - --rpc-url
        - http://127.0.0.1:8545
      interval: 3s
      timeout: 3s
      retries: 10
      start_period: 5s
    profiles:
      - fork

  # fork mode
  app-fork:
    build: .
    container_name: lombard-app-fork
    env_file:
      - .env
    environment:
      - RPC_URL=http://anvil:8545
      - PRIVATE_KEY=${PRIVATE_KEY}
    depends_on:
      anvil:
        condition: service_healthy
    command: ["${TOKEN_ADDRESS}", "${TOKEN_AMOUNT}"]
    profiles:
      - fork

  # mainnet mode
  app-mainnet:
    build: .
    container_name: lombard-app-mainnet
    env_file:
      - .env
    environment:
      - RPC_URL=${RPC_URL}
      - PRIVATE_KEY=${PRIVATE_KEY}
    command: ["${TOKEN_ADDRESS}", "${TOKEN_AMOUNT}"]
    profiles:
      - mainnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So this compose file defines three services: anvil, app-fork and app-mainnet.&lt;/p&gt;

&lt;p&gt;The anvil service forks the latest Ethereum mainnet state and runs a local blockchain node. This is perfect for simulating the script where I don’t want to spend real money.&lt;/p&gt;

&lt;p&gt;The app-fork service sets up everything the script needs and makes sure the anvil fork is up and healthy before executing the script. It also overrides the RPC_URL environment variable to use anvil’s RPC instead of the mainnet one, so transactions go to the anvil node and not a real Ethereum node.&lt;/p&gt;

&lt;p&gt;Lastly, app-mainnet simply runs the script in mainnet mode. It doesn’t need a local anvil node, since it takes the live RPC_URL from .env and sends transactions to the real Ethereum network. This costs real money.&lt;/p&gt;
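&lt;p&gt;The compose file expects a few variables from &lt;code&gt;.env&lt;/code&gt;. A minimal sketch, with placeholder values you would replace with your own:&lt;/p&gt;

```shell
# .env — all values below are placeholders, not real credentials
RPC_URL=https://eth-mainnet.example/your-api-key   # live mainnet RPC endpoint
PRIVATE_KEY=0xabc123...                            # signer key (keep this secret)
TOKEN_ADDRESS=0x0000000000000000000000000000000000000000
TOKEN_AMOUNT=100
```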

&lt;p&gt;With this compose file and the commands below, I can easily switch the environment in which my script executes.&lt;/p&gt;

&lt;p&gt;To execute in fork mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose --profile fork up --build --abort-on-container-exit --exit-code-from app-fork
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To execute in mainnet mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose --profile mainnet up --build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the link to the repo of the script I was talking about: &lt;a href="https://github.com/aditya172926/lombard-vault-challenge" rel="noopener noreferrer"&gt;https://github.com/aditya172926/lombard-vault-challenge&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope this gives you a better idea about working with Docker profiles.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>Tracking CPU spike!</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Fri, 14 Nov 2025 21:30:14 +0000</pubDate>
      <link>https://forem.com/aditya172926/tracking-cpu-spike-3c48</link>
      <guid>https://forem.com/aditya172926/tracking-cpu-spike-3c48</guid>
      <description>&lt;p&gt;Looking at my CPU and Memory consumption spike as I launch Cursor on my machine ⚠️🔺🔺&lt;/p&gt;

&lt;p&gt;A lot is going on in the background. I think rust-analyzer is one of the heavier consumers too.&lt;/p&gt;

&lt;p&gt;Tracking the consumption using Stomata btw.&lt;br&gt;
&lt;a href="https://github.com/aditya172926/stomata-cli" rel="noopener noreferrer"&gt;https://github.com/aditya172926/stomata-cli&lt;/a&gt;&lt;br&gt;
Consider giving the GitHub repo a ⭐; I’m building more on it.&lt;/p&gt;

&lt;p&gt;The next release of Stomata, v0.1.4, will include single-process metrics tracking, followed by network metrics.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgntlvy7bgl2pvszva0g9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgntlvy7bgl2pvszva0g9.gif" alt="Stomata tracking resource consumption" width="600" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>cli</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Resource consumption by Rust</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Tue, 11 Nov 2025 08:47:20 +0000</pubDate>
      <link>https://forem.com/aditya172926/resource-consumption-by-rust-2p1d</link>
      <guid>https://forem.com/aditya172926/resource-consumption-by-rust-2p1d</guid>
      <description>&lt;p&gt;You run &lt;code&gt;**cargo build**&lt;/code&gt; on a large Rust codebase, something like compiling a blockchain node from source and saw your system freeze or face an out-of-memory error.&lt;/p&gt;

&lt;p&gt;I faced this issue when building &lt;a href="https://github.com/o1-labs/mina-rust" rel="noopener noreferrer"&gt;Mina Protocol’s&lt;/a&gt; Rust node on my 16GB machine, and it got me thinking: what is happening on my machine during Rust compilation, and can it be optimized for a successful build?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9vr2w7tnsbut1yhkw17.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9vr2w7tnsbut1yhkw17.gif" alt="Frozen machine" width="498" height="312"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Frozen machine…reaction on point&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This happens because compiled languages like Rust can demand substantial system resources during the build process. Once you understand what goes on when compiling a Rust project and why it can fail, you should be able to optimize builds for resource utilization and get a successful build, without actually spending large sums on upgrading hardware.&lt;/p&gt;
&lt;h2&gt;
  
  
  Compilation resource consumption
&lt;/h2&gt;

&lt;p&gt;Compilation is a CPU-intensive process. rustc, the Rust compiler, performs complex operations such as lexical analysis, parsing, type checking, borrow checking, monomorphization, LLVM optimization, and code generation.&lt;/p&gt;

&lt;p&gt;Cargo parallelizes the build when possible. On a modern multi-core system, Cargo will attempt to utilize all available CPU cores, which can cause system-wide slowdowns and heat up the machine.&lt;/p&gt;

&lt;p&gt;While CPU usage is expected, memory consumption is the real bottleneck. Several factors contribute to high memory usage during the build process. One is &lt;strong&gt;monomorphization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Rust uses generics. Since Rust requires every variable’s type to be known, generics allow the same code to operate on multiple types. Rust generics work through monomorphization: the compiler generates a separate copy of the generic code for each concrete type it is used with.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn process_something&amp;lt;T&amp;gt;(data: T) { /* ... */ }

process_something(10u64); // instantiated with T = u64
process_something(10.0);  // instantiated with T = f64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this example, upon compilation the Rust compiler would generate code roughly equivalent to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn process_something_u64(data: u64) {...}
fn process_something_f64(data: f64) {...}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The compiler generates a separate version of &lt;code&gt;process_something()&lt;/code&gt;, each optimized for the specific concrete type the generic function is instantiated with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1s52rah6tjctt323c2a.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1s52rah6tjctt323c2a.gif" alt="Exploding code" width="498" height="249"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;yeah multiple functions of the generic type&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Large codebases rely heavily on generic libraries for async operations, data transformations, serialization, etc., and this can create a lot of function variants, each of which requires additional memory to compile.&lt;/p&gt;

&lt;p&gt;A growing dependency graph also contributes to this problem. If you are working on a large Rust codebase, chances are it is not a standalone project. It might depend on web frameworks like axum, serialization crates, clients, cryptography libraries, etc.&lt;/p&gt;

&lt;p&gt;Each dependency also brings its own dependencies, so a project with 10 direct dependencies might actually be compiling 100 crates, and each crate being compiled consumes more memory for syntax trees, type information, debug information, and so on.&lt;/p&gt;

&lt;p&gt;Next come &lt;strong&gt;Procedural Macros&lt;/strong&gt;. They are particularly memory-hungry because they are compiled into separate libraries that the compiler loads and runs at compile time, and they can generate large amounts of additional code from small annotations, all of which must then be compiled too.&lt;/p&gt;

&lt;p&gt;After this comes &lt;strong&gt;LLVM IR&lt;/strong&gt; to eat more memory. It stands for &lt;strong&gt;Low-Level Virtual Machine Intermediate Representation&lt;/strong&gt;: a portable, low-level, assembly-like language that sits between high-level Rust code and the final machine code. The LLVM optimizer runs many transformations and optimizations like constant folding, inlining, dead-code elimination, vectorization, etc. This can be memory-intensive in large Rust projects; in fact it can be the most memory- and CPU-hungry part of a Rust build.&lt;/p&gt;

&lt;p&gt;Memory is consumed by these optimizations, plus code generation for the target architecture and link-time optimization (LTO). With LTO the memory requirements can increase sharply, as LLVM must keep much of the program in memory simultaneously to get a whole-program view.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Fail Reasons
&lt;/h2&gt;

&lt;p&gt;The most common reason behind builds failing is insufficient RAM. A developer machine with 16 GB of RAM can struggle to build large projects while other applications are running at the same time.&lt;/p&gt;

&lt;p&gt;During the build, RAM can get exhausted, and the system then resorts to &lt;strong&gt;swap space&lt;/strong&gt;: disk-based virtual memory that the operating system uses when RAM is full. Swap is slow compared to RAM, but as a backup area that extends it, it allows the system to keep running instead of crashing when memory is exhausted.&lt;/p&gt;

&lt;p&gt;This process is managed by the kernel and happens automatically.&lt;/p&gt;

&lt;p&gt;Once the compilation starts swapping, build times can increase dramatically, from seconds to minutes and possibly hours for large projects. The system may become so unresponsive that the build looks stalled, and you might terminate it before compilation completes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tracking resource consumption
&lt;/h3&gt;

&lt;p&gt;After repeatedly hitting out-of-memory errors during large Rust builds, I built &lt;strong&gt;&lt;a href="https://github.com/aditya172926/stomata-cli" rel="noopener noreferrer"&gt;Stomata&lt;/a&gt;&lt;/strong&gt;, a process-level resource-monitoring CLI tool, to get a better visualization of the resources consumed. Below, it is tracking &lt;strong&gt;memory&lt;/strong&gt; and &lt;strong&gt;swap&lt;/strong&gt; consumption while running tests on &lt;strong&gt;&lt;a href="https://github.com/paradigmxyz/reth" rel="noopener noreferrer"&gt;Reth&lt;/a&gt;&lt;/strong&gt;, showing spikes in machine resource consumption in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/Aditya26sg/status/1983516668249739628" rel="noopener noreferrer"&gt;Tracking memory and swap consumption in a large Rust project build&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Codegen and Parallelization
&lt;/h3&gt;

&lt;p&gt;Codegen is short for code generation: this is where the Rust compiler converts the program into LLVM IR and runs optimizations to finally emit machine code. A codegen unit is a chunk of your crate that Rust compiles separately into machine code.&lt;/p&gt;

&lt;p&gt;You can think of a &lt;strong&gt;codegen unit&lt;/strong&gt; as an independent work package that LLVM can process in parallel. By splitting your code into multiple codegen units, Rust allows LLVM to work in parallel across all CPU cores, improving compile speed.&lt;/p&gt;

&lt;p&gt;Suppose your project has 100 functions. You can set the number of codegen units in a &lt;code&gt;Cargo.toml&lt;/code&gt; profile. If you set it to 1, the compiler hands all 100 functions to LLVM as one big chunk, so optimizations run globally across the entire project or crate; compile time is longer here, but the build size is the smallest.&lt;/p&gt;

&lt;p&gt;If you set more codegen units, say 16, then 16 smaller groups of about 6 functions each are compiled in parallel, which is much faster. It is a compile-speed vs runtime-performance trade-off.&lt;/p&gt;

&lt;p&gt;So it is a double-edged sword: while it can dramatically reduce build time on powerful machines, it can still overwhelm your hardware, as each codegen unit consumes memory independently. On systems with limited RAM, reducing the parallelization might become necessary.&lt;/p&gt;
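&lt;p&gt;As a sketch, the knob lives in a &lt;code&gt;Cargo.toml&lt;/code&gt; profile section (the values here are illustrative, not recommendations):&lt;/p&gt;

```toml
# Cargo.toml
[profile.release]
codegen-units = 1    # one big unit: slowest compile, best-optimized, smallest binary
# codegen-units = 16 # more units: faster parallel compile, higher peak memory use
```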

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkugitep8ufrhp8emfdo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkugitep8ufrhp8emfdo.gif" alt="performance or speed" width="498" height="249"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;select between performance or speed&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimizations
&lt;/h3&gt;

&lt;p&gt;Above we covered the reasons compiling a large Rust project might fail and what happens under the hood when you use Cargo. Here are some optimizations you can apply while building the project to manage resource consumption.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limit parallel jobs:&lt;/strong&gt; You can control how many crates Cargo compiles simultaneously. The command &lt;code&gt;cargo build -j 4&lt;/code&gt; limits the parallel jobs to 4. You can reduce this further to consume less RAM; it might increase the build time of a large project, but it can turn a failing build into a successful one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature compilation:&lt;/strong&gt; Many crates offer feature flags to reduce dependencies, like &lt;code&gt;serde = { version = "1.0", features = ["derive"] }&lt;/code&gt;. Enabling only the features you need (and disabling default features where possible) can significantly reduce compilation requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Workspace optimization:&lt;/strong&gt; For multi-crate projects use cargo workspaces to share dependencies. This ensures that dependencies are compiled once and shared across all the workspace members.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pre-built dependencies:&lt;/strong&gt; For heavy native dependencies, install pre-compiled system packages (such as libssl-dev) into your OS and link against them as system libraries rather than building them from a crate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Remote build:&lt;/strong&gt; Use cloud build services like GitHub Actions, remote development servers, or VMs for heavy builds and test runs when they are too much for your machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduce codegen units:&lt;/strong&gt; This might increase your build time, but can make a successful build of a large project with less memory consumption.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, heavy resource consumption while compiling Rust projects is not a limitation; it is just how Rust works, with its type system and aggressive optimizations. It is by design: these are what make Rust code run fast and safe, in return making the compilation process more resource-intensive.&lt;/p&gt;

&lt;p&gt;We should understand the workings of the tech we use and how best to optimize it so we actually get the results we are hoping for. You can always throw money at the problem and buy better hardware, but then you pass over the other ways the problem could have been approached, and exploring those is what makes a really mature engineer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfh66utmzp9axqvj0lsg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfh66utmzp9axqvj0lsg.gif" alt="if you have money" width="498" height="205"&gt;&lt;/a&gt; &lt;em&gt;do you think he knows what Rust is?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>programming</category>
      <category>cpu</category>
    </item>
    <item>
      <title>What are precompiles?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Thu, 06 Nov 2025 10:01:37 +0000</pubDate>
      <link>https://forem.com/aditya172926/what-are-precompiles-50f1</link>
      <guid>https://forem.com/aditya172926/what-are-precompiles-50f1</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuvbwrguamooumns10ye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuvbwrguamooumns10ye.png" alt="Precompiles banner" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Precompiles are predefined smart contracts that have a special address and provide specific functionality which is not executed at the EVM bytecode level but natively by the client.&lt;/p&gt;

&lt;p&gt;They are primarily used to add specific functions that would be computationally expensive if executed as EVM bytecode. They also facilitate interaction between a parent chain and a child chain. When shipped as part of a client, as with Arbitrum, they can be optimized for performance.&lt;/p&gt;

&lt;p&gt;Rollups like Arbitrum provide additional child-chain-specific precompiles that smart contracts can call the same way they call Solidity functions, in addition to supporting all Ethereum precompiles.&lt;/p&gt;

&lt;p&gt;Precompiles are not written in Solidity; they are written in the language the client is written in, such as Go or Rust. They are much more gas-efficient than their equivalent EVM code, and they live at fixed addresses that are known in advance.&lt;/p&gt;

&lt;p&gt;Here are a few examples of common Ethereum precompiles with their addresses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;0x01: ecrecover (recovers the signer address from a signature)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;0x02: SHA256 hash&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;0x08: bn256 pairing for zk-SNARKs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
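&lt;p&gt;For intuition, the 0x02 precompile computes plain SHA-256, the same function you can run locally (a sketch using &lt;code&gt;sha256sum&lt;/code&gt;; calling the precompile itself would require an Ethereum node):&lt;/p&gt;

```shell
# SHA-256 of empty input: the same digest the 0x02 precompile
# would return for empty calldata
printf '' | sha256sum
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  -
```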

&lt;p&gt;When you are deploying your own rollup, providers such as Arbitrum allow you to create new precompiles. This requires changes to the State Transition Function (STF), which defines how new blocks are produced from the input transactions.&lt;/p&gt;

&lt;p&gt;Updating the State Transition Function also requires updating the fraud-proving system to recognize the new behavior as correct; otherwise the prover takes the side of unmodified nodes, which would win fraud proofs against the node with the modified precompile.&lt;/p&gt;

&lt;p&gt;Adding a new precompile requires modifying the STF because a node with this change would disagree with unmodified nodes about the outcome of EVM execution whenever the new precompile is invoked.&lt;/p&gt;

&lt;p&gt;But there are specific requirements to keep in mind before updating the STF:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The resulting STF must be deterministic. It is fine to take a non-deterministic path to a deterministic output; for example, shuffling the order of addresses randomly while giving each one 1 ETH is fine, since every address still ends up with 1 ETH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The STF must not reach a new result for old blocks. For example, if you modify the STF to stop charging gas, a new node will reach a different result state for historical blocks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The STF must be pure and not use external resources. For example, it should not use the filesystem, make external network calls, or launch processes, because the fraud-proving system does not and cannot support these resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The STF must not carry state between blocks outside of the global state, meaning persistent state must be stored within block headers or the Ethereum state trie.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The STF must not modify Ethereum state outside of a transaction. This is important to ensure that replaying the old blocks reaches the same result both for tracing and validation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is important for fraud proofs that execution reliably finishes in a short amount of time, and a block gas limit of 32 million gas should safely fit within this limit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The STF must not fail or panic; it must always produce a new block, even if user input is malformed. For example, if the STF receives an invalid transaction as input, it still produces an empty block.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is important to synchronize between the nodes when an upgrade takes effect. Usually this is done by setting a timestamp by which all nodes must be upgraded.&lt;/p&gt;

&lt;p&gt;A gas cost is usually associated with a precompile. Although a precompile could be written to charge no gas at all, doing so would leave it vulnerable to denial-of-service attacks: an attacker could call the precompile over and over without bearing the computational cost that builds up while its logic executes.&lt;/p&gt;

&lt;p&gt;Precompiles really only make sense when you need access to node internals or system-level functions, or when extreme optimization is needed.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>blockchain</category>
      <category>ethereum</category>
      <category>backend</category>
    </item>
    <item>
      <title>What is Data Availability Layer?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Wed, 05 Nov 2025 14:51:32 +0000</pubDate>
      <link>https://forem.com/aditya172926/what-is-data-availability-layer-4i6a</link>
      <guid>https://forem.com/aditya172926/what-is-data-availability-layer-4i6a</guid>
      <description>&lt;p&gt;Here we will checkout what is a DA layer, and explore options like Celestia, Arbitrum solution, Ethereum etc.&lt;/p&gt;

&lt;p&gt;L1 chains like Ethereum have three main responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;execution of transactions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;achieving consensus on transaction ordering&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and guaranteeing the availability of the transaction data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data availability refers to the last point. The idea is that all transaction-related data is available to the nodes on the network; this matters because it allows nodes to independently verify transactions and compute the blockchain state.&lt;/p&gt;

&lt;p&gt;It is the ability of nodes to download the data within every block. Validators usually do this: they download the transactions of a newly proposed block, re-execute them to confirm the block is correct according to the consensus rules, and add the block to the head of the chain once it is valid.&lt;/p&gt;

&lt;p&gt;There are some problems with this. The first is throughput: chains like Ethereum require nodes to store large amounts of data so they can serve it to peers and verify it, so the chain is only able to process 15-20 transactions per second. Storing data on-chain also keeps growing the size of the blockchain, which further drives up the hardware requirements for full nodes that must store an ever-increasing amount of state data.&lt;/p&gt;

&lt;p&gt;So rising hardware costs discourage individual participation.&lt;/p&gt;

&lt;p&gt;A data availability layer is a system that stores transaction data and provides consensus on its availability. It refers to the location where transaction data is stored.&lt;/p&gt;

&lt;p&gt;Rollups play an important role in scaling Ethereum by moving computation and state storage out of Ethereum’s environment. A rollup posts its transaction data in batches to the parent chain, which for Optimism and Arbitrum is Ethereum.&lt;/p&gt;

&lt;p&gt;Block data posted from a rollup to Ethereum is publicly available, allowing anyone to execute the transactions and validate the rollup chain’s state, all while maintaining the principles of a blockchain.&lt;/p&gt;

&lt;p&gt;But making data available on-chain is also expensive, depending on which DA layer you select. For example, storing data on Ethereum is expensive due to its high transaction costs. DA is a large part of the cost of running a rollup, and the efficiency of the DA solution determines how much activity the rollup can process at once, and thus its overall performance.&lt;/p&gt;

&lt;p&gt;Since posting directly to Ethereum is expensive, third-party solutions like Celestia have been developed to provide cheap and efficient data availability for rollups. Some L3s that use an L2 as their settlement layer also use that L2 as the data availability layer, due to the low cost of submitting transactions there.&lt;/p&gt;

&lt;p&gt;How does Arbitrum provide a DA solution?&lt;/p&gt;

&lt;p&gt;Arbitrum has two modes that address the DA layer question. In rollup mode, all transaction data is included in the transaction calldata on the parent chain, or in blobs submitted by the transaction.&lt;/p&gt;

&lt;p&gt;In AnyTrust mode, transactions are initially submitted to a group of nodes known as the Data Availability Committee (DAC). The DAC stores and distributes the data, and instead of including the entire dataset on-chain, only a cryptographic attestation that the data is held by the DAC is submitted to the parent chain. This significantly reduces the amount of data stored on-chain, and with it the cost.&lt;/p&gt;

&lt;p&gt;Data flow on Arbitrum AnyTrust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the sequencer queues the transactions and batches them together&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the batches are submitted to the parent chain; in AnyTrust mode the sequencer sends the batch to the DAC and then submits to the parent chain the Data Availability Certificate that the DA solution generates and returns.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How does Celestia work?&lt;/p&gt;

&lt;p&gt;The idea behind Celestia is to decouple transaction execution from the consensus layer. The consensus layer is responsible only for ordering transactions and guaranteeing data availability.&lt;/p&gt;

&lt;p&gt;In Celestia, a block is considered valid only if the data behind that block is available. This prevents block producers from releasing block headers without releasing the underlying data, which would stop clients from reading the transactions needed to compute the state of their applications.&lt;/p&gt;

&lt;p&gt;Celestia introduces a new primitive: data availability sampling (DAS). It provides an efficient solution to the DA problem by requiring resource-limited light nodes to sample only a small number of random shares from each block to verify availability. As more light nodes participate in sampling, the amount of data the network can safely handle increases. This enables larger blocks without increasing the cost of verifying the chain.&lt;/p&gt;

&lt;p&gt;The traditional approach to verifying DA is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;you must download all the data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;light nodes cannot do this, as it requires too much bandwidth and storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;so only full nodes can verify availability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;this makes the approach difficult to scale&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So you either sacrifice decentralization by having only a few full nodes, or keep blocks small by capping throughput.&lt;/p&gt;

&lt;p&gt;Celestia's Data Availability Sampling allows light nodes to verify data availability by downloading only tiny random samples instead of everything. Each light node randomly samples a few small pieces of data.&lt;/p&gt;

&lt;p&gt;Light nodes can verify availability with minimal bandwidth, block size can scale without forcing nodes to download everything, and more light nodes mean stronger security. So decentralization is maintained even with massive blocks.&lt;/p&gt;

&lt;p&gt;The only tradeoff is that this verification is probabilistic rather than deterministic. With sufficient samples, the probability of failure is very low.&lt;/p&gt;
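&lt;p&gt;To get a feel for the numbers, here is a minimal sketch (my own simplification, not Celestia's actual implementation) of why a handful of samples is enough. It assumes 2D erasure coding, under which an attacker must withhold at least roughly 25% of the shares to make a block unrecoverable, so every random sample detects withholding with probability at least 0.25:&lt;/p&gt;

```python
# Simplified model of data availability sampling: the probability that a light
# node performing `samples` independent random draws never hits a withheld
# share. The 0.25 withheld fraction is the worst case an attacker can get away
# with under 2D erasure coding (an assumption of this toy model).
def miss_probability(samples: int, withheld_fraction: float = 0.25) -> float:
    return (1.0 - withheld_fraction) ** samples

# The failure probability shrinks exponentially with the number of samples.
for s in (10, 20, 50):
    print(s, miss_probability(s))
```

Even 20 samples push the chance of being fooled below 1%, which is why resource-limited light nodes can safely verify very large blocks.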

</description>
      <category>web3</category>
      <category>blockchains</category>
    </item>
    <item>
      <title>What is MEV?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Mon, 03 Nov 2025 12:28:37 +0000</pubDate>
      <link>https://forem.com/aditya172926/what-is-mev-31l9</link>
      <guid>https://forem.com/aditya172926/what-is-mev-31l9</guid>
      <description>&lt;p&gt;First lets start with understanding what is MEV? It is known as Maximal extractable value. It refers to the practice of rearranging and reordering transactions waiting to be added to the blockchain, for the minors or network validators to extract maximum value .&lt;/p&gt;

&lt;p&gt;MEV is possible because of the transparent nature of public blockchains: miners and validators can see every pending transaction and the contract code it calls, as well as the order in which those transactions would be included in the network.&lt;/p&gt;

&lt;p&gt;Unlike web2 systems, miners and validators have the freedom to reorder transactions as they want within a given block. Pending transactions sit temporarily in the network's mempool, a publicly accessible waiting area.&lt;/p&gt;

&lt;p&gt;Validators pick transactions and order them to create a block that can be verified. Normally this ordering should be done by prioritizing things like gas fees, but the transparency of the mempool presents profitable opportunities to extract value through arbitrage, sandwich attacks, and so on.&lt;/p&gt;

&lt;p&gt;There are complex algorithms and bots that look for MEV opportunities and pay high gas fees to incentivize miners or validators to include their transactions in target blocks, in order to capture additional value.&lt;/p&gt;

&lt;p&gt;Let's look at an example. In traditional finance, arbitrage means taking advantage of differences in an asset's price across different trading venues. Similarly, arbitrage-based MEV takes advantage of token price variations across various DEXs.&lt;/p&gt;

&lt;p&gt;Searchers generate additional value by purchasing tokens at a lower price on one exchange and selling them at a higher price on another. This can also happen between 2 separate liquidity pools on the same exchange.&lt;/p&gt;
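&lt;p&gt;As a toy illustration (the prices, quantity, and gas cost below are made-up numbers, not real market data), the arbitrage math is just the price difference minus costs:&lt;/p&gt;

```python
# Hypothetical example: a searcher sees the same token priced differently on
# two DEXs and checks whether the arbitrage is profitable after gas costs.
def arbitrage_profit(qty: float, buy_price: float,
                     sell_price: float, gas_cost: float) -> float:
    return qty * (sell_price - buy_price) - gas_cost

# Buy 100 tokens at $1.00 on DEX A, sell at $1.03 on DEX B, paying $2 in gas.
profit = arbitrage_profit(100, 1.00, 1.03, 2.0)
print(profit)
```

In practice searchers also account for slippage and the price impact of their own trade, which this sketch ignores.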

&lt;p&gt;Another scenario is liquidation. In DeFi, liquidation occurs when the value of a borrower's collateral no longer covers the value of their loan. Once a debt is liquidated, any individual can purchase the original collateral at a discount and resell it at a higher price to make a profit. Here searchers act as liquidators: they scan the blockchain for unhealthy loan positions, identify those eligible for liquidation, acquire the borrower's collateral at a discounted price, and resell it at a higher price to capture the extra value.&lt;/p&gt;

&lt;p&gt;These are considered non-harmful types of MEV, because liquidations are a regular occurrence in any financial system.&lt;/p&gt;

&lt;p&gt;Arbitrum has a different transaction-ordering policy, called Timeboost, that enables chain owners to capture the MEV on their chain, reduce spam, and preserve fast block times. It does this while protecting users from harmful types of MEV like sandwich attacks.&lt;/p&gt;

&lt;p&gt;Previously, Arbitrum chains ordered incoming transactions on a First Come, First Served (FCFS) basis, which protected users from harmful MEV types like sandwich attacks.&lt;/p&gt;

&lt;p&gt;The downside is that searchers are incentivized to invest in hardware and low-latency infrastructure to win ordering races. Timeboost retains the benefits of FCFS while addressing its limitations. It is a set of rules that the sequencer of an Arbitrum chain is trusted to follow when ordering transactions.&lt;/p&gt;

&lt;p&gt;How does Timeboost work? It uses 3 components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a special express lane that allows valid transactions to be sequenced as soon as the sequencer receives them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;an off-chain auction to determine the controller of the express lane for a given round&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;an auction contract deployed on the target chain that serves as the canonical source of truth for auction results and handles the auction proceeds&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The auction round is 60 seconds by default. Transactions not in the express lane are subjected to a 200 ms delay before being sequenced. The sequencer implements a special endpoint, timeboost_sendExpressLaneTransaction: transactions submitted to it are sequenced immediately, and the sequencer only accepts payloads on this endpoint if they are correctly signed by the current round's controller of the express lane.&lt;/p&gt;

&lt;p&gt;Normally submitted transactions are considered non-express and are artificially delayed by 200 ms. All transactions are eventually sequenced into a single, ordered stream for the sequencer to post to a data availability layer.&lt;/p&gt;
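&lt;p&gt;The ordering rule described above can be sketched roughly as follows. This is my own simplified model, not Arbitrum's actual sequencer code: express-lane transactions keep their arrival time, everyone else gets the 200 ms penalty, and the final order is by effective timestamp:&lt;/p&gt;

```python
# Toy model of the Timeboost ordering rule (a simplification for illustration).
DELAY_MS = 200

def sequence(txs):
    """txs: list of (arrival_ms, is_express, tx_id) tuples.
    Returns tx_ids in the order the sequencer would emit them."""
    effective = [
        (arrival if is_express else arrival + DELAY_MS, tx_id)
        for arrival, is_express, tx_id in txs
    ]
    return [tx_id for _, tx_id in sorted(effective)]

# An express tx arriving at t=150 ms jumps ahead of a normal tx from t=0 ms,
# because the normal tx's effective time becomes 200 ms.
print(sequence([(0, False, "normal"), (150, True, "express")]))
```

This shows why the express lane is valuable enough to auction off: within each round, its controller effectively gets a 200 ms head start over everyone else.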

&lt;p&gt;Timeboost is an optional feature of the Arbitrum chain infrastructure; it is the chain owner's decision to enable it, as it brings a unique way to accrue value to the chain's token and generate revenue for the chain.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>Fraud Proofs in Rollup</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Tue, 21 Oct 2025 07:11:56 +0000</pubDate>
      <link>https://forem.com/aditya172926/fraud-proofs-in-rollup-54g8</link>
      <guid>https://forem.com/aditya172926/fraud-proofs-in-rollup-54g8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ps1ukxvv4cy97nzgvlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ps1ukxvv4cy97nzgvlz.png" alt="Banner" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fraud proofs are a very important part of the Optimistic Rollup stack. For transactions like withdrawals of ETH and tokens from Optimism chains, a withdrawal proof must be submitted showing that the withdrawal was actually included on the OP chain.&lt;/p&gt;

&lt;p&gt;Fraud proofs allow users to permissionlessly submit and challenge proposals about the state of the rollup chain, which are used to prove such withdrawal transactions.&lt;/p&gt;

&lt;p&gt;This makes the rollup chain more decentralized by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;allowing anyone to make proposals about the state of L2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;allowing anyone to challenge proposals made by other users&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;allowing users to send messages from L2 to L1 without the need for a trusted party&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;allowing users to trigger withdrawals from L2 to L1 without the need for a trusted party&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fault proof game is permissionless, but the Optimism Security Council acts in a guardian role, checking for faults in the fault proof game. Each proposal must wait for a delay during which the council checks for, and prevents, invalid proposals from being used to withdraw ETH or tokens.&lt;/p&gt;

&lt;p&gt;They can also choose to shift the system to use a &lt;strong&gt;PermissionedDisputeGame&lt;/strong&gt; in which only specific proposer and challenger roles can submit and challenge proposals.&lt;/p&gt;

&lt;p&gt;Proposals, also known as state proposals, are claims about the state of the rollup that are submitted to Ethereum through the DisputeGameFactory contract. They can be used for many things, but are most commonly used by end users to prove they made a withdrawal on the rollup. Because anyone can submit a proposal, invalid proposals need to be challenged: there is a 7-day challenge period during which users can challenge a proposal they think is incorrect.&lt;/p&gt;

&lt;p&gt;There are also some security guardrails built around the fault proof game:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;an off-chain monitoring system watches all proposed roots and ensures they align with the correct state&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;after a root is finalized through a game, an additional delay called the airgap window is added before withdrawals can occur; during this window the guardian role can reject the root&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a contract called DelayedWETH holds the bonds and only allows payouts after a delay, so that the bonds can be redirected to the correct recipient if a game resolves incorrectly&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Workings of Fault Proof components
&lt;/h2&gt;

&lt;p&gt;The Fault Proof system is made of 3 main components&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fault proof program&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fault proof virtual machine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dispute game protocol&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These components work together to challenge malicious or faulty activity on the network to preserve trust and consistency of the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fault Proof program
&lt;/h3&gt;

&lt;p&gt;The default implementation of this component is op-program, which runs through the rollup state transition to verify an L2 output from L1 inputs.&lt;/p&gt;

&lt;p&gt;This verifiable output can then resolve a disputed output on L1. The program is a combination of op-node and op-geth, so it contains both the consensus and execution parts of the protocol in a single process. The fault proof program runs deterministically: 2 invocations with the same input data produce not only the same result but also the same program execution trace.&lt;/p&gt;

&lt;p&gt;This deterministic execution allows it to be run on an onchain VM as a part of the dispute resolution process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fault Proof VM
&lt;/h3&gt;

&lt;p&gt;The VM is decoupled from the Fault Proof Program to enable a higher level of composability and to allow upgrades to the two components in parallel. The fault proof program, the client side, runs within the fault proof VM and expresses the L2 state transition.&lt;/p&gt;

&lt;p&gt;The VM is very minimal, so that Ethereum protocol changes like new EVM opcodes do not affect it; instead, the fault proof program can be updated to import new state-transition components. The VM is tasked with lower-level instruction execution, and the program is emulated on top of it. Proving an instruction generally looks as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To execute an instruction, the VM emulates something similar to the instruction cycle of a thread context: the instruction is read from memory and interpreted, and it may update the register file and memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To support the pre-image oracle (from which all input data is retrieved) and basic program runtime needs such as memory allocation, the VM supports a subset of Linux syscalls. The program writes a hash as a request for a pre-image and then reads the value back in small chunks at a time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Dispute game protocol
&lt;/h3&gt;

&lt;p&gt;In this protocol, different types of games can be created, managed, and upgraded via the &lt;strong&gt;DisputeGameFactory&lt;/strong&gt;. This allows for new features, like aggregate proof systems, and the ability to expand the protocol. The game is the core primitive of the dispute protocol: it models a simple state machine and is initialized with a 32-byte commitment to any piece of information whose validity can be disputed.&lt;/p&gt;

&lt;p&gt;Each game contains a function to resolve this commitment to either true or false, whose definition is left to the implementer of the primitive. The games rely on 2 fundamental properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;incentive compatibility: the system penalizes false claims and rewards truthful ones to ensure fair participation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;resolution: each game has a mechanism to definitively validate or invalidate the root claim&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
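&lt;p&gt;As a rough sketch of this primitive (a toy model, not Optimism's actual contracts), a game can be seen as a tiny state machine initialized with a 32-byte claim that eventually resolves to one side winning:&lt;/p&gt;

```python
# Toy model of the dispute game primitive: a 32-byte root claim plus a
# resolve() step. How validity is actually decided (the bisection game,
# on-chain VM execution, etc.) is abstracted into a single boolean here.
from enum import Enum

class GameStatus(Enum):
    IN_PROGRESS = 0
    CHALLENGER_WINS = 1
    DEFENDER_WINS = 2

class DisputeGame:
    def __init__(self, root_claim: bytes):
        # The game is initialized with a 32-byte commitment whose validity
        # can be disputed.
        assert len(root_claim) == 32, "root claim is a 32-byte commitment"
        self.root_claim = root_claim
        self.status = GameStatus.IN_PROGRESS

    def resolve(self, claim_is_valid: bool) -> GameStatus:
        # Resolution maps the commitment to true or false; the mechanism is
        # left to the implementer of the primitive.
        self.status = (GameStatus.DEFENDER_WINS if claim_is_valid
                       else GameStatus.CHALLENGER_WINS)
        return self.status

game = DisputeGame(b"\x00" * 32)
print(game.resolve(claim_is_valid=False))  # GameStatus.CHALLENGER_WINS
```

The incentive-compatibility property sits on top of this: bonds posted by the losing side are paid out (after the DelayedWETH delay) to the winner.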

&lt;p&gt;That’s about it: a high-level overview of fault proofs in the Optimism rollup chain.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>rollups</category>
      <category>ethereum</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>How does Proof of Stake Work?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Sat, 28 Jan 2023 11:54:45 +0000</pubDate>
      <link>https://forem.com/aditya172926/how-does-proof-of-stake-work-2poi</link>
      <guid>https://forem.com/aditya172926/how-does-proof-of-stake-work-2poi</guid>
      <description>&lt;p&gt;Proof of Stake (PoS) is a component of the mechanism used to achieve consensus on the state of a blockchain.&lt;/p&gt;

&lt;p&gt;Another common way is to use 👉 Proof of Work&lt;/p&gt;

&lt;p&gt;On September 15, 2022, a major update known as The Merge happened to the Ethereum blockchain. Since then, Ethereum uses Proof of Stake, unlike Bitcoin, which still uses PoW.&lt;/p&gt;

&lt;p&gt;PoS is important for the participants in the network to reach a consensus. But what actually is a ...&lt;/p&gt;

&lt;h2&gt;
  
  
  Consensus Protocol?
&lt;/h2&gt;

&lt;p&gt;In decentralized systems like Ethereum, all the nodes on the network are required to agree on the blockchain's current state. For example, if I ask all of the nodes "what is my current ETH balance", they should have the same answer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mse23q0sd9hpczcxe0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mse23q0sd9hpczcxe0s.png" alt="Consensus" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To achieve this, we developed consensus protocols. They can also be viewed as an economic mechanism (as we are dealing with tokens) that discourages invalid operations and attacks on the network.&lt;/p&gt;

&lt;p&gt;Proof of Stake is one of the consensus mechanisms used by the Ethereum blockchain and many others.&lt;/p&gt;

&lt;p&gt;But what is ...&lt;/p&gt;

&lt;h2&gt;
  
  
  The idea behind PoS?
&lt;/h2&gt;

&lt;p&gt;The working of PoS is explained in the following points&lt;/p&gt;

&lt;p&gt;A validator will stake some tokens like ETH into a smart contract on the Ethereum blockchain&lt;/p&gt;

&lt;p&gt;The ETH now acts as collateral and can be destroyed if the validator does something malicious, e.g. lying about the current state of the network.&lt;/p&gt;

&lt;p&gt;The validator also becomes responsible for checking the validity of the new blocks and sometimes creating them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's in it for the Validators?&lt;/strong&gt; Validators earn rewards for behaving honestly; if they lie, the capital they staked can be destroyed. The reward for honesty, combined with the risk of losing their stake, keeps validators truthful about their actions on the blockchain.&lt;/p&gt;

&lt;p&gt;But now this raises another question ...&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need PoS?
&lt;/h2&gt;

&lt;p&gt;PoS has some killer advantages over the previously used PoW. Some of them are listed here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It uses far less energy than PoW. PoW requires high-end computational power and a lot of hardware to mine competitively, whereas in PoS you stake your tokens and simply attest honestly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is much easier to become a validator. In PoW, one needs expensive hardware to produce blocks; in PoS, the requirement comes down to something like a good "gaming PC".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It reduces centralization risks even further than PoW. Mining in PoW is expensive, so miners pool their resources into mining farms and collectively create blocks, which is a step towards centralization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ETH rewards to validators are reduced. This is not a negative point, because unlike nodes in PoW, nodes in PoS require far less energy to operate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Penalties for attacks such as a 51% attack are far higher than in PoW, because attackers have their tokens staked rather than just hardware.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All this is great, but ...&lt;/p&gt;

&lt;h2&gt;
  
  
  How are Blocks created in PoS?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwpmt2ecbilb1q7hs5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwpmt2ecbilb1q7hs5d.png" alt="Block creation in PoS" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Validators are responsible for creating new blocks and also validating them. A user must deposit 32 ETH into a deposit contract and run a node software which has an execution client, a consensus client and a validator client.&lt;/p&gt;

&lt;p&gt;The flow goes this way -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The validator receives new blocks from peers on the network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They re-execute the transactions in that block to confirm it is valid&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They broadcast a vote in favour of the block in the network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If there are enough votes in the favour of the block, it is added to the chain&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This happens every 12 seconds, the slot time for a new block on Ethereum.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Validators also help keep the network secure. To earn rewards, they should have sufficient resources to participate: hardware, internet connectivity, and uptime.&lt;/p&gt;

&lt;p&gt;They miss the ETH rewards if they fail to meet expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But what is preventing these validators from doing something wrong?&lt;/strong&gt; If they behave dishonestly, their stake can be slashed, leading to a huge financial loss and ejection from the network.&lt;/p&gt;

&lt;p&gt;The following are some of the ways a validator can misbehave:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Proposing multiple blocks in a single 12-second period&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Submitting false votes for a block&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sybil attack&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sybil attacks are quite common in decentralized systems. This is where a participant pretends to be many different users on the network.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A separate article is coming up focusing on Sybil Attack.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding the Correct Chain?
&lt;/h2&gt;

&lt;p&gt;In the case of a fork, where 2 validators introduce a block at the same time in parallel, how does the network decide which chain to build on?&lt;/p&gt;

&lt;p&gt;When 2 validators produce a block in parallel, there are 2 different versions of the chain. So all validators need to agree on a "correct chain" where they can add blocks, discarding the other fork.&lt;/p&gt;

&lt;p&gt;In the case of Proof of Work, this was decided by the Longest Chain Rule because that's the chain which has the most "Work" put in.&lt;/p&gt;

&lt;p&gt;For PoS, Ethereum uses the &lt;strong&gt;LMD-GHOST&lt;/strong&gt; algorithm, which identifies the fork with the greatest weight of votes in its history.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A separate article will follow discussing more on LMD-GHOST algorithm.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The algorithm checks which fork received more votes from the validators, and everyone then resumes building on that fork.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finality
&lt;/h2&gt;

&lt;p&gt;When a transaction is part of a block that can no longer be changed on Ethereum, it is said to have Finality. In PoS this is achieved with checkpoint blocks.&lt;/p&gt;

&lt;p&gt;Every 32 slots, a period called an epoch, a checkpoint block is introduced. Validators also vote for pairs of checkpoint blocks.&lt;/p&gt;

&lt;p&gt;If &lt;strong&gt;Block "n"&lt;/strong&gt; was a checkpoint, &lt;strong&gt;block "n+32"&lt;/strong&gt; is too. Validators vote for &lt;strong&gt;(Block "n", Block "n+32")&lt;/strong&gt; which is considered a valid sequence. After enough votes, Block "n" is marked -&amp;gt; &lt;em&gt;finalized&lt;/em&gt; and Block "n+32" is -&amp;gt; &lt;em&gt;justified&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This prevents an attacker from changing the contents of a finalized block without spending an enormous amount of resources. An attacker would need at least 33% of the entire stake just to prevent blocks from being finalized.&lt;/p&gt;

&lt;p&gt;Basically, such an attacker would vote against the (Block "n", Block "n+32") pair and never let Block "n" be finalized.&lt;/p&gt;
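&lt;p&gt;The checkpoint-voting rule can be sketched with a toy supermajority check (a simplification of Casper FFG, not Ethereum's actual client logic): a checkpoint pair is only finalized when votes for it reach 2/3 of the total stake, so an attacker holding just over 1/3 can block finality:&lt;/p&gt;

```python
# Toy supermajority check for checkpoint-pair voting. Stake is modeled as
# simple integer units; real finality accounting is per-validator balances.
from fractions import Fraction

def pair_is_finalized(votes_for_pair: int, total_stake: int) -> bool:
    # Exact 2/3 threshold, computed with rationals to avoid float rounding.
    return Fraction(votes_for_pair, total_stake) >= Fraction(2, 3)

print(pair_is_finalized(67, 100))  # True: supermajority reached
print(pair_is_finalized(66, 100))  # False: a 34% attacker can withhold votes
```

This is why the text above says an attacker needs roughly 33% of the stake to stall finality, and why doing so would expose that entire stake to slashing.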

&lt;p&gt;PoS does have its limitations as it is more complex than PoW. But with constant development and research, they will be covered as well.&lt;/p&gt;




&lt;p&gt;That's all :)&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>web3</category>
      <category>ethereum</category>
      <category>consensus</category>
    </item>
    <item>
      <title>How does Bitcoin work?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Sun, 15 Jan 2023 04:00:00 +0000</pubDate>
      <link>https://forem.com/aditya172926/how-does-bitcoin-work-33n5</link>
      <guid>https://forem.com/aditya172926/how-does-bitcoin-work-33n5</guid>
      <description>&lt;p&gt;Cryptocurrency has gained huge popularity with people jumping in for various reasons. Many join the crypto space for trading, some join because of the hype, some for understanding the tech behind it, etc.&lt;/p&gt;

&lt;p&gt;No matter which way you come into this space, most do not understand the basic functionality of what a cryptocurrency does. All they know is it is a distributed ledger and has something to do with cryptography.&lt;/p&gt;

&lt;p&gt;But what's actually happening under the hood, what do the computers actually do? Let's understand the workings of the first cryptocurrency aka Bitcoin.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are ledgers?
&lt;/h2&gt;

&lt;p&gt;A ledger is a record of transactions that have happened, or are supposed to happen in the near future, involving multiple parties such as a sender and a recipient.&lt;/p&gt;

&lt;p&gt;A blockchain is like a digital ledger where blocks of transactions are in the form of a chain. It is a public ledger in the case of Bitcoin and is available to everyone to view and write.&lt;/p&gt;

&lt;p&gt;Now a problem with a public ledger where anyone can read and write transactions is that anyone can write any transaction, even one that was never meant to happen.&lt;/p&gt;

&lt;p&gt;For example, there are 2 people, say Ramesh and Suresh. Both have the access to the public ledger to read/write. Suresh writes that "Ramesh owes him ₹100", without the approval of Ramesh. This is a false transaction.&lt;/p&gt;

&lt;p&gt;To prevent this exact scenario digital ledgers such as Bitcoin use ...&lt;/p&gt;

&lt;h2&gt;
  
  
  Digital Signatures
&lt;/h2&gt;

&lt;p&gt;The idea is that like normal signatures, there should be a digital representation of your unique signature, which is infeasible to forge by others.&lt;/p&gt;

&lt;p&gt;So when Suresh writes "Ramesh owes him ₹100", Ramesh can sign that statement with his digital signature to approve that transaction. If the statement is not signed i.e the approval is not given then the transaction does not happen.&lt;/p&gt;

&lt;p&gt;A digital signature is just a string of 0s and 1s. If that string could simply be copied and reused, anyone who obtained it could impersonate you and sign illegal transactions on your behalf, without your consent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do we prevent Digital Signature forgeries?&lt;/strong&gt; Every user of the blockchain will have a public key and a private key. They are a string of bits and both keys will be unique for every user.&lt;/p&gt;

&lt;p&gt;A private key is also called a secret key. This should be hidden and never disclosed to others as this is a very crucial part of generating Digital signatures.&lt;/p&gt;

&lt;h2&gt;
  
  
  How is Digital Signature Generated?
&lt;/h2&gt;

&lt;p&gt;In the physical world, your signature on most of the documents is the same. But, in blockchain signing, every single transaction message will have a different unique signature.&lt;/p&gt;

&lt;p&gt;Using a different digital signature for every transaction (in other words, each signature is used only once) ensures that someone cannot reuse one of your previous signatures for a new transaction, even if they get hold of it.&lt;/p&gt;

&lt;p&gt;Digital signatures are generated using a function where the inputs given are the transaction message, your private key, and some other parameters that may be used to increase the randomness of the output.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;function(txn message, private key) = unique digital signature&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to verify Digital Signatures?
&lt;/h2&gt;

&lt;p&gt;For every transaction, the message or some other parameters will be different resulting in a unique digital signature every time. The private key ensures that only you can create such a signature. That's why it is absolutely necessary to keep your private keys safe.&lt;/p&gt;

&lt;p&gt;If someone else gets your private key, they can generate digital signatures on your behalf and sign transactions that you might not want. It is possible to steal your assets if someone gets hold of your Private Key.&lt;/p&gt;

&lt;p&gt;Another function is used to verify whether a signature is valid. Its parameters are the transaction message, the digital signature, and your public key.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Verify(txn message, signature, public key) = true or false&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As your public key is available to everyone to view when you share, anyone can validate that you were the one who signed a transaction and that your signature is valid.&lt;/p&gt;
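&lt;p&gt;Here is a small illustration of the sign/verify pattern using HMAC from Python's standard library as a stand-in. Note that Bitcoin actually uses ECDSA over secp256k1 with asymmetric key pairs; HMAC is symmetric (the same key signs and verifies), so this only demonstrates the idea of a unique per-message signature checked by a verification function:&lt;/p&gt;

```python
# Sign/verify sketch with HMAC-SHA256 as a symmetric stand-in for real
# asymmetric signatures (Bitcoin uses ECDSA; this is only an illustration).
import hmac
import hashlib

def sign(message: bytes, private_key: bytes) -> str:
    # Different messages produce completely different signatures.
    return hmac.new(private_key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes) -> bool:
    # Recompute and compare in constant time.
    return hmac.compare_digest(sign(message, key), signature)

key = b"keep-this-secret"
sig = sign(b"Ramesh pays Suresh 100 rupees", key)
print(verify(b"Ramesh pays Suresh 100 rupees", sig, key))   # True
print(verify(b"Ramesh pays Suresh 9999 rupees", sig, key))  # False
```

Notice that tampering with even one character of the message makes the old signature useless, which is exactly the property described above.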

&lt;p&gt;The amazing part is that these functions are one-way: you cannot recover the private key from the signature or the public key by inverting them.&lt;/p&gt;

&lt;p&gt;More details on how these functions work will be in another article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other links
&lt;/h2&gt;

&lt;p&gt;There is more to Bitcoin than just a decentralized ledger; the aim of this blog is to provide a solid understanding of how Bitcoin transactions work.&lt;/p&gt;

&lt;p&gt;To understand more about blockchains such as Bitcoin, you should consider knowing how the Proof of Work protocol works. It is an important component of how the consensus is reached on the Bitcoin network 👉 &lt;a href="https://dev.to/aditya172926/proof-of-work-for-bitcoin-4l01"&gt;What is Proof Of Work in Bitcoin&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;That's all :)&lt;/p&gt;

</description>
      <category>bitcoin</category>
      <category>blockchain</category>
      <category>web3</category>
    </item>
    <item>
      <title>Proof of Work for Bitcoin</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Sat, 14 Jan 2023 12:33:42 +0000</pubDate>
      <link>https://forem.com/aditya172926/proof-of-work-for-bitcoin-4l01</link>
      <guid>https://forem.com/aditya172926/proof-of-work-for-bitcoin-4l01</guid>
      <description>&lt;p&gt;It is one thing to know what is a consensus mechanism and another to understand it.&lt;/p&gt;

&lt;p&gt;Proof of Work is a protocol that allows all of the nodes on the network to agree on the state of the blockchain. This protocol is still used by &lt;strong&gt;Bitcoin&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Like Bitcoin, it was also used by the Ethereum network until September 15, 2022, when The Merge happened, switching Ethereum from Proof of Work to Proof of Stake.&lt;/p&gt;

&lt;p&gt;Proof of Work (PoW) is part of a consensus protocol. But what is ...&lt;/p&gt;

&lt;h2&gt;
  
  
  Consensus?
&lt;/h2&gt;

&lt;p&gt;It is defined as a general agreement about something, an idea or a deal.&lt;/p&gt;

&lt;p&gt;The participants in a blockchain, known as nodes, are distributed. Each node machine has its local storage and its own state.&lt;/p&gt;

&lt;p&gt;For the blockchain network to work properly, every node must have the same copy of the blockchain state in its local storage. Nodes are machines/computers, but they are distributed and decentralized across the network.&lt;/p&gt;

&lt;p&gt;In a centralized system, where multiple computers are connected to one central server which handles all of the data exchanges, it is easy to maintain the same state of all of the computers connected to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2uaao9x4qldoktpqbfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2uaao9x4qldoktpqbfs.png" alt="Centralized Systems" width="563" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That is because the server can perform reads and writes to synchronise all of the computers connected to it.&lt;/p&gt;

&lt;p&gt;In a decentralized system, as there is no central server, all the computers are connected to a network. So to keep them in sync, some protocol needs to be established.&lt;/p&gt;

&lt;p&gt;It's the consensus protocols that help us achieve agreement on the state of the blockchain at any given point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding PoW
&lt;/h2&gt;

&lt;p&gt;It is a consensus protocol currently used by Bitcoin and previously by Ethereum.&lt;/p&gt;

&lt;p&gt;Here the miners are required to create new blocks of valid transactions. After the block is added, the miner broadcasts the event to the rest of the nodes on the network. (To understand how mining works in blockchains, read this 👉 &lt;a href="https://adityas.hashnode.dev/blockchain-mining" rel="noopener noreferrer"&gt;Mining in Blockchain&lt;/a&gt; )&lt;/p&gt;

&lt;p&gt;In PoW, a miner is supposed to solve a difficult mathematical puzzle before anyone else in order to get the rewards. The puzzle is quite hard to solve, but very easy for anyone else to verify.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, first come first serve?
&lt;/h2&gt;

&lt;p&gt;Not exactly. PoW alone cannot be seen as a complete consensus protocol. Other mechanisms and procedures help in selecting the miner who will produce a block.&lt;/p&gt;

&lt;p&gt;A Sybil resistance mechanism is used here. It is a measure of how well the protocol holds up against a &lt;strong&gt;Sybil attack&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's a Sybil attack?&lt;/strong&gt; It is a problem where a user or a group pretends to be many different users. Outside the digital world, it is like shouting "present" in school for yourself and your friend too.&lt;/p&gt;

&lt;p&gt;A decentralized system must be secure against this so that rewards go to deserving miners. If miners were simply selected at random, a Sybil attack would be easy to pull off.&lt;/p&gt;

&lt;h2&gt;
  
  
  Selection of Chain
&lt;/h2&gt;

&lt;p&gt;Each node holds a copy of the entire blockchain, and each miner appends new blocks to its own local chain. If every miner has its own copy, there can be many different states of the blockchain.&lt;/p&gt;

&lt;p&gt;How will the other miners agree on hundreds of different states? There should be a single chain which contains the latest state, and all nodes should agree on the state of that one blockchain.&lt;/p&gt;

&lt;p&gt;Each miner may include different blocks in its chain; when the chain diverges like this, it is called a fork.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos4ilwb2iknu9bn6zuh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos4ilwb2iknu9bn6zuh1.png" alt="Blockchain Forking" width="731" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Blockchains need a single continuous chain; otherwise there would be many states of a single blockchain, and it would be impossible to reach consensus on all of them.&lt;/p&gt;

&lt;p&gt;Blockchains like Bitcoin and Ethereum use the longest-chain rule: the chain accepted by the most nodes, and growing the longest, is the one that counts. The abandoned forked chains are eventually discarded.&lt;/p&gt;

&lt;p&gt;The combination of &lt;strong&gt;PoW + longest chain rule&lt;/strong&gt; is also called the Nakamoto Consensus.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the "Work" in PoW?
&lt;/h2&gt;

&lt;p&gt;PoW aims to make the miners work for their rewards. They must spend their resources, i.e. time and energy, in this case computation power, to propose new blocks.&lt;/p&gt;

&lt;p&gt;The miners will have to produce valid blocks for which they generate a certificate of legitimacy. That allows anyone to check the block's validity easily and if the miners cheat, they will lose their resources for nothing in return.&lt;/p&gt;

&lt;p&gt;When Ethereum still ran on PoW, this is roughly how miners did the work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A miner selects a bunch of transactions to include in a block&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The network rules select a slice of data from the current blockchain state. Because the whole blockchain keeps growing, it is much more feasible to work on the part of it that holds the latest state&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A target value is calculated by passing the dataset through a hashing function. The lower the target, the more difficult the mining process, and vice versa&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The miner then uses brute force to generate a random number, called the nonce&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some more parameters, in addition to the target, nonce, and dataset, are passed into a hashing function; the result should be lower than the target value&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The miner has to try random numbers until one turns out to be a valid nonce producing a value lower than the target. The lower the target, the more attempts are required&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
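&lt;p&gt;The trial-and-error loop above can be sketched in a few lines of Python. This is a toy model, not Ethereum's actual Ethash: it uses SHA-256 and expresses the target as a number of leading zero hex digits, but it shows why mining is expensive while verification is a single hash.&lt;/p&gt;

```python
import hashlib

def mine(block_data: str, difficulty: int):
    """Brute-force a nonce until the block hash meets the target.

    Here the target is expressed as `difficulty` leading zero hex
    digits; a lower (harder) target means more attempts on average.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest  # easy for anyone else to verify
        nonce += 1

# Mining takes many attempts, but verification is one hash:
nonce, digest = mine("block with some transactions", difficulty=4)
assert hashlib.sha256(f"block with some transactions:{nonce}".encode()).hexdigest() == digest
```

&lt;p&gt;Raising the difficulty by one hex digit multiplies the expected number of attempts by 16, which is the "lower target, more attempts" effect described above.&lt;/p&gt;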

&lt;p&gt;That, then, is the rewarding "work" the miners do.&lt;/p&gt;




&lt;p&gt;That's all :)&lt;/p&gt;

</description>
      <category>css</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Why are miners rewarded?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Sun, 08 Jan 2023 06:05:00 +0000</pubDate>
      <link>https://forem.com/aditya172926/why-are-miners-rewarded-43pg</link>
      <guid>https://forem.com/aditya172926/why-are-miners-rewarded-43pg</guid>
      <description>&lt;p&gt;If you are not new to Ethereum or in Blockchain space in general, you have already heard about "Miners".&lt;/p&gt;

&lt;p&gt;Before we get to the rewards miners earn, let's look at what "Mining" is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mining?
&lt;/h2&gt;

&lt;p&gt;Ethereum is a blockchain network, i.e it is a network/chain of blocks of transactions. The process of creating new blocks and adding them to the chain is called Mining.&lt;/p&gt;

&lt;p&gt;Until September 15, 2022, Ethereum miners invested their time and computation power to execute transactions and create new blocks. Now the process has changed.&lt;/p&gt;

&lt;p&gt;So is that all to miners? No, they also help to keep the network safe.&lt;/p&gt;

&lt;p&gt;In decentralized systems, every participant should agree on the order of transactions, e.g. that some process happens in the steps A -&amp;gt; B -&amp;gt; C.&lt;/p&gt;

&lt;p&gt;If the steps are followed in a different order, like A -&amp;gt; C -&amp;gt; B, the result will not be the same. It is important that participants in the network agree on the order in which transactions occur.&lt;/p&gt;

&lt;p&gt;This job also falls on the shoulders of miners. They are responsible for validating the transactions, and their orders on the blockchain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Double-spending
&lt;/h2&gt;

&lt;p&gt;Miners are important in the Ethereum network because they prevent this problem in the digital currency space.&lt;/p&gt;

&lt;p&gt;Double spending is a flaw in any digital cash protocol like Ethereum, Bitcoin, etc. where the same single digital token is spent more than once.&lt;/p&gt;

&lt;p&gt;How does this happen? Outside the digital world, you might call this &lt;em&gt;counterfeit money&lt;/em&gt;. Anyone with expertise can create lots of counterfeit bills of currencies like $, ¥, ₹, etc. and spend them, inflating the currency.&lt;/p&gt;

&lt;p&gt;The validation process and adding the transaction to the chain by a miner help to prevent this issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blockchain 🤝 Miners: How?
&lt;/h2&gt;

&lt;p&gt;When someone executes a transaction function which results in a change in the current state of the chain, it must be reflected in the new state of the blockchain.&lt;/p&gt;

&lt;p&gt;Miners have another job. Large digital services are under constant threat of attacks and manipulation. So when miners validate transactions they produce a &lt;strong&gt;certificate of legitimacy&lt;/strong&gt;, saying that their proposed transactions are legit.&lt;/p&gt;

&lt;p&gt;This is a complex process. Chains like Bitcoin, which use proof of work, require the miners to work out this certificate. Because of it, checking a transaction becomes convenient for everyone else.&lt;/p&gt;

&lt;p&gt;But what do miners get in return? Miners are rewarded with new tokens for their work. As there is no central authority in a decentralized system, miners are crucial for safety, and the reward acts as an incentive for their participation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transaction Mining Process
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A user signs a transaction using their wallet, such as Metamask&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A node is responsible for broadcasting this transaction event to the entire network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every node then adds the transaction data to its own mempool, a list of pending transactions that are yet to be included in a block&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The miner groups transactions from the mempool into a block, aiming to maximise the transaction fees while staying under the block gas limit&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The validity of each transaction is verified, and execution is performed on the local Ethereum Virtual Machine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the certificate is produced, the miner broadcasts the completed block to the network. The event includes metadata such as the certificate, the latest state of the chain, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The other nodes check this event and update their local copies of the blockchain to accept the new state. The transaction is then removed from each node's mempool.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
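&lt;p&gt;The fee-maximising step in the middle of this list can be sketched as a greedy pass over the mempool, ordered by fee per unit of gas. The &lt;code&gt;Tx&lt;/code&gt; shape and the numbers below are made up for illustration; real clients use more sophisticated ordering.&lt;/p&gt;

```python
# A rough sketch of block building: pick transactions from the mempool
# to maximise fees while staying under the block gas limit.
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    gas: int          # gas the transaction will consume
    fee_per_gas: int  # what the sender offers, in gwei

def build_block(mempool, gas_limit):
    block, used = [], 0
    # Highest-paying transactions (per unit of gas) first
    for tx in sorted(mempool, key=lambda t: t.fee_per_gas, reverse=True):
        if used + tx.gas > gas_limit:
            continue  # would bust the block gas limit, skip it
        block.append(tx)
        used += tx.gas
    return block

mempool = [Tx("a", 21_000, 50), Tx("b", 500_000, 120), Tx("c", 800_000, 30)]
chosen = build_block(mempool, gas_limit=600_000)
# "b" and "a" fit and pay the most per gas; "c" would exceed the limit.
```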

&lt;p&gt;That's all :)&lt;/p&gt;

</description>
      <category>ethereum</category>
      <category>blockchain</category>
      <category>web3</category>
      <category>bitcoin</category>
    </item>
    <item>
      <title>Ethereum Gas: What is it?</title>
      <dc:creator>Aditya</dc:creator>
      <pubDate>Sat, 07 Jan 2023 12:42:36 +0000</pubDate>
      <link>https://forem.com/aditya172926/ethereum-gas-what-is-it-1m0p</link>
      <guid>https://forem.com/aditya172926/ethereum-gas-what-is-it-1m0p</guid>
      <description>&lt;p&gt;If Ethereum is a car, the gas is literally the "gas" ⛽&lt;/p&gt;

&lt;p&gt;Gas is the fuel with which operations are done on the Ethereum blockchain. When transactions are made on the network, the users are required to pay a transaction fee: the gas fee.&lt;/p&gt;

&lt;p&gt;Gas is a unit of computation done on the Ethereum network. It denotes the amount of effort that was required to execute a transaction. That's why the user who wants to execute a transaction is charged a fee: they pay for the computation effort.&lt;/p&gt;

&lt;p&gt;The gas fee is not limited to Ethereum; you will find the same on Polygon as well, and basically on anything compatible with the Ethereum Virtual Machine.&lt;/p&gt;

&lt;p&gt;The gas fee is paid in the network's native currency; here that is ether, aka ETH.&lt;/p&gt;

&lt;p&gt;But when you execute a transaction, the gas fee is not usually quoted in ether; it's quoted in gwei. Gwei is a denomination of ETH, just like a meter is a denomination of a kilometer.&lt;/p&gt;

&lt;p&gt;The value of 1 Gwei = 0.000000001 ETH = 10^(-9) ETH.&lt;/p&gt;

&lt;p&gt;There is another denomination of Gwei, i.e Wei which is the smallest.&lt;/p&gt;

&lt;p&gt;1 Wei = 10^(-18) ETH&lt;/p&gt;

&lt;p&gt;For example, say the price of 1 gas is 200 gwei, and a transaction to send some ETH to another address costs 21,000 gas. Your gas fee comes to =&amp;gt; 21,000 * 200 = 4,200,000 gwei = 0.0042 ETH&lt;/p&gt;
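&lt;p&gt;The arithmetic in this example is easy to check in code; the 200 gwei price is, as above, just an assumed figure:&lt;/p&gt;

```python
# Checking the example: a plain ETH transfer costs 21,000 gas,
# and we assume a gas price of 200 gwei per unit of gas.
GWEI_PER_ETH = 10**9  # 1 ETH = 10^9 gwei

gas_used = 21_000
gas_price_gwei = 200

fee_gwei = gas_used * gas_price_gwei  # 4,200,000 gwei
fee_eth = fee_gwei / GWEI_PER_ETH     # 0.0042 ETH
```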

&lt;p&gt;How the price of 1 gas is set is up to the user. Wallets like Metamask suggest a reasonable amount depending on network conditions, but to make the transaction go through faster, you can choose to pay more.&lt;/p&gt;

&lt;p&gt;The higher the gas price, the more priority is given to that transaction by the miners as they earn a chunk out of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Calculating Gas
&lt;/h2&gt;

&lt;p&gt;When a smart contract is deployed on the blockchain, it is compiled into the bytecode and then down to opcodes.&lt;/p&gt;

&lt;p&gt;Opcodes are the operations that run directly on the Ethereum Virtual Machine, like &lt;strong&gt;Add&lt;/strong&gt;, &lt;strong&gt;Mul&lt;/strong&gt;, &lt;strong&gt;Div&lt;/strong&gt;, &lt;strong&gt;Sub&lt;/strong&gt;, etc. Each has a fixed gas cost. In a function of the smart contract, the Gas cost is the sum of all its Opcodes costs.&lt;/p&gt;

&lt;p&gt;So more complex functions require more gas and simpler functions like sending ether, require less.&lt;/p&gt;

&lt;p&gt;The gas fee that Metamask asks you to pay is an estimate: the execution may end up using less gas, and you can set a limit on how much gas may be used. Any unused gas is refunded to your wallet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gas Limits
&lt;/h2&gt;

&lt;p&gt;Each transaction has a gas limit that the user can specify. So if the execution requires more gas than the specified limit, it will fail and revert.&lt;/p&gt;

&lt;p&gt;Similarly, the blockchain network has a limit on the maximum amount of gas for each block. This keeps the computational cost of a block bounded, so nodes can stay in sync with the network even when handling computationally complex functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  London Upgrade
&lt;/h2&gt;

&lt;p&gt;This upgrade was implemented on the Ethereum network in 2021 to improve gas fee estimation and speed up transaction inclusion.&lt;/p&gt;

&lt;p&gt;Before this, wallets like Metamask would give you an estimate of the gas fee based on the history of the network. Now every block has a base fee. This price per unit of gas is calculated from the demand for transactions to be included in a block.&lt;/p&gt;

&lt;p&gt;The base fee is burnt to maintain equilibrium between ETH supply and burn rate. In addition, a priority fee is charged, which rewards miners for execution. The higher the priority fee, the sooner the transaction gets picked up.&lt;/p&gt;

&lt;p&gt;The gas fee is calculated as =&amp;gt; &lt;strong&gt;gas spent * (base fee + priority fee)&lt;/strong&gt;&lt;/p&gt;
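&lt;p&gt;With made-up numbers, the post-London formula splits like this; note that only the priority fee reaches the miner, while the base fee portion is burnt:&lt;/p&gt;

```python
# Post-London fee for a simple transfer, with hypothetical prices.
gas_spent = 21_000
base_fee_gwei = 100      # set by the protocol per block
priority_fee_gwei = 2    # the tip you choose

total_fee_gwei = gas_spent * (base_fee_gwei + priority_fee_gwei)
burnt_gwei = gas_spent * base_fee_gwei          # removed from supply
miner_tip_gwei = gas_spent * priority_fee_gwei  # miner's reward
```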

&lt;h2&gt;
  
  
  Block Sizes
&lt;/h2&gt;

&lt;p&gt;Earlier, blocks had a fixed gas limit of 15 million gas. Now a block can stretch up to 30 million gas, with the base fee adjusting for the next block.&lt;/p&gt;

&lt;p&gt;Why is the base fee adjusted? Because it is used to maintain the equilibrium between the supply and burn rate of ETH. So the base fee for the next block is either increased or decreased depending on the size of the current block.&lt;/p&gt;

&lt;p&gt;The base fee can increase or decrease by a maximum of 12.5% per block, depending on how far the block deviates from the 15 million gas target. This steady climb in gas prices is the reason nodes will not produce 30-million-gas blocks constantly.&lt;/p&gt;

&lt;p&gt;Because if they do, down the line even a simple transaction will require a lot of gas fees.&lt;/p&gt;
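&lt;p&gt;This adjustment can be sketched as a simplified version of the EIP-1559 base fee update rule; the integer-division details differ slightly from the real spec, but the 12.5% bound comes straight from the 1/8 factor:&lt;/p&gt;

```python
# Simplified EIP-1559 base fee update: the base fee moves toward
# equilibrium. A full 30M-gas block pushes it up by the maximum 12.5%,
# an empty block pulls it down by 12.5%, and a block right at the
# 15M-gas target leaves it unchanged.
TARGET_GAS = 15_000_000

def next_base_fee(base_fee, gas_used):
    delta = base_fee * (gas_used - TARGET_GAS) // TARGET_GAS // 8
    return base_fee + delta

base = 100_000_000_000  # 100 gwei, expressed in wei
full = next_base_fee(base, 30_000_000)   # +12.5% after a full block
empty = next_base_fee(base, 0)           # -12.5% after an empty block
```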




&lt;p&gt;That's all :)&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
  </channel>
</rss>
