<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ShannonData.AI</title>
    <description>The latest articles on Forem by ShannonData.AI (@shannonbase).</description>
    <link>https://forem.com/shannonbase</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3505195%2F5674cd1d-0710-4b64-9376-2439cd72980d.png</url>
      <title>Forem: ShannonData.AI</title>
      <link>https://forem.com/shannonbase</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shannonbase"/>
    <language>en</language>
    <item>
      <title>ShannonBase Javascript Engine</title>
      <dc:creator>ShannonData.AI</dc:creator>
      <pubDate>Mon, 27 Apr 2026 03:48:40 +0000</pubDate>
      <link>https://forem.com/shannonbase/shannonbase-javascript-engine-3hll</link>
      <guid>https://forem.com/shannonbase/shannonbase-javascript-engine-3hll</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltvu2r6l6tcctwyyurur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltvu2r6l6tcctwyyurur.png" alt="complain"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image shows a discussion in which someone complained after the release of MySQL 9.7, hoping that support for JavaScript routines would be opened up.&lt;/p&gt;

&lt;p&gt;But don’t worry — ShannonBase already supports a JS engine, allowing you to write and execute JavaScript programs inside ShannonBase just like writing SQL stored procedures, and you can directly manipulate the local database within the JS program.&lt;/p&gt;

&lt;p&gt;The choice to integrate a JavaScript (JS) engine is mainly to lower the development barrier, expand the database’s processing capabilities, and improve support for modern data formats (especially JSON). By embedding a lightweight JS engine into the database, developers can write stored procedures and functions directly in JavaScript, rather than being limited to traditional SQL or stored procedure languages.&lt;/p&gt;

&lt;p&gt;Expanding complex logic processing capabilities: SQL is a set-based declarative language, making it difficult to implement complex business logic or procedural processing. JavaScript, as an imperative programming language, is better suited for data transformations, custom algorithms, and other complex logic.&lt;/p&gt;

&lt;p&gt;Bridging the “last mile” of JSON support: JSON data is naturally compatible with JavaScript. Using JavaScript to handle stored procedures allows developers to manipulate in-memory JSON data in a more native and efficient way.&lt;/p&gt;
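&lt;p&gt;As a minimal illustration of that point (plain JavaScript, runnable in any engine; the sample rows below are invented, not taken from a real table), filtering and reshaping JSON needs no conversion layer at all:&lt;/p&gt;

```javascript
// Rows as a JS stored function might receive them after parsing a JSON
// result set. The sample data is invented for illustration.
const rows = [
  { id: 1, name: "n1", score: 1.1 },
  { id: 2, name: "n2", score: 2.2 },
  { id: 3, name: "n3", score: 3.3 }
];

// JSON maps directly onto native JS objects, so filtering and reshaping
// are one-liners with no marshalling step.
const highScores = rows
  .filter(r => r.score > 1.5)
  .map(r => ({ id: r.id, score: r.score }));

console.log(JSON.stringify(highScores));
// [{"id":2,"score":2.2},{"id":3,"score":3.3}]
```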

&lt;p&gt;Seamless compatibility with existing infrastructure: JavaScript stored programs work seamlessly with traditional SQL stored programs, InnoDB, Lakehouse, and the Rapid engine. This means you can write logic in JS while still leveraging Rapid’s high-performance computing capabilities.&lt;/p&gt;

&lt;p&gt;Wider developer ecosystem: JavaScript is one of the most popular programming languages. Using JS allows a large number of frontend or backend developers to easily perform advanced development inside the database without learning complex SQL procedural languages.&lt;/p&gt;

&lt;p&gt;More options: With the introduction of generative AI features, the JS engine provides flexibility for handling unstructured data and calling external APIs.&lt;/p&gt;

&lt;p&gt;In summary, with the JS engine, ShannonBase achieves “processing logic where the data resides,” avoiding the latency of moving data out of the database just to handle logic, thus improving overall analysis and development efficiency.&lt;/p&gt;

&lt;p&gt;Unlike HeatWave, ShannonBase does not use GraalVM as its JS engine; instead, it uses JerryScript. Compared to GraalVM, JerryScript offers the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extremely lightweight and low resource usage: JerryScript is designed to run in less than 64 KB of RAM and 200 KB of ROM. For a database, this means the JS engine’s memory overhead can be kept minimal, leaving most memory resources for core tasks like data processing, query caching, and connection management. In contrast, GraalVM is designed for general-purpose applications and microservices, with a relatively large base runtime that significantly increases the database’s memory burden.&lt;/li&gt;
&lt;li&gt;Small code size, easy integration and maintenance: JerryScript is written in C99, and its compiled binary is very small (e.g., only about 258 KB when compiled for ARM Thumb-2). This makes it easy to embed into ShannonBase’s C++ codebase without significantly increasing the size of the installation package or binary files. Moreover, as a pure interpreter without complex JIT frameworks, its behavior and resource consumption are more predictable when executing user-provided JS code within the database process, making sandboxing easier.&lt;/li&gt;
&lt;li&gt;Fast startup and no JIT warm-up overhead: JerryScript has no JIT mechanism, so there is no “warm-up” process — code always starts being interpreted immediately. This is ideal for database scenarios, where stored procedures or functions are typically short and called frequently. JerryScript’s “zero startup cost” avoids CPU and memory spikes caused by JIT compilation, ensuring stable database performance and predictable response times.&lt;/li&gt;
&lt;li&gt;Mature embedded API and snapshot support: JerryScript provides a mature C API, making it easy for applications to call and embed directly. Its unique snapshot support allows JavaScript source code to be precompiled into bytecode. These snapshots can be preloaded in the database, further reducing runtime parsing and compilation overhead and significantly improving execution efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this lightweight JS engine, ShannonBase now supports JavaScript stored functions. Because JerryScript’s core standard library does not include database connectivity APIs such as ODBC or DAO, ShannonBase extends it with the sys.exec_sql interface for executing SQL queries inside JS functions. Through sys.exec_sql, you can manipulate the database directly from within a JS function; it returns results in JSON format by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DELIMITER&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;FUNCTION&lt;/span&gt; &lt;span class="n"&gt;query_table&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;RETURNS&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;
&lt;span class="k"&gt;LANGUAGE&lt;/span&gt; &lt;span class="n"&gt;JAVASCRIPT&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="err"&gt;$$&lt;/span&gt;
&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exec_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;"select * from test"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="err"&gt;$$&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;DELIMITER&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, exec_sql inherits the full session state of the current connection.&lt;br&gt;
Example execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;mysql&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;DELIMITER &lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="gp"&gt;mysql&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;CREATE FUNCTION query_table&lt;span class="o"&gt;()&lt;/span&gt; RETURNS TEXT
&lt;span class="gp"&gt;    -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;LANGUAGE JAVASCRIPT AS &lt;span class="nv"&gt;$$&lt;/span&gt;
&lt;span class="gp"&gt;    $&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;function &lt;/span&gt;query&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="gp"&gt;    $&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;     &lt;span class="k"&gt;return &lt;/span&gt;sys.exec_sql&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"select * from test"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="gp"&gt;    $&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="gp"&gt;    $&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;return &lt;/span&gt;query&lt;span class="o"&gt;()&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="gp"&gt;    $&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;Query OK, 0 rows affected (0.01 sec)

&lt;/span&gt;&lt;span class="gp"&gt;mysql&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;DELIMITER &lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="gp"&gt;mysql&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select &lt;/span&gt;query_table&lt;span class="o"&gt;()&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="go"&gt;+---------------------------------------------------------------------------------------------------------------------------------------------+
| query_table()                                                                                                                               |
+---------------------------------------------------------------------------------------------------------------------------------------------+
| [{"score":1.1,"name":"n1","id":1,"gender":"m"},{"score":2.2,"name":"n2","id":2,"gender":"f"},{"score":3.3,"name":"n3","id":3,"gender":"m"}] |
+---------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (2.95 sec)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
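&lt;p&gt;Because the result is JSON, it can be post-processed with ordinary JavaScript before being returned. The sketch below stubs sys.exec_sql with the rows from the example above so that it runs anywhere; inside ShannonBase, the real interface would execute the SQL on the current connection instead:&lt;/p&gt;

```javascript
// Stub standing in for ShannonBase's sys.exec_sql so the sketch runs in any
// JS engine; in ShannonBase the call would query the current connection.
const sys = {
  exec_sql: (_query) =>
    '[{"score":1.1,"name":"n1","id":1,"gender":"m"},' +
    '{"score":2.2,"name":"n2","id":2,"gender":"f"},' +
    '{"score":3.3,"name":"n3","id":3,"gender":"m"}]'
};

// Body of a hypothetical JS stored function: average score per gender.
function avgScoreByGender() {
  const rows = JSON.parse(sys.exec_sql("select * from test"));
  const acc = {};
  for (const r of rows) {
    const g = acc[r.gender] || { sum: 0, n: 0 };
    g.sum += r.score;
    g.n += 1;
    acc[r.gender] = g;
  }
  const result = {};
  for (const key of Object.keys(acc)) result[key] = acc[key].sum / acc[key].n;
  return JSON.stringify(result);
}

console.log(avgScoreByGender());
```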



&lt;p&gt;You can also write functions that use ordinary JavaScript logic with no SQL access at all:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DELIMITER&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;FUNCTION&lt;/span&gt; &lt;span class="n"&gt;IS_EVEN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;VAL&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;RETURNS&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt;
&lt;span class="k"&gt;LANGUAGE&lt;/span&gt; &lt;span class="n"&gt;JAVASCRIPT&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="err"&gt;$$&lt;/span&gt;
&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;isEven&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;isEven&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;VAL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="err"&gt;$$&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;DELIMITER&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you like the project, please give it a star or submit a PR.&lt;/p&gt;

&lt;p&gt;⭐ &lt;a href="https://github.com/Shannon-Data/ShannonBase" rel="noopener noreferrer"&gt;Star the repo&lt;/a&gt;&lt;br&gt;
🧩 &lt;a href="https://github.com/Shannon-Data/ShannonBase/pulls" rel="noopener noreferrer"&gt;Submit PRs&lt;/a&gt;&lt;br&gt;
🐞 &lt;a href="https://github.com/Shannon-Data/ShannonBase/issues" rel="noopener noreferrer"&gt;Open Issues&lt;/a&gt;&lt;br&gt;
💬 &lt;a href="https://github.com/Shannon-Data/ShannonBase/discussions" rel="noopener noreferrer"&gt;Join Discussion&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mysql</category>
      <category>agents</category>
    </item>
    <item>
      <title>ShannonBase HTAP Architecture Analysis: From MySQL Optimizer to Vectorized Execution Engine — Overview</title>
      <dc:creator>ShannonData.AI</dc:creator>
      <pubDate>Mon, 27 Apr 2026 03:43:18 +0000</pubDate>
      <link>https://forem.com/shannonbase/shannonbase-htap-architecture-analysis-from-mysql-optimizer-to-vectorized-execution-engine--2be8</link>
      <guid>https://forem.com/shannonbase/shannonbase-htap-architecture-analysis-from-mysql-optimizer-to-vectorized-execution-engine--2be8</guid>
      <description>&lt;p&gt;&lt;strong&gt;T&lt;/strong&gt;his article is the overview in a series introducing ShannonBase’s query optimization and execution. Subsequent articles will provide detailed introductions to&lt;/p&gt;

&lt;p&gt;each module, systematically analyzing how ShannonBase implements its HTAP capabilities.&lt;/p&gt;

&lt;p&gt;In the modern database field, HTAP (Hybrid Transactional/Analytical Processing) capability is a core metric for evaluating high-performance databases.&lt;/p&gt;

&lt;p&gt;ShannonBase, through its Rapid engine (commonly referred to as IMCS — In-Memory Column Store), builds an efficient columnar vectorized execution flow on top of MySQL.&lt;/p&gt;

&lt;p&gt;The HTAP execution flow of ShannonBase can be summarized in the following five key stages.&lt;/p&gt;

&lt;p&gt;I. Plan Capture &amp;amp; Translation&lt;br&gt;
ShannonBase does not operate completely independently of MySQL; instead, it deeply integrates (Hooks) into MySQL’s optimizer workflow.&lt;/p&gt;

&lt;p&gt;Intervention Timing: In the current version, ShannonBase’s optimizer steps in after MySQL’s optimizer has generated the original AccessPath. Future versions will move this hook earlier, into MySQL’s optimization process itself, so the engine can determine more quickly and accurately whether the current join order is optimal.&lt;/p&gt;

&lt;p&gt;Translation Process: Once MySQL completes its optimization, the translate function in ShannonBase's Rapid engine optimizer is responsible for converting MySQL's optimized AccessPath tree into ShannonBase's internal QueryPlan.&lt;/p&gt;

&lt;p&gt;Node Conversion: It converts MySQL AccessPath nodes into query nodes supported by Rapid. For example, a MySQL aggregation path is converted into ShannonBase’s Aggregate node, and a filter path is converted into a Filter node.&lt;/p&gt;

&lt;p&gt;Fallback Mechanism: For complex operators that ShannonBase cannot yet handle, it encapsulates them using the MySQLNative class (see query_plan.h). This ensures the query can fall back to execution by the native MySQL engine, guaranteeing system robustness. The advantage of this approach is that it allows for an iterative process, first handling the query plans with the greatest impact on performance without affecting overall execution.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>database</category>
      <category>performance</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>ShannonBase HTAP Architecture Analysis: From MySQL Optimizer to Vectorized Execution Engine — Overview</title>
      <dc:creator>ShannonData.AI</dc:creator>
      <pubDate>Thu, 26 Feb 2026 02:49:31 +0000</pubDate>
      <link>https://forem.com/shannonbase/shannonbase-htap-architecture-analysis-from-mysql-optimizer-to-vectorized-execution-engine--561j</link>
      <guid>https://forem.com/shannonbase/shannonbase-htap-architecture-analysis-from-mysql-optimizer-to-vectorized-execution-engine--561j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nbrgewpgwqv1rc328b8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nbrgewpgwqv1rc328b8.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;T&lt;/strong&gt;his article is the overview in a series introducing ShannonBase’s query optimization and execution. Subsequent articles will provide detailed introductions to each module, systematically analyzing how ShannonBase implements its HTAP capabilities.&lt;/p&gt;

&lt;p&gt;In the modern database field, HTAP (Hybrid Transactional/Analytical Processing) capability is a core metric for evaluating high-performance databases.&lt;/p&gt;

&lt;p&gt;ShannonBase, through its Rapid engine (commonly referred to as IMCS — In-Memory Column Store), builds an efficient columnar vectorized execution flow on top of MySQL.&lt;/p&gt;

&lt;p&gt;The HTAP execution flow of ShannonBase can be summarized in the following five key stages.&lt;/p&gt;
&lt;h2&gt;I. Plan Capture &amp;amp; Translation&lt;/h2&gt;

&lt;p&gt;ShannonBase does not operate completely independently of MySQL; instead, it deeply integrates (Hooks) into MySQL’s optimizer workflow.&lt;/p&gt;

&lt;p&gt;Intervention Timing: In the current version, ShannonBase’s optimizer steps in after MySQL’s optimizer has generated the original AccessPath. Future versions will move this hook earlier, into MySQL’s optimization process itself, so the engine can determine more quickly and accurately whether the current join order is optimal.&lt;/p&gt;

&lt;p&gt;Translation Process: Once MySQL completes its optimization, the translate function in ShannonBase's Rapid engine optimizer is responsible for converting MySQL's optimized AccessPath tree into ShannonBase's internal QueryPlan.&lt;/p&gt;

&lt;p&gt;Node Conversion: It converts MySQL AccessPath nodes into query nodes supported by Rapid. For example, a MySQL aggregation path is converted into ShannonBase’s Aggregate node, and a filter path is converted into a Filter node.&lt;/p&gt;

&lt;p&gt;Fallback Mechanism: For complex operators that ShannonBase cannot yet handle, it encapsulates them using the MySQLNative class (see query_plan.h). This ensures the query can fall back to execution by the native MySQL engine, guaranteeing system robustness. The advantage of this approach is that it allows for an iterative process, first handling the query plans with the greatest impact on performance without affecting overall execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan Optimizer::translate_access_path(OptimizeContext *ctx, THD *thd, AccessPath *path, const JOIN *join) {
  if (!path) return nullptr;
  switch (path-&amp;gt;type) {
    case AccessPath::TABLE_SCAN:
    case AccessPath::INDEX_SCAN:
    case AccessPath::INDEX_RANGE_SCAN: {
      auto scan = std::make_unique&amp;lt;ScanTable&amp;gt;();
      scan-&amp;gt;original_path = path;
      TABLE *table{nullptr};
      if (path-&amp;gt;type == AccessPath::INDEX_SCAN) {
        table = path-&amp;gt;index_scan().table;
        scan-&amp;gt;scan_type = ScanTable::ScanType::INDEX_SCAN;
      } else if (path-&amp;gt;type == AccessPath::INDEX_RANGE_SCAN) {
        const auto &amp;amp;irs = path-&amp;gt;index_range_scan();
        if (irs.used_key_part != nullptr &amp;amp;&amp;amp; irs.num_used_key_parts &amp;gt; 0 &amp;amp;&amp;amp; irs.used_key_part[0].field != nullptr)
          table = irs.used_key_part[0].field-&amp;gt;table;
        scan-&amp;gt;scan_type = ScanTable::ScanType::INDEX_SCAN;
      } else {
        table = path-&amp;gt;table_scan().table;
        scan-&amp;gt;scan_type = ScanTable::ScanType::FULL_TABLE_SCAN;
      }
      assert(table);
      auto share = ShannonBase::shannon_loaded_tables-&amp;gt;get(table-&amp;gt;s-&amp;gt;db.str, table-&amp;gt;s-&amp;gt;table_name.str);
      auto table_id = share ? share-&amp;gt;m_tableid : 0;
      scan-&amp;gt;rpd_table = (share-&amp;gt;is_partitioned) ? Imcs::Imcs::instance()-&amp;gt;get_rpd_parttable(table_id)
                                                : Imcs::Imcs::instance()-&amp;gt;get_rpd_table(table_id);
      assert(scan-&amp;gt;rpd_table);
      scan-&amp;gt;estimated_rows = path-&amp;gt;num_output_rows();
      scan-&amp;gt;source_table = table;
      return scan;
    } break;
    case AccessPath::HASH_JOIN: {
      auto hashjoin_node = std::make_unique&amp;lt;HashJoin&amp;gt;();
      hashjoin_node-&amp;gt;original_path = path;
      auto &amp;amp;param = path-&amp;gt;hash_join();
      // Recursively convert children
      hashjoin_node-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.outer, join));
      hashjoin_node-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.inner, join));
      // Extract Join Conditions
      if (param.join_predicate) {
        for (auto *cond : param.join_predicate-&amp;gt;expr-&amp;gt;equijoin_conditions) {
          hashjoin_node-&amp;gt;join_conditions.push_back(cond);
        }
        // Handle other conditions...
      }
      hashjoin_node-&amp;gt;allow_spill = param.allow_spill_to_disk;
      hashjoin_node-&amp;gt;estimated_rows = path-&amp;gt;num_output_rows();
      return hashjoin_node;
    } break;
    case AccessPath::NESTED_LOOP_JOIN: {
      auto nestloop_node = std::make_unique&amp;lt;NestLoopJoin&amp;gt;();
      nestloop_node-&amp;gt;original_path = path;
      auto &amp;amp;param = path-&amp;gt;nested_loop_join();
      // Recursively convert children
      nestloop_node-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.outer, join));
      nestloop_node-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.inner, join));
      nestloop_node-&amp;gt;pfs_batch_mode = param.pfs_batch_mode;
      nestloop_node-&amp;gt;already_expanded_predicates = param.already_expanded_predicates;
      // Extract Join Conditions
      nestloop_node-&amp;gt;source_join_predicate = param.join_predicate;
      if (param.join_predicate) {
        for (auto *cond : param.join_predicate-&amp;gt;expr-&amp;gt;equijoin_conditions) {
          nestloop_node-&amp;gt;join_conditions.push_back(cond);
        }
        // Handle other conditions...
      }
      nestloop_node-&amp;gt;equijoin_predicates = param.equijoin_predicates;
      return nestloop_node;
    } break;
    case AccessPath::AGGREGATE: {
      auto agg = std::make_unique&amp;lt;LocalAgg&amp;gt;();
      agg-&amp;gt;original_path = path;
      auto param = path-&amp;gt;aggregate();
      agg-&amp;gt;olap = param.olap;
      agg-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.child, join));
      fill_aggregate_info(agg.get(), join);
      return agg;
    } break;
    case AccessPath::LIMIT_OFFSET: {
      auto limit = std::make_unique&amp;lt;Limit&amp;gt;();
      limit-&amp;gt;original_path = path;
      auto param = path-&amp;gt;limit_offset();
      limit-&amp;gt;limit = (param.limit - param.offset);  // mysql limit is (sql limit + sql offset)
      limit-&amp;gt;offset = param.offset;
      limit-&amp;gt;count_all_rows = param.count_all_rows;
      limit-&amp;gt;reject_multiple_rows = param.reject_multiple_rows;
      limit-&amp;gt;send_records_override = (param.send_records_override) ? *param.send_records_override : 0;
      limit-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.child, join));
      return limit;
    } break;
    case AccessPath::FILTER: {
      auto filter = std::make_unique&amp;lt;Filter&amp;gt;();
      filter-&amp;gt;original_path = path;
      auto param = path-&amp;gt;filter();
      filter-&amp;gt;condition = param.condition;
      filter-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.child, join));
      filter-&amp;gt;predict = Optimizer::convert_item_to_predicate(thd, param.condition);
      return filter;
    } break;
    case AccessPath::SORT: {
      auto param = path-&amp;gt;sort();
      // Only when it has a `limit` clause can it be converted to `TopN`, such as `select xxx from xxx order by xx limit xx`.
      if (param.limit &amp;gt; 0 &amp;amp;&amp;amp; param.limit != HA_POS_ERROR) {
        auto topn = std::make_unique&amp;lt;TopN&amp;gt;();
        topn-&amp;gt;original_path = path;
        topn-&amp;gt;order = param.order;
        topn-&amp;gt;limit = param.limit;
        topn-&amp;gt;filesort = param.filesort;
        topn-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.child, join));
        topn-&amp;gt;estimated_rows = std::min(topn-&amp;gt;children[0]-&amp;gt;estimated_rows, (ha_rows)param.limit);
        return topn;
      } else {
        // without `LIMIT` clause, keep it as `order by`
        auto sort = std::make_unique&amp;lt;Sort&amp;gt;();
        sort-&amp;gt;original_path = path;
        sort-&amp;gt;order = param.order;
        sort-&amp;gt;filesort = param.filesort;
        sort-&amp;gt;limit = HA_POS_ERROR;
        sort-&amp;gt;children.push_back(translate_access_path(ctx, thd, param.child, join));
        sort-&amp;gt;estimated_rows = sort-&amp;gt;children[0]-&amp;gt;estimated_rows;
        sort-&amp;gt;remove_duplicates = param.remove_duplicates;
        sort-&amp;gt;unwrap_rollup = param.unwrap_rollup;
        sort-&amp;gt;force_sort_rowids = param.force_sort_rowids;
        sort-&amp;gt;tables_to_get_rowid_for = param.tables_to_get_rowid_for;
        return sort;
      }
      assert(false);
      return nullptr;
    } break;
    case AccessPath::EQ_REF: {
      auto param = path-&amp;gt;eq_ref();
      // Check if this is a dynamic join condition (not a constant lookup)
      bool dynamic_lookup{false};
      for (uint i = 0; i &amp;lt; param.ref-&amp;gt;key_parts; i++) {
        Item *item = param.ref-&amp;gt;items[i];
        if (!item) continue;
        // Check if this item references fields from other tables or is a non-constant expression
        if (!item-&amp;gt;const_item()) {
          dynamic_lookup = true;
          break;
        }
      }
      if (!dynamic_lookup) {
        auto scan = std::make_unique&amp;lt;ScanTable&amp;gt;();
        scan-&amp;gt;original_path = path;
        scan-&amp;gt;source_table = param.table;
        scan-&amp;gt;scan_type = ScanTable::ScanType::EQ_REF_SCAN;
        auto share = ShannonBase::shannon_loaded_tables-&amp;gt;get(scan-&amp;gt;source_table-&amp;gt;s-&amp;gt;db.str,
                                                             scan-&amp;gt;source_table-&amp;gt;s-&amp;gt;table_name.str);
        auto table_id = share ? share-&amp;gt;m_tableid : 0;
        scan-&amp;gt;rpd_table = (share-&amp;gt;is_partitioned) ? Imcs::Imcs::instance()-&amp;gt;get_rpd_parttable(table_id)
                                                  : Imcs::Imcs::instance()-&amp;gt;get_rpd_table(table_id);
        assert(scan-&amp;gt;rpd_table);
        // Constant lookup: every ref item is const, so the lookup condition can
        // be converted into a static prune predicate and pushed down to the scan.
        // (Dynamic join conditions such as "c.customer_id = o.customer_id" take
        // the else branch below and stay with the join executor.)
        scan-&amp;gt;prune_predicate = Optimizer::convert_item_to_predicate(thd, param.ref, param.table);
        return scan;
      } else {  // if it's dynamic join condition(such as a.id = b.id), the join condition cannot be pushed down.
        auto eq_ref = std::make_unique&amp;lt;MySQLNative&amp;gt;();
        eq_ref-&amp;gt;original_path = path;
        return eq_ref;
      }
      assert(false);
      return nullptr;  // never reached.
    } break;
    default: {
      // If Rapid cannot handle the path, re-encapsulate it as a fallback node.
      auto original = std::make_unique&amp;lt;MySQLNative&amp;gt;();
      original-&amp;gt;original_path = path;
      original-&amp;gt;estimated_rows = path-&amp;gt;num_output_rows();
      // no need to translate further, because it's already a MySQL AccessPath.
      return original;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;II. Core Optimization Rules&lt;/h2&gt;

&lt;p&gt;Once the internal QueryPlan is generated, the Optimizer applies a series of optimization rules. The core optimizations include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
void Optimizer::AddDefaultRules() {
  // Be careful with the order of rules: they are applied in the order they were added.
  // Make predicates available
  m_optimize_rules.emplace_back(std::make_unique&amp;lt;PredicatePushDown&amp;gt;());
  // Use predicates for IMCU pruning
  m_optimize_rules.emplace_back(std::make_unique&amp;lt;StorageIndexPrune&amp;gt;());
  // After predicates clarify needed columns
  m_optimize_rules.emplace_back(std::make_unique&amp;lt;ProjectionPruning&amp;gt;());
  // Before aggregation changes structure
  m_optimize_rules.emplace_back(std::make_unique&amp;lt;TopNPushDown&amp;gt;());
  // aggregation push down to lower level operators
  m_optimize_rules.emplace_back(std::make_unique&amp;lt;AggregationPushDown&amp;gt;());
  // Re-run after structure changes
  m_optimize_rules.emplace_back(std::make_unique&amp;lt;ProjectionPruning&amp;gt;());
  // Final reordering with all optimizations
  m_optimize_rules.emplace_back(std::make_unique&amp;lt;JoinReOrder&amp;gt;());
  m_registered.store(true, std::memory_order_relaxed);
}
Plan Optimizer::Optimize(const OptimizeContext *context, const THD *thd, const JOIN *join) {
  if (!m_registered.load()) AddDefaultRules();
  if (m_optimize_rules.empty()) return nullptr;
  QueryPlan plan;
  plan.root = get_query_plan(const_cast&amp;lt;OptimizeContext *&amp;gt;(context), const_cast&amp;lt;THD *&amp;gt;(thd), const_cast&amp;lt;JOIN *&amp;gt;(join));
  for (auto &amp;amp;rule : m_optimize_rules) {
    Timer rule_timer;
    rule-&amp;gt;apply(plan.root);
  }
  return std::move(plan.root);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Predicate PushDown:&lt;/strong&gt; Pushes filter conditions as far down as possible towards the storage layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage Index Prune:&lt;/strong&gt; This is key to HTAP performance improvement. With the Storage Index (see writable_access_path.inc), the engine can use statistical information (such as Max/Min) to exclude non-matching data blocks before reading them, significantly reducing I/O pressure.&lt;/p&gt;
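&lt;p&gt;The idea behind Max/Min pruning can be sketched in a few lines. This is an illustrative model only, not ShannonBase’s actual IMCU layout; the block contents and names are invented:&lt;/p&gt;

```javascript
// Illustrative zone-map pruning: each block keeps min/max statistics for a
// column, so a predicate like "col > 40" can skip whole blocks without
// reading their rows. Block contents are invented sample data.
const blocks = [
  { min: 1,  max: 10, rows: [3, 7, 10] },
  { min: 12, max: 35, rows: [12, 20, 35] },
  { min: 41, max: 90, rows: [41, 66, 90] }
];

// Keep only blocks whose [min, max] range can contain a matching value;
// the rest are excluded before any row data is touched.
function pruneGreaterThan(bound) {
  return blocks.filter(b => b.max > bound);
}

const survivors = pruneGreaterThan(40);
console.log(survivors.length); // 1: only the last block can match
```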

&lt;p&gt;&lt;strong&gt;Projection Pruning:&lt;/strong&gt; A key advantage of columnar storage is reading only the required columns. The optimizer calculates projected_columns to ensure the execution engine loads only the relevant column data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aggregation PushDown:&lt;/strong&gt; When possible, aggregation operations are completed directly during the scan, avoiding the generation of large amounts of intermediate row data.&lt;/p&gt;
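As the comment at the top of AddDefaultRules stresses, the rules run in exactly the order they were registered. A stripped-down sketch of such an ordered rewrite pipeline (the `Rule`, `Tag`, and `Pipeline` types below are illustrative, not ShannonBase code) makes the property concrete:

```cpp
#include <memory>
#include <string>
#include <vector>

// Minimal sketch of an ordered rewrite pipeline: rules are applied in
// exactly the order they were registered, mirroring m_optimize_rules.
struct Rule {
  virtual ~Rule() = default;
  virtual void apply(std::string &plan) = 0;  // string is a stand-in for the plan tree
};

struct Tag : Rule {
  std::string name;
  explicit Tag(std::string n) : name(std::move(n)) {}
  void apply(std::string &plan) override { plan += name + ";"; }
};

struct Pipeline {
  std::vector<std::unique_ptr<Rule>> rules;
  void add(std::unique_ptr<Rule> r) { rules.emplace_back(std::move(r)); }
  std::string run() {
    std::string plan;
    for (auto &r : rules) r->apply(plan);  // registration order == application order
    return plan;
  }
};

inline std::string demo_order() {
  Pipeline p;
  p.add(std::make_unique<Tag>("PredicatePushDown"));
  p.add(std::make_unique<Tag>("StorageIndexPrune"));
  p.add(std::make_unique<Tag>("ProjectionPruning"));
  return p.run();
}
```

This is why ProjectionPruning appears twice in the real rule list: pushing aggregation down changes the plan's structure, so the pruning pass has to run again afterwards.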

&lt;h2&gt;
  
  
  III. Vectorized AccessPath Reconstruction
&lt;/h2&gt;

&lt;p&gt;Once optimization is complete, the QueryPlan is mapped back into an execution framework that MySQL understands, but by this point the path already carries “vectorized” capabilities.&lt;br&gt;
The RapidAccessPath structure is defined in access_path.h:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ...
  bool m_vectorized{false}; 
  ...
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Its ToAccessPath method performs this conversion and sets the m_vectorized flag to true, marking the query path as ready to enter ShannonBase's high-performance execution channel.&lt;/p&gt;

&lt;h2&gt;
  
  
  IV. Vectorized Operator Instantiation (Iterator Creation)
&lt;/h2&gt;

&lt;p&gt;This is the core part of HTAP execution. The PathGenerator class is responsible for instantiating the AccessPath into a concrete RowIterator. ShannonBase retains MySQL's execution engine. The advantage of this approach is that it maximizes the reuse of the original execution mechanisms, related concepts, data structures, and so on.&lt;/p&gt;

&lt;p&gt;Vectorized Operator Library: Rapid implements a series of vectorized operators (currently completed operators include TableScanIterator, HashJoinIterator, AggregateIterator, etc., with more vectorized execution operators to be provided in the future).&lt;/p&gt;

&lt;p&gt;Execution Mode Selection: If path-&amp;gt;vectorized is true, the PathGenerator will prioritize creating ShannonBase's own fast operators.&lt;/p&gt;

&lt;p&gt;Data Flow Direction: These operators no longer process data in the traditional MySQL way of “one row at a time” (Iterative Row-at-a-time), but rather process data in “one batch/vector at a time,” fully leveraging the acceleration of modern CPU SIMD instruction sets.&lt;br&gt;
&lt;/p&gt;
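The difference between the two data-flow models can be sketched with a toy column sum (the `RowSource` type and both functions below are illustrative, not ShannonBase code): row-at-a-time execution pays per-row control-flow overhead on every value, while batch execution runs a tight loop over a contiguous chunk that compilers can unroll and auto-vectorize with SIMD instructions.

```cpp
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <vector>

// Row-at-a-time (Volcano style): one call-sized step per value.
struct RowSource {
  const std::vector<int64_t> *data;
  size_t pos;
  bool next(int64_t &out) {
    if (pos >= data->size()) return false;
    out = (*data)[pos++];
    return true;
  }
};

inline int64_t sum_row_at_a_time(const std::vector<int64_t> &col) {
  RowSource src{&col, 0};
  int64_t v, total = 0;
  while (src.next(v)) total += v;  // per-row branching on every iteration
  return total;
}

// Batch-at-a-time: operate on a whole contiguous chunk of the column.
// The tight loop has no per-row branching, so it is SIMD-friendly.
inline int64_t sum_batch(const std::vector<int64_t> &col) {
  return std::accumulate(col.begin(), col.end(), int64_t{0});
}
```

Both produce the same answer; the batch form simply exposes far more work per function call, which is the core of the vectorized execution advantage.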

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unique_ptr_destroy_only&amp;lt;RowIterator&amp;gt; PathGenerator::CreateIteratorFromAccessPath(THD *thd, MEM_ROOT *mem_root,
                                                                                 OptimizeContext *context,
                                                                                 AccessPath *top_path, JOIN *top_join,
                                                                                 bool top_eligible_for_batch_mode) {
  unique_ptr_destroy_only&amp;lt;RowIterator&amp;gt; ret;
  Mem_root_array&amp;lt;IteratorToBeCreated&amp;gt; todo(mem_root);
  todo.push_back({top_path, top_join, top_eligible_for_batch_mode, &amp;amp;ret, {}});
  // The access path trees can be pretty deep, and the stack frames can be big
  // on certain compilers/setups, so instead of explicit recursion, we push jobs
  // onto a MEM_ROOT-backed stack. This uses a little more RAM (the MEM_ROOT
  // typically lives to the end of the query), but reduces the stack usage
  // greatly.
  //
  // The general rule is that if an iterator requires any children, it will push
  // jobs for their access paths at the end of the stack and then re-push
  // itself. When the children are instantiated and we get back to the original
  // iterator, we'll actually instantiate it. (We distinguish between the two
  // cases on basis of whether job.children has been allocated or not; the child
  // iterator's destination will point into this array. The child list needs
  // to be allocated in a way that doesn't move around if the TODO job list
  // is reallocated, which we do by means of allocating it directly on the
  // MEM_ROOT.)
  while (!todo.empty()) {
    IteratorToBeCreated job = todo.back();
    todo.pop_back();
    AccessPath *path = job.path;
    JOIN *join = job.join;
    bool eligible_for_batch_mode = job.eligible_for_batch_mode;
    if (job.join != nullptr) {
      assert(!job.join-&amp;gt;needs_finalize);
    }
    unique_ptr_destroy_only&amp;lt;RowIterator&amp;gt; iterator;
    ha_rows *examined_rows = nullptr;
    if (path-&amp;gt;count_examined_rows &amp;amp;&amp;amp; join != nullptr) {
      examined_rows = &amp;amp;join-&amp;gt;examined_rows;
    }
    switch (path-&amp;gt;type) {
      case AccessPath::TABLE_SCAN: {
        const auto &amp;amp;param = path-&amp;gt;table_scan();
        std::unique_ptr&amp;lt;Imcs::Predicate&amp;gt; predicate{nullptr};
        std::vector&amp;lt;uint32_t&amp;gt; projection;
        ha_rows limit{HA_POS_ERROR};
        ha_rows offset{0};
        bool use_storage_index{false};
        if (path-&amp;gt;secondary_engine_data) {
          auto rapid_scan_param = static_cast&amp;lt;RapidScanParameters *&amp;gt;(path-&amp;gt;secondary_engine_data);
          predicate = std::move(rapid_scan_param-&amp;gt;prune_predicate);
          projection = std::move(rapid_scan_param-&amp;gt;projected_columns);
          limit = rapid_scan_param-&amp;gt;limit;
          offset = rapid_scan_param-&amp;gt;offset;
          use_storage_index = true;
          rapid_scan_param-&amp;gt;~RapidScanParameters();
#ifndef NDEBUG
          if (predicate) {
            DBUG_PRINT("rapid_optimizer",
                       ("TABLE_SCAN: Passing predicate to iterator: %s", predicate-&amp;gt;to_string().c_str()));
          }
          if (!projection.empty()) {
            DBUG_PRINT("rapid_optimizer", ("TABLE_SCAN: Column projection enabled (%zu columns)", projection.size()));
          }
#endif
        }
        // Here param.table may be a temp table or an in-memory temp table.
        if (path-&amp;gt;vectorized &amp;amp;&amp;amp; param.table-&amp;gt;s-&amp;gt;table_category == enum_table_category::TABLE_CATEGORY_USER) {
          assert(param.table-&amp;gt;s-&amp;gt;is_secondary_engine());
          iterator = NewIterator&amp;lt;ShannonBase::Executor::VectorizedTableScanIterator&amp;gt;(
              thd, mem_root, param.table, path-&amp;gt;num_output_rows(), examined_rows, std::move(predicate), projection,
              limit, offset, use_storage_index);
        } else
          iterator = NewIterator&amp;lt;TableScanIterator&amp;gt;(thd, mem_root, param.table, path-&amp;gt;num_output_rows(), examined_rows);
        break;
      }
      ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace ShannonBase {
namespace Executor {
/**
 * VectorizedTableScanIterator - A vectorized table scan iterator that processes data in batches
 *
 * This iterator implements a vectorized execution model for table scanning, where data is
 * processed in batches rather than row-by-row. It leverages columnar storage and SIMD
 * optimizations to improve cache locality and CPU efficiency.
 */
class VectorizedTableScanIterator : public TableRowIterator {
 public:
  VectorizedTableScanIterator(THD *thd, TABLE *table, double expected_rows, ha_rows *examined_rows,
                              std::unique_ptr&amp;lt;Imcs::Predicate&amp;gt; predicate = nullptr,
                              const std::vector&amp;lt;uint32_t&amp;gt; &amp;amp;projection = {}, ha_rows limit = HA_POS_ERROR,
                              ha_rows offset = 0, bool use_storage_index = false);
  bool Init() override;
  int Read() override;
  /**
   * Set a filter function to be applied during scanning
   * @param filter Filter function that returns true for rows to keep
   */
  void set_filter(filter_func_t filter) { m_filter = filter; }
  size_t GetCurrentBatchSize() const { return m_batch_size; }
 private:
  size_t EstimateRowSize() const;
  /**
   * Calculate the optimal batch size based on system characteristics
   * Considers cache line size, row size, and expected row count
   * @param expected_rows Estimated number of rows to process
   * @return Optimal batch size in number of rows
   */
  size_t CalculateOptimalBatchSize(double expected_rows);
  void CacheActiveFields();
  /**
   * Preallocate memory for column chunks to avoid runtime allocations
   * Sets up the columnar storage buffers for batch processing
   */
  void PreallocateColumnChunks();
  int ReadNextBatch();
  inline void ClearBatchData() {
    for (auto &amp;amp;chunk : m_col_chunks) chunk.clear();
  }
  void UpdatePerformanceMetrics(std::chrono::high_resolution_clock::time_point start_time);
  /**
   * Adapt batch size based on runtime performance characteristics
   * Implements dynamic batch size adjustment for optimal performance
   */
  void AdaptBatchSize();
  /**
   * Populate the current row from batch data to MySQL row format
   * @return 0 on success, error code on failure
   */
  int PopulateCurrentRow();
  /**
   * Process field data from column chunk to MySQL field format
   * Dispatches to specialized handlers based on field type
   * @param field MySQL field structure
   * @param col_chunk Column chunk containing the data
   * @param rowid Row index within the current batch
   */
  inline void ProcessFieldData(Field *field, const ShannonBase::Executor::ColumnChunk &amp;amp;col_chunk, size_t rowid) {
    if (Utils::Util::is_string(field-&amp;gt;type()) || Utils::Util::is_blob(field-&amp;gt;type())) {
      ProcessStringField(field, col_chunk, rowid);
    } else {
      ProcessNumericField(field, col_chunk, rowid);
    }
  }
  ...
  }

int VectorizedTableScanIterator::Read() {
  int result{ShannonBase::SHANNON_SUCCESS};
  if (m_batch_exhausted &amp;amp;&amp;amp; !m_eof_reached) {
    result = ReadNextBatch();
    if (result) {
      if (result == HA_ERR_END_OF_FILE &amp;amp;&amp;amp; m_curr_batch_size == 0) return HandleError(HA_ERR_END_OF_FILE);
      if (result != HA_ERR_END_OF_FILE) return HandleError(result);
    }
  }
  if (m_curr_row_in_batch &amp;gt;= m_curr_batch_size) {
    if (m_eof_reached) {
      return HandleError(HA_ERR_END_OF_FILE);
    } else {  // read the next batch.
      m_batch_exhausted = true;
      return Read();
    }
  }
  // fill up the data to table-&amp;gt;field
  result = PopulateCurrentRow();
  if (result) return HandleError(result);
  // move to the next row.
  m_curr_row_in_batch++;
  m_metrics.total_rows++;
  if (m_curr_row_in_batch &amp;gt;= m_curr_batch_size) {
    m_batch_exhausted = true;
    if (!m_eof_reached) m_curr_row_in_batch = 0;
  }
  return result;
}
int VectorizedTableScanIterator::ReadNextBatch() {
  auto batch_start = std::chrono::high_resolution_clock::now();
  ClearBatchData();
  size_t read_cnt = 0;
  int result = m_cursor-&amp;gt;next(m_batch_size, m_col_chunks, read_cnt);
  if (result != 0) {
    if (result == HA_ERR_END_OF_FILE) {
      m_eof_reached = true;
      if (read_cnt) {
        m_batch_exhausted = false;
        m_metrics.total_batches++;
        UpdatePerformanceMetrics(batch_start);
      }
      m_curr_batch_size = read_cnt;
      m_curr_row_in_batch = 0;
      return HA_ERR_END_OF_FILE;
    }
    if (++m_metrics.error_count &amp;gt; 10) return HA_ERR_GENERIC;
    if (result == HA_ERR_RECORD_DELETED &amp;amp;&amp;amp; !thd()-&amp;gt;killed) return ReadNextBatch();
    return result;
  }
  if (read_cnt == 0) {  // no data read, therefore set to EOF.
    m_eof_reached = true;
    return HA_ERR_END_OF_FILE;
  }
  m_curr_batch_size = read_cnt;
  m_curr_row_in_batch = 0;
  m_batch_exhausted = false;
  m_metrics.total_batches++;
  UpdatePerformanceMetrics(batch_start);
  AdaptBatchSize();
  return ShannonBase::SHANNON_SUCCESS;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  V. Storage Engine Integration
&lt;/h2&gt;

&lt;p&gt;Finally, the execution flow interfaces with the storage layer through ha_shannon_rapid.cc.&lt;/p&gt;

&lt;p&gt;Handler Management: The ha_rapid plugin opens tables via the open interface. If it is a partitioned table, it acquires rpd_parttable.&lt;/p&gt;

&lt;p&gt;Columnar Scan: Execution operators interact with the In-Memory Columnar Units (IMCU) in memory via the RapidCursor.&lt;/p&gt;

&lt;p&gt;HTAP Synchronization Mechanism: The SelfLoadManager and Populator mentioned in the code are responsible for synchronously loading/real-time streaming row-based data from InnoDB into the columnar storage space of the Rapid engine, ensuring that HTAP queries can see the latest transactional data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ShannonBase’s HTAP execution flow follows a model of “deep integration, smooth enhancement.” It retains MySQL’s consistent external interface, but internally achieves its capabilities through:&lt;/p&gt;

&lt;p&gt;Intelligent Translation Layer: seamlessly translating MySQL plans.&lt;br&gt;
High-Performance Optimization Rules: implementing index pruning and predicate pushdown.&lt;/p&gt;

&lt;p&gt;Vectorized Execution Engine: Leveraging columnar storage and batch processing or vector processing to achieve a leap in analytical performance.&lt;/p&gt;

&lt;p&gt;This design allows users to directly gain the ability to handle large-scale analytical tasks without modifying business SQL, truly realizing the integration of OLTP and OLAP.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nbrgewpgwqv1rc328b8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nbrgewpgwqv1rc328b8.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;br&gt;
ShannonBase is designed for data + AI&lt;br&gt;
• MySQL-compatible SQL engine&lt;br&gt;
• Built-in Vector &amp;amp; ML capabilities&lt;br&gt;
• HTAP / IMCS for AI-native workloads&lt;br&gt;
⭐ &lt;a href="https://github.com/Shannon-Data/ShannonBase" rel="noopener noreferrer"&gt;Star the repo&lt;/a&gt;&lt;br&gt;
🧩 &lt;a href="https://github.com/Shannon-Data/ShannonBase/pulls" rel="noopener noreferrer"&gt;Submit PRs&lt;/a&gt;&lt;br&gt;
🐞 &lt;a href="https://github.com/Shannon-Data/ShannonBase/issues" rel="noopener noreferrer"&gt;Open Issues&lt;/a&gt;&lt;br&gt;
💬 &lt;a href="https://github.com/Shannon-Data/ShannonBase/discussions" rel="noopener noreferrer"&gt;Join Discussion&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>opensource</category>
      <category>mysqlheatwave</category>
      <category>shannonbase</category>
    </item>
    <item>
      <title>Partition-Based Parallel Data Loading: An Architectural Practice for Improving Mass Data Import Efficiency</title>
      <dc:creator>ShannonData.AI</dc:creator>
      <pubDate>Thu, 16 Oct 2025 03:46:01 +0000</pubDate>
      <link>https://forem.com/shannonbase/partition-based-parallel-data-loading-an-architectural-practice-for-improving-mass-data-import-4jn4</link>
      <guid>https://forem.com/shannonbase/partition-based-parallel-data-loading-an-architectural-practice-for-improving-mass-data-import-4jn4</guid>
      <description>&lt;p&gt;In modern database systems, data volume is growing exponentially. How to efficiently load data from primary storage (such as InnoDB) to secondary engines (such as in-memory analytical engines, columnar engines, etc.) has become a critical performance challenge. Traditional serial loading methods often become a system bottleneck when dealing with large tables, especially partitioned tables. This article will delve into a parallel data loading scheme for partitioned tables, providing a detailed analysis of its design philosophy, implementation details, and the benefits it brings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrae6x08d78c3jsok9u2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrae6x08d78c3jsok9u2.png" alt=" " width="626" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I. Why Parallel Loading? Drawbacks of the Traditional Approach&lt;/p&gt;

&lt;p&gt;Traditional data loading typically employs a single-threaded sequential scan (Serial Scan). A worker thread reads each partition of the source data table in order, processes rows one by one, and then writes them to the target engine. The drawbacks of this model become glaringly obvious as data scales:&lt;/p&gt;

&lt;p&gt;Wasted Hardware Resources: Modern servers are commonly equipped with multi-core CPUs. The single-threaded model utilizes only one core, leaving a significant amount of CPU resources idle and failing to harness the hardware’s full potential.&lt;br&gt;
Significant Performance Bottleneck: The loading speed is entirely limited by the processing power of a single core and the throughput of a single I/O stream. When data volume reaches the terabyte level, the loading process can take hours, severely impacting data availability and business real-time requirements.&lt;br&gt;
Poor Scalability: Even if the server is upgraded with more CPU cores, the performance of serial loading cannot be improved. This architecture lacks horizontal scalability and cannot cope with future, faster data growth demands.&lt;br&gt;
Disregard for Partitioned Table Advantages: Partitioned tables physically split data into independent, individually manageable data blocks. This provides a natural foundation for parallel processing. However, traditional serial loading methods completely ignore this advantage, still treating the table as a single, massive entity.&lt;br&gt;
Therefore, to break the single-point performance bottleneck, fully utilize system resources, and embrace the architectural advantages brought by data partitioning, adopting a parallel loading solution is imperative.&lt;/p&gt;

&lt;p&gt;II. Key Architectural Design Points for Parallel Loading&lt;/p&gt;

&lt;p&gt;Our goal is to decompose the massive monolithic task of data loading into a set of smaller tasks that can be executed in parallel. Based on this concept, we have designed the following core points:&lt;/p&gt;

&lt;p&gt;Task Granularity: A single partition within a partitioned table is treated as the smallest unit of work for parallel processing. Loading the data of each partition is considered an independent task, making task division clear and natural.&lt;br&gt;
&lt;code&gt;unsigned int num_threads = std::thread::hardware_concurrency() * 0.8;&lt;br&gt;
  if (num_threads == 0) num_threads = SHANNON_PARTS_PARALLEL;&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Concurrency Model: A thread pool model is introduced. The system creates a group of worker threads based on hardware configuration (e.g., std::thread::hardware_concurrency()). These threads atomically fetch (fetch-and-add) pending partition tasks from a shared task list, achieving dynamic load balancing.&lt;br&gt;
Resource Isolation &amp;amp; Context Management: This is the most critical and complex aspect of the parallel design. In a database kernel environment, multiple threads operating in parallel can easily cause race conditions and state pollution. Therefore, it is essential to provide a completely isolated execution context for each worker thread. This includes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bool PartitionLoadThreadContext::initialize(const Rapid_load_context *context) {
  // Create THD
  m_thd = new THD;
  if (!m_thd) return true;

  m_thd-&amp;gt;set_new_thread_id();
  m_thd-&amp;gt;thread_stack = (char *)this;
  m_thd-&amp;gt;set_command(COM_DAEMON);
  m_thd-&amp;gt;security_context()-&amp;gt;skip_grants();
  m_thd-&amp;gt;system_thread = NON_SYSTEM_THREAD;
  m_thd-&amp;gt;store_globals();
  m_thd-&amp;gt;lex-&amp;gt;sql_command = SQLCOM_SELECT;

  // Open table from source table share.
  TABLE_SHARE *share = context-&amp;gt;m_table-&amp;gt;s;
  m_table = (TABLE *)m_thd-&amp;gt;mem_root-&amp;gt;Alloc(sizeof(TABLE));
  if (!m_table) return true;
  // Get a copy of the source TABLE object with its table share. The TABLE will be used for fetching data from part tables.
  // We will clone a new handler to use multi-cursor. The invoker [mysql_secondary_load_unload] holds the refcnt
  // of the share, so we don't need to worry about it being released.
  if (open_table_from_share(m_thd, share, share-&amp;gt;path.str, 0, SKIP_NEW_HANDLER, 0, m_table, false, nullptr)) {
    return true;
  }

  m_table-&amp;gt;in_use = m_thd;
  m_table-&amp;gt;alias_name_used = context-&amp;gt;m_table-&amp;gt;alias_name_used;
  m_table-&amp;gt;read_set = context-&amp;gt;m_table-&amp;gt;read_set;
  m_table-&amp;gt;write_set = context-&amp;gt;m_table-&amp;gt;write_set;

  return false;
}

bool PartitionLoadThreadContext::clone_handler(ha_innopart *file, const Rapid_load_context *context,
                                               std::mutex &amp;amp;clone_mutex) {
  std::lock_guard&amp;lt;std::mutex&amp;gt; lock(clone_mutex);
  THD *original_thd = context-&amp;gt;m_table-&amp;gt;in_use;
  context-&amp;gt;m_table-&amp;gt;in_use = m_thd;
  m_handler = static_cast&amp;lt;ha_innopart *&amp;gt;(file-&amp;gt;clone(context-&amp;gt;m_table-&amp;gt;s-&amp;gt;normalized_path.str, m_thd-&amp;gt;mem_root));
  context-&amp;gt;m_table-&amp;gt;in_use = original_thd;

  if (!m_handler) return true;

  m_handler-&amp;gt;change_table_ptr(m_table, m_table-&amp;gt;s);
  m_table-&amp;gt;file = m_handler;

  // Note: ha_open() is not needed because:
  // 1. ha_innopart::clone() inherits the open state from the source handler
  // 2. change_table_ptr() updates internal pointers while preserving the open state
  // 3. Partition-level operations (rnd_init_in_part/rnd_next_in_part) work directly

  return false;
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Robust Error Handling Mechanism: In a parallel system, the failure of any single task can affect the whole. We designed a “Fast-Fail” mechanism: when any partition task fails, its error is recorded and the overall load is aborted.&lt;/p&gt;

&lt;p&gt;Safe Resource Management: Utilize C++’s RAII (Resource Acquisition Is Initialization) idiom to ensure correct resource release. For example, the PartitionLoadHandlerLock class acquires the external lock for InnoDB in its constructor and automatically releases it in its destructor. This ensures the lock is always released, regardless of whether an exception is thrown during execution, thus avoiding deadlocks.&lt;br&gt;
III. Introduction to Core Implementation Details&lt;/p&gt;
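Before diving into the code, the RAII locking pattern just described can be illustrated with a minimal stand-alone sketch (`FakeHandler` and `ScopedExternalLock` are illustrative names, not ShannonBase code): the guard acquires the lock in its constructor and releases it in its destructor, so the release happens on every exit path, including exceptions.

```cpp
#include <stdexcept>

// Hypothetical stand-in for a handler's external-lock API.
struct FakeHandler {
  int lock_depth = 0;
  void external_lock() { ++lock_depth; }
  void external_unlock() { --lock_depth; }
};

// RAII guard in the spirit of PartitionLoadHandlerLock: the constructor
// acquires the lock, the destructor releases it on every exit path.
class ScopedExternalLock {
 public:
  explicit ScopedExternalLock(FakeHandler &h) : m_h(h) { m_h.external_lock(); }
  ~ScopedExternalLock() { m_h.external_unlock(); }
  ScopedExternalLock(const ScopedExternalLock &) = delete;
  ScopedExternalLock &operator=(const ScopedExternalLock &) = delete;

 private:
  FakeHandler &m_h;
};

// Even when the guarded work throws, stack unwinding runs the guard's
// destructor before the catch block, so the lock is already released.
inline bool throws_but_unlocks(FakeHandler &h) {
  try {
    ScopedExternalLock guard(h);
    throw std::runtime_error("simulated partition load failure");
  } catch (const std::runtime_error &) {
    return h.lock_depth == 0;  // lock was released by the guard
  }
}
```

Deleting the copy operations is part of the pattern: a guard that could be copied might release the same lock twice.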

&lt;p&gt;Based on the above design, let’s look at the specific implementation in the code:&lt;/p&gt;

&lt;p&gt;Task Creation and Distribution: The load_innodbpart_parallel function first iterates through context-&amp;gt;m_extra_info.m_partition_infos, creating a partition_load_task_t task structure for each partition and storing it in a tasks vector. An atomic counter std::atomic_size_t task_idx serves as the shared task index, allowing all worker threads to safely claim tasks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unsigned int num_threads = std::thread::hardware_concurrency() * 0.8;
  if (num_threads == 0) num_threads = SHANNON_PARTS_PARALLEL;
  std::vector&amp;lt;partition_load_task_t&amp;gt; tasks;
  tasks.reserve(context-&amp;gt;m_extra_info.m_partition_infos.size());
  for (auto &amp;amp;[part_name, part_id] : context-&amp;gt;m_extra_info.m_partition_infos) {
    partition_load_task_t task;
    task.part_id = part_id;
    task.part_key = part_name + "#" + std::to_string(part_id);
    task.result = ShannonBase::SHANNON_SUCCESS;
    task.rows_loaded = 0;
    tasks.push_back(std::move(task));
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thread Pool and Work Unit: A temporary thread pool is created via std::vector&amp;lt;std::thread&amp;gt; workers_pool. The core logic executed by each thread is encapsulated in the worker_func lambda expression.&lt;br&gt;
Thread Context Initialization (PartitionLoadThreadContext): At the beginning of worker_func, each thread creates an instance of PartitionLoadThreadContext. Within its initialize() and clone_handler() methods, the crucial operations of creating an independent THD and cloning the ha_innopart handler are performed, ensuring that the execution environments of threads do not interfere with each other.&lt;br&gt;
Single Partition Loading Logic (load_one_partition): This lambda function is the actual data mover. It receives a partition task and executes the following steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;auto load_one_partition = [&amp;amp;](partition_load_task_t &amp;amp;task,
                                ha_innopart *task_handler) -&amp;gt; int {  // Lambda: load a partition.
    int result{ShannonBase::SHANNON_SUCCESS};
    task.rows_loaded = 0;
    if (task_handler == nullptr) {
      std::lock_guard&amp;lt;std::mutex&amp;gt; lock(error_mutex);
      task.error_msg = "Handler clone is null for partition " + std::to_string(task.part_id);
      task.result = HA_ERR_GENERIC;
      return HA_ERR_GENERIC;
    }
    Rapid_load_context::extra_info_t::m_active_part_key = task.part_key;
    ....
    if (task_handler-&amp;gt;inited == handler::NONE &amp;amp;&amp;amp; task_handler-&amp;gt;rnd_init_in_part(task.part_id, true)) {
       ....
    }
    part_initialized = true;
    int tmp{HA_ERR_GENERIC};
    std::unique_ptr&amp;lt;uchar[]&amp;gt; rec_buff = std::make_unique&amp;lt;uchar[]&amp;gt;(context-&amp;gt;m_table-&amp;gt;s-&amp;gt;rec_buff_length);
    memset(rec_buff.get(), 0, context-&amp;gt;m_table-&amp;gt;s-&amp;gt;rec_buff_length);
    while ((tmp = task_handler-&amp;gt;rnd_next_in_part(task.part_id, rec_buff.get())) != HA_ERR_END_OF_FILE) {
      if (tmp == HA_ERR_KEY_NOT_FOUND) break;
      ....
      auto partition_ptr = part_tb_ptr-&amp;gt;get_partition(task.part_key);
      ...  
      // parttable is shared_ptr/unique_ptr to PartTable
      if (partition_ptr-&amp;gt;write(context, rec_buff.get(), context-&amp;gt;m_table-&amp;gt;s-&amp;gt;reclength, col_offsets.data(),
                               context-&amp;gt;m_table-&amp;gt;s-&amp;gt;fields, null_byte_offsets.data(), null_bitmasks.data())) {
        std::lock_guard&amp;lt;std::mutex&amp;gt; lock(error_mutex);
        task.error_msg = "load data from " + sch_name + "." + table_name + " to imcs failed";
        task.result = HA_ERR_GENERIC;
        return HA_ERR_GENERIC;
      }
      memset(rec_buff.get(), 0, context-&amp;gt;m_table-&amp;gt;s-&amp;gt;rec_buff_length);
      task.rows_loaded++;
      if (tmp == HA_ERR_RECORD_DELETED &amp;amp;&amp;amp; !context-&amp;gt;m_thd-&amp;gt;killed) continue;
    }
    task.result = ShannonBase::SHANNON_SUCCESS;
    return result;
  };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lock and Transaction Management (PartitionLoadHandlerLock): Within worker_func, the instantiation of PartitionLoadHandlerLock automatically performs the ha_external_lock(m_thd, F_RDLCK) operation, applying the necessary lock for data reading. When worker_func ends, the lock is automatically released via its destructor.&lt;br&gt;
IV. Summary&lt;/p&gt;

&lt;p&gt;The partition-based parallel loading scheme introduced in this article successfully transforms a time-consuming serial task into an efficient parallel workflow. Through carefully designed task granularity, thread model, resource isolation, and error handling mechanisms, this implementation not only significantly improves data loading performance and fully unleashes the potential of modern multi-core hardware but also ensures operational stability and security within the database environment.&lt;/p&gt;

&lt;p&gt;Practice has proven that this architecture has excellent scalability: as the number of CPU cores and table partitions increases, the throughput of data loading can grow almost linearly. It serves as a typical example of combining modern C++ parallel programming techniques with database kernel knowledge, providing a mature, efficient, and reliable engineering solution for the rapid import of massive data.&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>mysqlheatwave</category>
      <category>opensource</category>
      <category>ai</category>
    </item>
    <item>
      <title>ShannonBase — The Next-Gen HTAP Database for the AI Era</title>
      <dc:creator>ShannonData.AI</dc:creator>
      <pubDate>Tue, 16 Sep 2025 02:28:51 +0000</pubDate>
      <link>https://forem.com/shannonbase/shannonbase-the-next-gen-htap-database-for-the-ai-era-305</link>
      <guid>https://forem.com/shannonbase/shannonbase-the-next-gen-htap-database-for-the-ai-era-305</guid>
      <description>&lt;p&gt;ShannonBase (by Shannon Data AI) is a MySQL-compatible HTAP (Hybrid Transactional/Analytical Processing) database designed and optimized for modern big-data and AI workloads. Think of it as "MySQL for the AI era": it keeps familiar SQL and operational semantics while adding native embedding/vector support, built-in machine learning, a columnar in-memory engine, and a lightweight JavaScript runtime. The result is a unified platform where OLTP, OLAP, vector search and ML workflows can run together with minimal data movement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Shannon-Data/ShannonBase" rel="noopener noreferrer"&gt;https://github.com/Shannon-Data/ShannonBase&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Design Principles&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero Data Movement: keep data, embeddings, models and inference as close to storage as possible - inside the database - to reduce latency, cost and operational complexity.&lt;/li&gt;
&lt;li&gt;Native ML &amp;amp; Vector Support: provide first-class vector types, embedding pipelines and in-DB ML training/inference primitives.&lt;/li&gt;
&lt;li&gt;HTAP with Intelligent Routing: combine row and column engines and route work to the best engine via cost-based and ML-driven decisions.&lt;/li&gt;
&lt;li&gt;SQL-First Developer Experience: enable data scientists and application engineers to use SQL (and optional JS stored procedures) rather than stitching many systems together.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Architecture Overview&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;InnoDB + Rapid (IMCS Column Store)
InnoDB (row store): handles transactional, write-heavy OLTP workloads and durability.&lt;/li&gt;
&lt;li&gt;Rapid (IMCS - In-Memory Column Store): a columnar, memory-optimized engine for analytics, aggregations and vector/semantic search.&lt;/li&gt;
&lt;li&gt;Intelligent Workload Routing: queries are dispatched to InnoDB or Rapid using a cost model and optional ML models that learn which engine yields better performance for a given query pattern.&lt;/li&gt;
&lt;li&gt;Version Linking &amp;amp; MVCC: Rapid supports MVCC via a version-linking mechanism so analytical reads observe consistent snapshots while writes occur in InnoDB.&lt;/li&gt;
&lt;li&gt;Redo-log Synchronization: InnoDB changes are applied to Rapid by replaying redo logs (synchronously), keeping both engines consistent without ETL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Multimodal Data Types&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structured data: classic SQL columns and indexing.&lt;/li&gt;
&lt;li&gt;JSON: efficient JSON storage and querying (MySQL-style semantics, extended where needed).&lt;/li&gt;
&lt;li&gt;GIS: spatial types and ST_* functions for location queries.&lt;/li&gt;
&lt;li&gt;VECTOR: a native vector column type plus helper functions (e.g., Distance()) for embeddings and similarity search.&lt;/li&gt;
&lt;/ul&gt;
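
&lt;p&gt;A small sketch of the VECTOR type in use; the &lt;code&gt;STRING_TO_VECTOR&lt;/code&gt; and &lt;code&gt;DISTANCE&lt;/code&gt; function names follow MySQL 9 / HeatWave conventions and are assumptions here, not confirmed ShannonBase API:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A table with a native 3-dimensional vector column
CREATE TABLE docs (
  id INT PRIMARY KEY,
  content TEXT,
  emb VECTOR(3)
);

INSERT INTO docs VALUES
  (1, 'hello world', STRING_TO_VECTOR('[0.1, 0.2, 0.3]'));

-- Nearest-neighbor search ordered by cosine distance
SELECT id, content
FROM docs
ORDER BY DISTANCE(emb, STRING_TO_VECTOR('[0.1, 0.2, 0.25]'), 'COSINE')
LIMIT 5;
&lt;/code&gt;&lt;/pre&gt;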

&lt;p&gt;&lt;strong&gt;Native Machine Learning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embedded model runtime: integrates LightGBM (and optionally XGBoost or other engines) so you can train and infer inside the DB.&lt;/li&gt;
&lt;li&gt;Stored procedures for ML: sys.ML_TRAIN, sys.ML_PREDICT_ROW, sys.ML_MODEL_LOAD, etc. - train models on DB tables and run predictions in place.&lt;/li&gt;
&lt;li&gt;Model imports: load pre-trained models to save training time and standardize inference.&lt;/li&gt;
&lt;li&gt;ONNX/ONNXRuntime support: run portable models (including small LLMs or local ONNX LLMs) where appropriate.&lt;/li&gt;
&lt;/ul&gt;
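
&lt;p&gt;Using the routines listed above, a train-then-predict round trip might look like the following; the routine names come from the list, while the exact argument shapes are assumed to mirror HeatWave's AutoML signatures:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Train a classifier on a table (HeatWave-style signature assumed)
CALL sys.ML_TRAIN('mydb.train_data', 'label',
                  JSON_OBJECT('task', 'classification'), @model);

-- Load the model into memory, then run a single-row prediction in place
CALL sys.ML_MODEL_LOAD(@model, NULL);
SELECT sys.ML_PREDICT_ROW(
  JSON_OBJECT('feature_a', 1.5, 'feature_b', 'x'),
  @model, NULL);
&lt;/code&gt;&lt;/pre&gt;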

&lt;p&gt;&lt;strong&gt;Retrieval-Augmented Generation (RAG) &amp;amp; Embeddings&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embedding routines: built-in embedding APIs and stored procedures to create and manage embedding tables.&lt;/li&gt;
&lt;li&gt;Vector stores &amp;amp; RAG helpers: utilities to embed tables, perform ANN / nearest-neighbor searches, and assemble RAG inputs for LLMs (local or remote).&lt;/li&gt;
&lt;li&gt;LLM integration: run generation via integrated routines or attach an ONNX LLM runtime for on-prem inference.&lt;/li&gt;
&lt;/ul&gt;
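
&lt;p&gt;As an illustration of the embedding and RAG helpers, here is a hypothetical flow; the routine names below (&lt;code&gt;sys.ML_EMBED_TABLE&lt;/code&gt;, &lt;code&gt;sys.ML_RAG&lt;/code&gt;) are modeled on HeatWave's GenAI routines and should be checked against the ShannonBase documentation:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Embed a text column into a vector column (routine names hypothetical)
CALL sys.ML_EMBED_TABLE('mydb.docs.content', 'mydb.docs.content_emb',
                        JSON_OBJECT('model_id', 'all_minilm_l12_v2'));

-- Retrieve matching context and generate an answer with an attached LLM
CALL sys.ML_RAG('How do I load a table into Rapid?', @answer,
                JSON_OBJECT('vector_store', JSON_ARRAY('mydb.docs')));
SELECT @answer;
&lt;/code&gt;&lt;/pre&gt;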

&lt;p&gt;&lt;strong&gt;Multilingual Stored Procedures: JerryScript&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JavaScript engine: JerryScript embedded for writing stored procedures and UDFs in JavaScript as an alternative to pure SQL, lowering the barrier for custom logic and preprocessing.&lt;/li&gt;
&lt;/ul&gt;
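
&lt;p&gt;A minimal sketch of a JavaScript routine, assuming ShannonBase accepts MySQL 9-style &lt;code&gt;LANGUAGE JAVASCRIPT&lt;/code&gt; syntax with &lt;code&gt;$$&lt;/code&gt; delimiters:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A stored function whose body is plain JavaScript,
-- executed by the embedded JerryScript engine
CREATE FUNCTION normalize_name(s VARCHAR(255))
RETURNS VARCHAR(255) LANGUAGE JAVASCRIPT AS $$
  // Trim whitespace and lower-case the input
  return s.trim().toLowerCase();
$$;

SELECT normalize_name('  Alice  ');  -- 'alice'
&lt;/code&gt;&lt;/pre&gt;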

&lt;p&gt;&lt;strong&gt;Benefits &amp;amp; Target Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unified platform for OLTP, OLAP, vector search and ML - reduces system sprawl.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Low latency for inference and semantic search, because embeddings and models live next to the data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Familiar SQL surface with optional JS for custom logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HTAP efficiency via the in-memory column engine for analytics and the row engine for transactional integrity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Target Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise knowledge bases and RAG systems&lt;/li&gt;
&lt;li&gt;Real-time personalization and recommendation (online training &amp;amp; inference)&lt;/li&gt;
&lt;li&gt;Interactive analytics and BI over fresh transactional data&lt;/li&gt;
&lt;li&gt;Spatial analytics and location-aware services&lt;/li&gt;
&lt;li&gt;Simplified MLOps for small/medium in-DB models and feature stores&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
ShannonBase brings together a pragmatic set of features to address the needs of modern AI and analytics applications: an HTAP architecture with intelligent routing, native vector and embedding support, in-database ML training and inference, and a small JavaScript runtime for programmable extensions. For organizations that want to reduce data movement, simplify the analytics + ML stack, and deliver low-latency semantic search and in-place inference, ShannonBase offers a compelling, SQL-centric path forward.&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>machinelearning</category>
      <category>mysqlheatwave</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
