<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ryan Xu</title>
    <description>The latest articles on Forem by Ryan Xu (@oninebx).</description>
    <link>https://forem.com/oninebx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2205047%2F17487406-1d79-4f24-a6cf-00341676f92e.jpg</url>
      <title>Forem: Ryan Xu</title>
      <link>https://forem.com/oninebx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/oninebx"/>
    <language>en</language>
    <item>
      <title>A Lightweight, Plugin-Oriented ETL Engine for Data Synchronization Built on Akka.NET</title>
      <dc:creator>Ryan Xu</dc:creator>
      <pubDate>Mon, 12 Jan 2026 19:59:11 +0000</pubDate>
      <link>https://forem.com/oninebx/a-lightweight-plugin-oriented-etl-engine-for-data-synchronization-built-on-akkanet-20cd</link>
      <guid>https://forem.com/oninebx/a-lightweight-plugin-oriented-etl-engine-for-data-synchronization-built-on-akkanet-20cd</guid>
      <description>&lt;h2&gt;
  
  
  Data Synchronization Everywhere
&lt;/h2&gt;

&lt;p&gt;Data synchronization between business systems is extremely common in real-world software projects. As organizations grow, data no longer lives in a single system, turning consistency and timely data propagation into an ongoing engineering concern.&lt;/p&gt;

&lt;p&gt;In projects I’ve been involved in, data synchronization has been a recurring practical necessity. This ranges from synchronizing employee and organizational data from OA systems for authorization and identity management, to batch-oriented scenarios such as monthly imports of bank reconciliation CSV files. In more complex domains like healthcare, data synchronization is often a prerequisite for compliant data usage: only carefully selected subsets of PMS data are synchronized into a portal application, where they are replicated into both SQL Server and Elasticsearch to simplify system design and enable efficient, compliant querying.&lt;/p&gt;

&lt;p&gt;Across multiple projects, data synchronization repeatedly surfaced as a cross-cutting concern. Instead of addressing it through ad-hoc, project-specific solutions, I began designing a lightweight and flexible synchronization engine. It provides a configurable, observable, and extensible foundation for building business-specific workflows, while avoiding the complexity and cost of full-scale ETL platforms. The architecture is intentionally open, enabling customization where domain logic is required, yet keeping the core simple and focused.&lt;/p&gt;

&lt;h2&gt;
  
  
  Motivation and Design Goals
&lt;/h2&gt;

&lt;p&gt;Akka.NET actors are isolated, stateful, and message-driven, and are organized under supervisors that monitor and manage failures. This provides &lt;strong&gt;concurrency, fault tolerance, and predictable recovery&lt;/strong&gt;, making actors ideal for building reliable and observable data synchronization workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concurrency Model in Data Synchronization
&lt;/h3&gt;

&lt;p&gt;Data synchronization workflows can be modeled as a set of &lt;strong&gt;pipelines&lt;/strong&gt;, where each pipeline represents a distinct category of data or synchronization use case. Different data types often follow different rules, schedules, and destinations, making this separation both natural and practical.&lt;/p&gt;

&lt;p&gt;Within each pipeline, multiple &lt;strong&gt;workers&lt;/strong&gt; process the same type of data from different sources or partitions. This structure enables parallel execution while keeping responsibilities clear: pipelines define &lt;em&gt;what&lt;/em&gt; is synchronized, and workers define &lt;em&gt;how&lt;/em&gt; the workload is parallelized.&lt;/p&gt;
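
&lt;p&gt;The pipeline/worker split can be sketched as follows. This is an illustrative JavaScript sketch only: the real engine is built on Akka.NET actors, and names like &lt;code&gt;runPipeline&lt;/code&gt; are hypothetical, not its actual API.&lt;/p&gt;

```javascript
// Illustrative sketch only -- the real engine is built on Akka.NET actors.
// A pipeline fixes *what* is synchronized; its workers split *how* across partitions.
function runPipeline(pipeline) {
  // In the engine each worker would be an actor running concurrently;
  // a plain map keeps this sketch synchronous and self-contained.
  return pipeline.partitions.map((partition) => pipeline.worker(partition));
}

// Hypothetical pipeline definition: one category of data, two sources.
const employeePipeline = {
  name: "employees",
  partitions: ["hr-system", "ad-directory"],
  worker: (partition) => `synced employees from ${partition}`,
};

runPipeline(employeePipeline);
```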

&lt;h3&gt;
  
  
  Plugin-Based Pipelines and Workers for ETL
&lt;/h3&gt;

&lt;p&gt;In the context of pipelines and workers, &lt;strong&gt;ETL&lt;/strong&gt; refers to the classic &lt;strong&gt;Extract, Transform, Load&lt;/strong&gt; process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Extract&lt;/strong&gt;: Workers pull data from various sources, such as files, databases, or APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transform&lt;/strong&gt;: Plugins apply business rules, validations, or data transformations to the extracted data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load&lt;/strong&gt;: Processed data is persisted into the target system, such as SQL or NoSQL databases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64wx5q73o1uw2o67l0vx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64wx5q73o1uw2o67l0vx.png" alt="etl architecture" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each pipeline represents a &lt;strong&gt;distinct category of data&lt;/strong&gt;, and its workers handle data from multiple sources or partitions. The ETL process is executed within each worker using a &lt;strong&gt;plugin-based design&lt;/strong&gt;, which makes the workflow modular, reusable, and easy to extend.&lt;br&gt;
By separating &lt;strong&gt;workflow orchestration (Pipeline)&lt;/strong&gt; from &lt;strong&gt;execution (Worker)&lt;/strong&gt; and &lt;strong&gt;processing logic (Plugin)&lt;/strong&gt;, this ETL model allows concurrent, traceable, and maintainable data synchronization across diverse sources and destinations.&lt;/p&gt;
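
&lt;p&gt;A minimal sketch of the plugin contract, in illustrative JavaScript: &lt;code&gt;csvExtractor&lt;/code&gt;, &lt;code&gt;runWorker&lt;/code&gt;, and the other names here are hypothetical, not the engine's Akka.NET API.&lt;/p&gt;

```javascript
// Hypothetical plugin objects -- a sketch of the Extract/Transform/Load
// contract described above, not the engine's actual API.
const csvExtractor = {
  // Extract: pull raw records from one source/partition.
  extract: (source) => source.rows,
};
const nameTransformer = {
  // Transform: apply business rules to the extracted records.
  transform: (records) => records.map((r) => ({ ...r, name: r.name.trim().toUpperCase() })),
};
const arrayLoader = {
  // Load: persist the processed records into the target (an array here).
  load: (records, sink) => { sink.push(...records); return records.length; },
};

// A worker chains the three plugin stages for one source.
function runWorker({ extractor, transformer, loader }, source, sink) {
  const raw = extractor.extract(source);
  const shaped = transformer.transform(raw);
  return loader.load(shaped, sink);
}
```

Because each stage is a plugin, swapping the CSV extractor for an API extractor, or the array loader for a SQL loader, leaves the worker unchanged.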

&lt;p&gt;In addition to extract, transform, and load plugins, the pipeline also supports state persistence with a &lt;strong&gt;HistoryStore plugin&lt;/strong&gt;.&lt;br&gt;
This plugin is responsible for &lt;strong&gt;persisting ETL execution state&lt;/strong&gt;, such as cursors, offsets, checkpoints, or watermarks, during data synchronization.&lt;br&gt;
Typical use cases include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recording the last processed timestamp or ID&lt;/li&gt;
&lt;li&gt;Persisting file offsets or row numbers&lt;/li&gt;
&lt;li&gt;Supporting incremental and resumable synchronization&lt;/li&gt;
&lt;li&gt;Enabling safe retries and recovery after failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes ETL workflows &lt;strong&gt;stateful, resumable, and fault-tolerant&lt;/strong&gt;, without coupling state management to business logic.&lt;/p&gt;
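
&lt;p&gt;As a rough illustration of the HistoryStore contract, here is a hypothetical in-memory version in JavaScript; a real plugin would persist checkpoints to durable storage such as a database or file.&lt;/p&gt;

```javascript
// Hypothetical in-memory HistoryStore -- illustrates the state-persistence
// contract (cursors, offsets, checkpoints), not the engine's actual API.
function createHistoryStore() {
  const state = new Map();
  return {
    // Persist the latest checkpoint for a pipeline (e.g. last processed ID).
    save: (pipeline, checkpoint) => { state.set(pipeline, checkpoint); },
    // Resume from the stored checkpoint, or a fallback on the first run.
    load: (pipeline, fallback = null) => state.has(pipeline) ? state.get(pipeline) : fallback,
  };
}

const history = createHistoryStore();
history.save("employees", { lastProcessedId: 1042 });
history.load("employees");                          // resumes from the checkpoint
history.load("invoices", { lastProcessedId: 0 });   // first run: uses the fallback
```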

&lt;h3&gt;
  
  
  Monitoring and Scheduling Pipelines
&lt;/h3&gt;

&lt;p&gt;In projects I’ve worked on, &lt;strong&gt;data synchronization&lt;/strong&gt; was typically monitored by &lt;strong&gt;manually analyzing logs&lt;/strong&gt;.&lt;br&gt;
While effective for debugging, this approach does not scale well: logs are fragmented, hard to correlate across concurrent workers, and heavily reliant on human interpretation.&lt;/p&gt;

&lt;p&gt;To address this, &lt;strong&gt;observability&lt;/strong&gt; was considered a core design concern. Synchronization progress and execution states are tracked in a structured and visualized way, reducing reliance on log inspection and making pipelines easier to monitor and operate.&lt;/p&gt;

&lt;p&gt;The engine introduces a set of synchronization-specific events (&lt;strong&gt;SyncEvents&lt;/strong&gt;) to make pipeline execution observable. These events represent meaningful lifecycle changes and progress updates, and are streamed to the frontend in real time via &lt;strong&gt;SignalR&lt;/strong&gt;. By exposing structured execution signals instead of raw logs, synchronization workflows become easier to monitor, track, and reason about.&lt;/p&gt;

&lt;p&gt;Another key feature is &lt;strong&gt;pipeline scheduling&lt;/strong&gt;, designed to free business users from repetitive manual execution (a frequent source of complaints in my experience). Each pipeline declares its own schedule using cron expressions, allowing the engine to &lt;strong&gt;autonomously execute synchronization tasks&lt;/strong&gt;. Combined with real-time monitoring, this makes pipeline runs both self-managed and visually observable.&lt;/p&gt;
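
&lt;p&gt;A minimal sketch of how a declared cron schedule could drive execution. This JavaScript matcher supports only a tiny cron subset and is hypothetical; a production engine would use a full cron library.&lt;/p&gt;

```javascript
// Minimal cron field matcher -- supports only "*", plain numbers, and "*/n" steps.
function fieldMatches(field, value) {
  if (field === "*") return true;
  if (field.startsWith("*/")) return value % Number(field.slice(2)) === 0;
  return Number(field) === value;
}

// Five-field expression: minute hour day-of-month month day-of-week.
function cronMatches(expr, date) {
  const values = [date.getMinutes(), date.getHours(), date.getDate(), date.getMonth() + 1, date.getDay()];
  return expr.split(/\s+/).every((field, i) => fieldMatches(field, values[i]));
}

// Hypothetical pipeline declaring its own schedule (02:00 every day).
const csvPipeline = { name: "csv-to-sqlite", schedule: "0 2 * * *", run: () => "syncing" };

// The engine would poll once per minute, e.g. setInterval(() => tick(new Date()), 60000).
function tick(now) {
  if (cronMatches(csvPipeline.schedule, now)) return csvPipeline.run();
  return null;
}
```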

&lt;h2&gt;
  
  
  MVP: CSV to SQLite
&lt;/h2&gt;

&lt;p&gt;I’ve built an MVP with plugin-based pipelines and workers, autonomous scheduling, and real-time monitoring. Inspired by real-world scenarios such as importing CSV files into business-system databases, it already puts these design principles into practice, and I plan to develop more plugins to support additional use cases as the system evolves.&lt;/p&gt;

&lt;p&gt;Beyond CSV-to-SQLite synchronization, it also supports a range of features aligned with its core design principles, including flexible pipeline configuration, an extensible plugin architecture, and a SignalR-based real-time communication protocol with event mapping and processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/oninebx/AkkaSync/releases/tag/assets-0.1.0-mvp.1" rel="noopener noreferrer"&gt;Try MVP now&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftw3nrmq0kzdjbb6pj6fx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftw3nrmq0kzdjbb6pj6fx.png" alt="akkasync mvp dashboard" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr1guivedwz3v4efvyml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr1guivedwz3v4efvyml.png" alt="akkasync mvp sqlite" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/oninebx/AkkaSync" rel="noopener noreferrer"&gt;https://github.com/oninebx/AkkaSync&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>csharp</category>
      <category>dataengineering</category>
      <category>etl</category>
    </item>
    <item>
      <title>Building a Smart Editor: Automatically Detect URLs and Convert Them to Hyperlinks</title>
      <dc:creator>Ryan Xu</dc:creator>
      <pubDate>Sun, 13 Oct 2024 11:47:16 +0000</pubDate>
      <link>https://forem.com/oninebx/building-a-smart-editor-automatically-detect-urls-and-convert-them-to-hyperlinks-ilg</link>
      <guid>https://forem.com/oninebx/building-a-smart-editor-automatically-detect-urls-and-convert-them-to-hyperlinks-ilg</guid>
      <description>&lt;p&gt;This is an idea I came up with at work to improve the user experience. It involves implementing a text box that automatically detects URLs and converts them into hyperlinks as the user types(Source code &lt;a href="https://github.com/oninebx/AutolinkEditor" rel="noopener noreferrer"&gt;Github/AutolinkEditor&lt;/a&gt;). This cool feature is somewhat tricky to implement, and the following issues must be addressed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accurately detect URLs within the text&lt;/li&gt;
&lt;li&gt;Maintain the cursor position after converting the URL string into a hyperlink&lt;/li&gt;
&lt;li&gt;Update the target URL accordingly when users edit the hyperlink text&lt;/li&gt;
&lt;li&gt;Preserve line breaks in the text&lt;/li&gt;
&lt;li&gt;Support pasting rich text while retaining both text and line breaks, with the text style matching the format of the text box.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y7314mcbkfafgigibun.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y7314mcbkfafgigibun.gif" alt="Image description" width="480" height="234"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
 if(target &amp;amp;&amp;amp; target.contentEditable !== undefined){ // contentEditable is a string ("true"/"false"/"inherit"), so a bare truthiness check would also accept "false"
  ...
  target.contentEditable = true;
  target.focus();
 }
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The conversion is driven by “onkeyup” and “onpaste” events. To reduce the frequency of conversions, a delay mechanism is implemented with “setTimeout”, where the conversion logic is triggered only after the user stops typing for 1 second by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;idle(func, delay = 1000) {
      ...
      const idleHandler = function(...args) {
        if(this[timer]){
          clearTimeout(this[timer]);
          this[timer] = null;
        }
        this[timer] = setTimeout(() =&amp;gt; {
          func(...args);
          this[timer] = null;
        }, delay);

      };
      return idleHandler.bind(this);
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Identify and extract URLs with regular expression
&lt;/h2&gt;

&lt;p&gt;I didn’t intend to spend time crafting the perfect regex for matching URLs, so I found a usable one via a search engine. If anyone has a better one, feel free to let me know!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
const URLRegex = /^(https?:\/\/(([a-zA-Z0-9]+-?)+[a-zA-Z0-9]+\.)+(([a-zA-Z0-9]+-?)+[a-zA-Z0-9]+))(:\d+)?(\/.*)?(\?.*)?(#.*)?$/;
const URLInTextRegex = /(https?:\/\/(([a-zA-Z0-9]+-?)+[a-zA-Z0-9]+\.)+(([a-zA-Z0-9]+-?)+[a-zA-Z0-9]+))(:\d+)?(\/.*)?(\?.*)?(#.*)?/;
...

if(URLRegex.test(text)){
  result += `&amp;lt;a href="${escapeHtml(text)}"&amp;gt;${escapeHtml(text)}&amp;lt;/a&amp;gt;`;
}else {
  // text is not a single URL itself, but may contain one or more URLs
  let textContent = text;
  let match;
  while ((match = URLInTextRegex.exec(textContent)) !== null) {
    const url = match[0];
    const beforeUrl = textContent.slice(0, match.index);
    const afterUrl = textContent.slice(match.index + url.length);

    result += escapeHtml(beforeUrl);
    result += `&amp;lt;a href="${escapeHtml(url)}"&amp;gt;${escapeHtml(url)}&amp;lt;/a&amp;gt;`;
    textContent = afterUrl;
  }
  result += escapeHtml(textContent); // Append any remaining text
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Restoring the cursor position after conversion
&lt;/h2&gt;

&lt;p&gt;Using the &lt;strong&gt;document.createRange&lt;/strong&gt; and &lt;strong&gt;window.getSelection&lt;/strong&gt; functions, the editor calculates the cursor position within the node’s text. Since converting URLs into hyperlinks only adds tags without modifying the text content, the cursor can be restored from the previously recorded position.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;saveSelection: target =&amp;gt; {
      ...
          const range = window.getSelection().getRangeAt(0);
          var preSelectionRange = range.cloneRange();
          preSelectionRange.selectNodeContents(target);
          preSelectionRange.setEnd(range.startContainer, range.startOffset);
          var start = preSelectionRange.toString().length;

          return {
              start: start,
              end: start + range.toString().length
          }
        ...
    }
...
restoreSelection: (target, position) =&amp;gt; {
     ...
        const range = document.createRange();
        range.setStart(target, 0);
        range.collapse(true);

        let foundStart = false;
        let stop = false;
        let node;
        let charIndex = 0;
        const nodeStack = [target];
        while (!stop &amp;amp;&amp;amp; (node = nodeStack.pop())) {
          switch (node.nodeType) {
            case 1: // element
              for (let i = node.childNodes.length - 1; i &amp;gt;= 0; i--) {
                nodeStack.push(node.childNodes[i]);
              }
              break;
            case 3: // text
              const nextCharIndex = charIndex + node.length;
              if (!foundStart &amp;amp;&amp;amp; position.start &amp;gt;= charIndex &amp;amp;&amp;amp; position.start &amp;lt;= nextCharIndex) {
                range.setStart(node, position.start - charIndex);
                foundStart = true;
              }
              if (foundStart &amp;amp;&amp;amp; position.end &amp;gt;= charIndex &amp;amp;&amp;amp; position.end &amp;lt;= nextCharIndex) {
                range.setEnd(node, position.end - charIndex);
                stop = true;
              }
              charIndex = nextCharIndex;
              break;
          }
        } // close the DOM-walking while loop before restoring the selection

        if (foundStart) {
          const selection = window.getSelection();
          selection.removeAllRanges();
          selection.addRange(range);
        }
  ...
    }
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more details, please read &lt;a href="https://stackoverflow.com/questions/17678843/cant-restore-selection-after-html-modify-even-if-its-the-same-html" rel="noopener noreferrer"&gt;Can’t restore selection after HTML modify, even if it’s the same HTML&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Update or remove when editing hyperlink
&lt;/h2&gt;

&lt;p&gt;Sometimes we create hyperlinks where the text and the target URL are the same (called ‘simple hyperlinks’ here). For example, the following HTML shows this kind of hyperlink.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;a href="http://www.example.com"&amp;gt;http://www.example.com&amp;lt;/a&amp;gt;&lt;/code&gt;&lt;br&gt;
For such links, when the hyperlink text is modified, the target URL should also be automatically updated to keep them in sync. To make the logic more robust, the link will be converted back to plain text when the hyperlink text is no longer a valid URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;handleAnchor: anchor =&amp;gt; {
  ...
    const text = anchor.textContent;
    if(URLRegex.test(text)){
      // still a valid URL: rebuild a clean anchor from the current text
      return nodeHandler.makePlainAnchor(anchor);
    }else {
      // no longer a valid URL: convert the link back to plain text
      return anchor.textContent;
    }
  ...
}
...
makePlainAnchor: target =&amp;gt; {
  ...
  const result = document.createElement("a");
  result.href = target.href;
  result.textContent = target.textContent;
  return result;
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To implement this feature, I store the ‘simple hyperlinks’ in an object and update them in real time during the onpaste, onkeyup, and onfocus events, so that the above logic only handles simple hyperlinks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;target.onpaste = initializer.idle(e =&amp;gt; {
  ...
  inclusion = contentConvertor.indexAnchors(target);
}, 0);

const handleKeyup = initializer.idle(e =&amp;gt; {
  ...
  inclusion = contentConvertor.indexAnchors(target);
  ...
}, 1000);

target.onkeyup = handleKeyup;
target.onfocus = e =&amp;gt; {
  inclusion = contentConvertor.indexAnchors(target);
}

...

indexAnchors(target) {
  const inclusion = {};
  ...
  const anchorTags = target.querySelectorAll('a');
  if(anchorTags) {
    const idPrefix = target.id === "" ? target.dataset.id : target.id;

    anchorTags.forEach((anchor, index) =&amp;gt; {
      const anchorId = anchor.dataset.id ?? `${idPrefix}-anchor-${index}`;
      if(anchor.href.replace(/\/+$/, '').toLowerCase() === anchor.textContent.toLowerCase()) {
        if(!anchor.dataset.id){
          anchor.setAttribute('data-id', anchorId);
        }
        inclusion[anchorId] = anchor.href;
      }
    });
  }
  return Object.keys(inclusion).length === 0 ? null : inclusion;
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Handle line breaks and styles
&lt;/h2&gt;

&lt;p&gt;When handling pasted rich text, the editor automatically restyles the pasted content with its own text styles. To maintain formatting, &lt;code&gt;&amp;lt;br&amp;gt;&lt;/code&gt; tags in the rich text and all hyperlinks are preserved. Handling typed text is more complex: when the user presses Enter to add a new line, the browser inserts a div element into the editor, which the editor replaces with a &lt;code&gt;&amp;lt;br&amp;gt;&lt;/code&gt; to maintain the formatting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node.childNodes.forEach(child =&amp;gt; {
  if (child.nodeType === 1) { 
    if(child.tagName === 'A') { // anchor element
      const key = child.id === "" ? child.dataset.id : child.id;

      if(inclusion &amp;amp;&amp;amp; inclusion[key]){
        const disposedAnchor = handleAnchor(child);
        if(disposedAnchor){
          if(disposedAnchor instanceof HTMLAnchorElement) {
            disposedAnchor.href = disposedAnchor.textContent;
          }
          result += disposedAnchor.outerHTML ?? disposedAnchor;
        }
      }else {
        result += makePlainAnchor(child)?.outerHTML ?? "";
      }
    }else { 
      result += compensateBR(child) + this.extractTextAndAnchor(child, inclusion, nodeHandler);
    }
  } 
});

...
const ElementsOfBR = new Set([
  'block',
  'block flex',
  'block flow',
  'block flow-root',
  'block grid',
  'list-item',
]);
compensateBR: target =&amp;gt; {
  if(target &amp;amp;&amp;amp; 
    (target instanceof HTMLBRElement || ElementsOfBR.has(window.getComputedStyle(target).display))){
      return "&amp;lt;br /&amp;gt;";
  }
  return "";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;This article describes some practical techniques used to implement a simple editor: common events such as onkeyup and onpaste, using Selection and Range to restore the cursor position, and walking an element's child nodes to implement the editor's behavior. While regular expressions are not the focus of this article, a more complete regex would make the editor more robust at identifying specific strings (the regex used here remains open for modification). You can access the source code via &lt;a href="https://github.com/oninebx/AutolinkEditor" rel="noopener noreferrer"&gt;Github/AutolinkEditor&lt;/a&gt; for more details if it is helpful for your project.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>opensource</category>
      <category>github</category>
    </item>
  </channel>
</rss>
