<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kiswono Prayogo</title>
    <description>The latest articles on Forem by Kiswono Prayogo (@kokizzu).</description>
    <link>https://forem.com/kokizzu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F493162%2F50fd8591-3d58-4839-b5ef-b1ee66e7cbd7.jpeg</url>
      <title>Forem: Kiswono Prayogo</title>
      <link>https://forem.com/kokizzu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kokizzu"/>
    <language>en</language>
    <item>
      <title>OLLAMA with AMD GPU (ROCm)</title>
      <dc:creator>Kiswono Prayogo</dc:creator>
      <pubDate>Tue, 27 Feb 2024 18:40:06 +0000</pubDate>
      <link>https://forem.com/kokizzu/ollama-with-amd-gpu-rocm-1p3i</link>
      <guid>https://forem.com/kokizzu/ollama-with-amd-gpu-rocm-1p3i</guid>
      <description>&lt;p&gt;Today we're gonna test ollama (&lt;a href="http://kokizzu.blogspot.com/2023/12/benchmarking-llm-models.html"&gt;just like in the previous article&lt;/a&gt;) with an AMD GPU. To do this you'll need Docker, for example with this docker compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.7"

services:
  ollama:
    container_name: ollama
    image: ollama/ollama:0.1.22-rocm
    environment:
      HSA_OVERRIDE_GFX_VERSION: 10.3.0 # only if you are using 6600XT
    volumes:
      - /usr/share/ollama/.ollama:/root/.ollama # reuse existing model
      #- ./etc__resolv.conf:/etc/resolv.conf # if your dns sucks
    devices:
      - /dev/dri
      - /dev/kfd
    restart: unless-stopped

  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    container_name: ollama-webui
    ports:
      - "3122:8080"
    volumes:
      - ./ollama-webui:/app/backend/data
    environment:
      - 'OLLAMA_API_BASE_URL=http://ollama:11434/api'
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get a shell inside the running container, you just need to execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it `docker ps | grep ollama/ollama | cut -f 1 -d ' '` bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure that you already have &lt;a href="https://rocm.docs.amd.com/projects/install-on-linux/en/latest/"&gt;ROCm&lt;/a&gt; installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dpkg -l | grep rocm | cut -d ' ' -f 3
rocm-cmake
rocm-core
rocm-device-libs
rocm-hip-libraries
rocm-hip-runtime
rocm-hip-runtime-dev
rocm-hip-sdk
rocm-language-runtime
rocm-llvm
rocm-ocl-icd
rocm-opencl
rocm-opencl-runtime
rocm-smi-lib
rocminfo
$ cat /etc/apt/sources.list.d/* | grep -i 'rocm'
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/6.0/ubuntu jammy main
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/6.0 jammy main
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/6.0 jammy main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, in the previous article the same prompt took around 60s on CPU, but with the GPU it only takes 20-30s (2-3x faster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time ollama run codellama 'show me inplace mergesort using golang'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or from outside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time docker exec -it `docker ps | grep ollama/ollama | cut -f 1 -d ' '` ollama run codellama 'show me inplace mergesort using golang'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is long; the timing at the end:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;real    0m30.528s
CPU: 0.02s      Real: 21.07s    RAM: 25088KB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: the answer from codellama above is wrong: it's not an in-place merge sort, and even taken as a normal merge sort, the way it slices would overwrite the underlying array and produce a wrong result.&lt;/p&gt;
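&lt;p&gt;Besides the CLI, the same prompt can be sent through Ollama's HTTP API. A minimal Go sketch (note the compose file above doesn't publish port 11434 to the host, so you may need to add &lt;code&gt;11434:11434&lt;/code&gt; under the ollama service's ports first):&lt;/p&gt;

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// buildPayload constructs the JSON body for Ollama's /api/generate endpoint.
func buildPayload(model, prompt string) string {
	b, _ := json.Marshal(map[string]any{
		"model":  model,
		"prompt": prompt,
		"stream": false, // ask for one JSON response instead of a stream
	})
	return string(b)
}

func main() {
	body := buildPayload("codellama", "show me inplace mergesort using golang")
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", strings.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // JSON containing the "response" field
}
```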

&lt;p&gt;You can also visit &lt;a href="http://localhost:3122"&gt;http://localhost:3122&lt;/a&gt; for web UI.&lt;/p&gt;

&lt;p&gt;This article was originally posted &lt;a href="http://kokizzu.blogspot.com/2024/01/ollama-with-amd-gpu-rocm.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ollama</category>
      <category>llm</category>
      <category>amd</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Writing UDF for Clickhouse using Golang</title>
      <dc:creator>Kiswono Prayogo</dc:creator>
      <pubDate>Tue, 27 Feb 2024 18:36:37 +0000</pubDate>
      <link>https://forem.com/kokizzu/writing-udf-for-clickhouse-using-golang-2ko5</link>
      <guid>https://forem.com/kokizzu/writing-udf-for-clickhouse-using-golang-2ko5</guid>
      <description>&lt;p&gt;Today we're going to create a &lt;a href="https://clickhouse.com/docs/en/sql-reference/functions/udf"&gt;UDF&lt;/a&gt; (user-defined function) in Golang that can be called from a Clickhouse query. It parses a UUID v1 and returns its timestamp, since Clickhouse doesn't have such a function &lt;a href="https://github.com/ClickHouse/ClickHouse/issues/60148"&gt;for now&lt;/a&gt;. It's inspired by the &lt;a href="https://stackoverflow.com/questions/71236415/how-to-send-multiple-arguments-to-executable-udf-in-clickhouse"&gt;python&lt;/a&gt; version and uses the &lt;a href="https://clickhouse.com/docs/en/interfaces/formats#tabseparated"&gt;TabSeparated&lt;/a&gt; format (since it's the easiest to parse): the UDF reads input line by line, one row per line, with column/cell values separated by tabs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main
import (
    "bufio"
    "encoding/binary"
    "encoding/hex"
    "fmt"
    "os"
    "strings"
    "time"
)
func main() {
    scanner := bufio.NewScanner(os.Stdin)
    scanner.Split(bufio.ScanLines)
    for scanner.Scan() {
        id, _ := FromString(scanner.Text())
        fmt.Println(id.Time())
    }
}
func (me UUID) Nanoseconds() int64 {
    time_low := int64(binary.BigEndian.Uint32(me[0:4]))
    time_mid := int64(binary.BigEndian.Uint16(me[4:6]))
    time_hi := int64((binary.BigEndian.Uint16(me[6:8]) &amp;amp; 0x0fff))
    return int64((((time_low) + (time_mid &amp;lt;&amp;lt; 32) + (time_hi &amp;lt;&amp;lt; 48)) - epochStart) * 100)
}
func (me UUID) Time() time.Time {
    nsec := me.Nanoseconds()
    return time.Unix(nsec/1e9, nsec%1e9).UTC()
}
// code below Copyright (C) 2013 by Maxim Bublis &amp;lt;b@codemonkey.ru&amp;gt;
// see https://github.com/satori/go.uuid
// Difference in 100-nanosecond intervals between
// UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970).
const epochStart = 122192928000000000
// UUID representation compliant with specification
// described in RFC 4122.
type UUID [16]byte
// FromString returns UUID parsed from string input.
// Following formats are supported:
// "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}",
// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8"
func FromString(input string) (u UUID, err error) {
    s := strings.Replace(input, "-", "", -1)
    if len(s) == 41 &amp;amp;&amp;amp; s[:9] == "urn:uuid:" {
        s = s[9:]
    } else if len(s) == 34 &amp;amp;&amp;amp; s[0] == '{' &amp;amp;&amp;amp; s[33] == '}' {
        s = s[1:33]
    }
    if len(s) != 32 {
        err = fmt.Errorf("uuid: invalid UUID string: %s", input)
        return
    }
    b := []byte(s)
    _, err = hex.Decode(u[:], b)
    return
}
// Returns canonical string representation of UUID:
// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
func (u UUID) String() string {
    return fmt.Sprintf("%x-%x-%x-%x-%x",
        u[:4], u[4:6], u[6:8], u[8:10], u[10:])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compile it, put the binary with proper owner and permissions at /var/lib/clickhouse/user_scripts/uuid2timestr, and create /etc/clickhouse-server/uuid2timestr_function.xml (the file name must end with the _function.xml suffix) containing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;functions&amp;gt;
    &amp;lt;function&amp;gt;
        &amp;lt;type&amp;gt;executable&amp;lt;/type&amp;gt;
        &amp;lt;name&amp;gt;uuid2timestr&amp;lt;/name&amp;gt;
        &amp;lt;return_type&amp;gt;String&amp;lt;/return_type&amp;gt;
        &amp;lt;argument&amp;gt;
            &amp;lt;type&amp;gt;String&amp;lt;/type&amp;gt;
        &amp;lt;/argument&amp;gt;
        &amp;lt;format&amp;gt;TabSeparated&amp;lt;/format&amp;gt;
        &amp;lt;command&amp;gt;uuid2timestr&amp;lt;/command&amp;gt;
        &amp;lt;lifetime&amp;gt;0&amp;lt;/lifetime&amp;gt;
    &amp;lt;/function&amp;gt;
&amp;lt;/functions&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that you can restart Clickhouse (sudo systemctl restart clickhouse-server or sudo clickhouse restart, depending on how you installed it: apt or binary setup).&lt;/p&gt;

&lt;h2&gt;Usage&lt;/h2&gt;

&lt;p&gt;To make sure it's loaded, look for this line in the log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Trace&amp;gt; ExternalUserDefinedExecutableFunctionsLoader: Loading config file '/etc/clickhouse-server/uuid2timestr_function.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then just run a query using that function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT uuid2timestr('51038948-97ea-11ee-b7e0-52de156a77d8')

┌─uuid2timestr('51038948-97ea-11ee-b7e0-52de156a77d8')─┐
│ 2023-12-11 05:58:33.2391752 +0000 UTC                │
└──────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
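&lt;p&gt;The same function can also be called through Clickhouse's HTTP interface (default port 8123); a minimal Go sketch, assuming a local server with default settings and no password:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// queryURL builds a Clickhouse HTTP-interface URL for a given query.
func queryURL(base, query string) string {
	return base + "/?query=" + url.QueryEscape(query)
}

func main() {
	u := queryURL("http://localhost:8123",
		"SELECT uuid2timestr('51038948-97ea-11ee-b7e0-52de156a77d8')")
	resp, err := http.Get(u)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // the timestamp string returned by the UDF
}
```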



&lt;p&gt;This article was originally posted &lt;a href="http://kokizzu.blogspot.com/2024/02/writing-udf-for-clickhouse-using-golang.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>clickhouse</category>
      <category>go</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Dump/export Cassandra/BigQuery tables and import to Clickhouse</title>
      <dc:creator>Kiswono Prayogo</dc:creator>
      <pubDate>Tue, 27 Feb 2024 18:32:44 +0000</pubDate>
      <link>https://forem.com/kokizzu/dumpexport-cassandrabigquery-tables-and-import-to-clickhouse-2kb0</link>
      <guid>https://forem.com/kokizzu/dumpexport-cassandrabigquery-tables-and-import-to-clickhouse-2kb0</guid>
      <description>&lt;p&gt;Today we're gonna dump a Cassandra table and load it into Clickhouse. Cassandra is a wide-column database commonly used for OLTP since it has really good distribution capabilities (customizable replication factor, multi-cluster/region, clustered/partitioned by default -- so good for multitenant applications), but for analytics or other complex queries it becomes painful, even with ScyllaDB's materialized views (which are only good for recaps/summaries). To dump a Cassandra table, all you need to do is construct a query and use &lt;a href="https://docs.datastax.com/en/dsbulk/docs/reference/dsbulk-cmd.html"&gt;dsbulk&lt;/a&gt;, something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./dsbulk unload -delim '|' -k "KEYSPACE1" \
   -query "SELECT col1,col2,col3 FROM table1" -c csv \
   -u 'USERNAME1' -p 'PASSWORD1' \
   -b secure-bundle.zip | tr '\\' '"' |
    gzip -9 &amp;gt; table1_dump_YYYYMMDD.csv.gz ;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;tr&lt;/code&gt; command above is used to unescape backslashes, since &lt;code&gt;dsbulk&lt;/code&gt; exports CSV with nonstandard quoting (&lt;code&gt;\"&lt;/code&gt; instead of &lt;code&gt;""&lt;/code&gt;). After that you can restore it by running something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE table1 (
    col1 String,
    col2 Int64,
    col3 UUID
) ENGINE = ReplacingMergeTree()
ORDER BY (col1, col2);

SET format_csv_delimiter = '|';
SET input_format_csv_skip_first_lines = 1;

INSERT INTO table1
FROM INFILE 'table1_dump_YYYYMMDD.csv.gz'
FORMAT CSV;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
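&lt;p&gt;The quoting mismatch is easy to see with Go's encoding/csv, which follows RFC 4180 (quotes escaped by doubling, the format Clickhouse's CSV parser expects), unlike dsbulk's backslash-escaped output:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"

	"encoding/csv"
)

// rfc4180Quote shows how a field containing a double quote is encoded
// in standard CSV: the quote is doubled, not backslash-escaped.
func rfc4180Quote(field string) string {
	var sb strings.Builder
	w := csv.NewWriter(&sb)
	w.Write([]string{field})
	w.Flush()
	return sb.String()
}

func main() {
	fmt.Print(rfc4180Quote(`say "hi"`)) // prints "say ""hi"""
}
```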



&lt;h2&gt;BigQuery&lt;/h2&gt;

&lt;p&gt;Similar to Clickhouse, BigQuery is one of the best analytical engines (because of unlimited compute and massively parallel storage), but it comes with a cost: improper partitioning/clustering (and even a proper one, since it's limited to a single column, unlike Clickhouse which can use more) on a large table will cause huge scans (&lt;a href="https://cloud.google.com/bigquery/pricing"&gt;$6.25 per TiB&lt;/a&gt;) and burn a lot of compute slots; combined with materialized views or periodic queries on cron, it will definitely kill your wallet. To dump from BigQuery, all you need to do is create a GCS (Google Cloud Storage) bucket and run a query something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXPORT DATA
  OPTIONS (
    uri = 'gs://BUCKET1/table2_dump/1-*.parquet',
    format = 'PARQUET',
    overwrite = true
    --, compression = 'GZIP' -- causing import failed: ZLIB_INFLATE_FAILED
  )
AS (
  SELECT * FROM `dataset1.table2`
);

-- it's better to create snapshot table
-- if you do WHERE filter on above query, eg.
CREATE TABLE dataset1.table2_filtered_snapshot AS
  SELECT * FROM `dataset1.table2` WHERE col1 = 'yourFilter';

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compression is skipped because the import fails with it (ZLIB_INFLATE_FAILED), not sure why. The parquet files will show up in your bucket; click "Remove public access prevention" and make them publicly readable with this gcloud command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud storage buckets add-iam-policy-binding gs://BUCKET1 --member=allUsers --role=roles/storage.objectViewer
# remove-iam-policy-binding to undo this
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then just restore it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE table2 (
          Col1 String,
          Col2 DateTime,
          Col3 Int32
) ENGINE = ReplacingMergeTree()
ORDER BY (Col1, Col2, Col3);

SET parallel_distributed_insert_select = 1;

INSERT INTO table2
SELECT Col1, Col2, Col3
FROM s3Cluster(
    'default',
    'https://storage.googleapis.com/BUCKET1/table2_dump/1-*.parquet',
    '', -- s3 access id, remove or leave empty if public
    '' -- s3 secret key, remove or leave empty if public
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This article was originally posted &lt;a href="http://kokizzu.blogspot.com/2024/02/dump-cassandrabigquery-and-import-to.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cassandra</category>
      <category>bigquery</category>
      <category>clickhouse</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to use DNS SDK in Golang</title>
      <dc:creator>Kiswono Prayogo</dc:creator>
      <pubDate>Wed, 31 May 2023 07:18:56 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/how-to-use-dns-sdk-in-golang-55cl</link>
      <guid>https://forem.com/gcoreofficial/how-to-use-dns-sdk-in-golang-55cl</guid>
      <description>&lt;p&gt;So we're gonna try to manipulate DNS records using a Go SDK (not the REST API directly). I went through the first 2 pages of Google search results, and the companies providing an SDK for Go were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;   IBM &lt;a href="https://github.com/IBM/networking-go-sdk"&gt;networking-go-sdk&lt;/a&gt; - 161.26.0.10 and 161.26.0.11 - timed out resolving their own website&lt;/li&gt;
&lt;li&gt;   AWS &lt;a href="https://docs.aws.amazon.com/sdk-for-go/api/service/route53/"&gt;route53&lt;/a&gt; - 169.254.169.253 - timed out resolving their own website&lt;/li&gt;
&lt;li&gt;   DNSimple &lt;a href="https://dnsimple.com/api/go"&gt;dnsimple-go&lt;/a&gt; - 162.159.27.4 and 199.247.155.53 - 160-180ms and 70-75ms from SG&lt;/li&gt;
&lt;li&gt;   Google &lt;a href="https://github.com/googleapis/google-api-go-client/tree/main/examples"&gt;googleapis&lt;/a&gt; - 8.8.8.8 and 8.8.4.4 - 0ms for both from SG&lt;/li&gt;
&lt;li&gt;   GCore &lt;a href="https://github.com/G-Core/gcore-dns-sdk-go"&gt;gcore-dns-sdk-go&lt;/a&gt; - 199.247.155.53 and 2.56.220.2 - 0ms and 0-171ms (171ms on first hit only, the rest is 0ms) from SG&lt;/li&gt;
&lt;/ol&gt;
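&lt;p&gt;The latency numbers above can be reproduced by pointing a resolver directly at each provider's nameserver; a rough Go sketch of the same measurement (the server and hostname below are just examples):&lt;/p&gt;

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// resolverFor returns a resolver that sends every query to the given
// DNS server instead of the system default.
func resolverFor(server string) *net.Resolver {
	return &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, server)
		},
	}
}

func main() {
	r := resolverFor("8.8.8.8:53") // Google public DNS, one of the servers above
	start := time.Now()
	addrs, err := r.LookupHost(context.Background(), "dns.google")
	fmt.Println(addrs, err, time.Since(start))
}
```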

&lt;p&gt;I've used the Google SDK before for non-DNS stuff; it's a bit too raw and has many required steps: you have to create a project, enable the API, create a service account, set permissions for that account, download credentials.json, and only then hit their SDK -- not really straightforward. So today we're gonna try G-Core's DNS. Apparently it's very easy: just visit their website and sign up, go to Profile &amp;gt; API Tokens &amp;gt; Create Token, and copy it to some file (for example: a &lt;code&gt;.token&lt;/code&gt; file).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G-G5K77F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4ekvy5lraudyh699awx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G-G5K77F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4ekvy5lraudyh699awx.jpg" alt="create token" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is example how you can create a zone, add an A record, and delete everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
  "context"
  _ "embed"
  "strings"
  "time"  
  "github.com/G-Core/gcore-dns-sdk-go"
  "github.com/kokizzu/gotro/L"
)

//go:embed .token
var apiToken string

func main() {
  apiToken = strings.TrimSpace(apiToken)

  // init SDK
  sdk := dnssdk.NewClient(dnssdk.PermanentAPIKeyAuth(apiToken), func(client *dnssdk.Client) {
    client.Debug = true
  })
  ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
  defer cancel()
  const zoneName = `benalu2.dev`

  // create zone
  _, err := sdk.CreateZone(ctx, zoneName)
  if err != nil &amp;amp;&amp;amp; !strings.Contains(err.Error(), `already exists`) {
    L.PanicIf(err, `sdk.CreateZone`)
  }

  // get zone
  zoneResp, err := sdk.Zone(ctx, zoneName)
  L.PanicIf(err, `sdk.Zone`)
  L.Describe(zoneResp)
  // add A record
  err = sdk.AddZoneRRSet(ctx,
    zoneName,        // zone
    `www.`+zoneName, // name
    `A`,             // rrtype
    []dnssdk.ResourceRecord{
      { // https://apidocs.gcore.com/dns#tag/rrsets/operation/CreateRRSet
        Content: []any{
          `194.233.65.174`,
        },
      },
    },
    120, // TTL
  )
  L.PanicIf(err, `AddZoneRRSet`)

  // get A record
  rr, err := sdk.RRSet(ctx, zoneName, `www.`+zoneName, `A`)
  L.PanicIf(err, `sdk.RRSet`)
  L.Describe(rr)

  // delete A record
  err = sdk.DeleteRRSet(ctx, zoneName, `www.`+zoneName, `A`)
  L.PanicIf(err, `sdk.DeleteRRSet`)

  // delete zone
  err = sdk.DeleteZone(ctx, zoneName)
  L.PanicIf(err, `sdk.DeleteZone`)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full source code repo is &lt;a href="https://github.com/kokizzu/dns1"&gt;here&lt;/a&gt;. Apparently it's very easy to manipulate DNS records using their SDK. After adding records programmatically, all I need to do is delegate (set the authoritative nameservers) to their NS: ns1.gcorelabs.net and ns2.gcdn.services. In my case, because I bought the domain name on Google Domains, I just need to change this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vLavQiy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9q2pdbn6ajpyxep7h0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vLavQiy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9q2pdbn6ajpyxep7h0u.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then just wait for the delegation to propagate (until all DNS servers still caching the old authoritative NS clear up), and I guess that's it. This article is republished with permission from kokizzu's personal &lt;a href="https://kokizzu.blogspot.com/2023/04/how-to-use-dns-sdk-in-golang.html"&gt;blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>network</category>
      <category>development</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
