<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: borislav nikolov</title>
    <description>The latest articles on Forem by borislav nikolov (@jackdoe).</description>
    <link>https://forem.com/jackdoe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F670634%2F9b314125-eacb-4471-b680-05d1da368cf3.png</url>
      <title>Forem: borislav nikolov</title>
      <link>https://forem.com/jackdoe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jackdoe"/>
    <language>en</language>
    <item>
      <title>tty only</title>
      <dc:creator>borislav nikolov</dc:creator>
      <pubDate>Thu, 12 Aug 2021 21:35:45 +0000</pubDate>
      <link>https://forem.com/jackdoe/tty-only-1ijn</link>
      <guid>https://forem.com/jackdoe/tty-only-1ijn</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu304wn2e1cbnd25lezi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu304wn2e1cbnd25lezi1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I used only the tty (no X installed) for 5 nights. It was relaxing, and now I go back to it every time I am overwhelmed. &lt;/p&gt;

&lt;p&gt;I have to seriously re-think the way I spend time on my&lt;br&gt;
computer. Working only on a tty is a completely calming experience: there are no ads, no hundreds of open tabs, no pressure, only code and text. I also noticed I read way, way less news, hacker or not.&lt;/p&gt;

&lt;p&gt;BTW, browsing was actually way better than I thought. Between eww, w3m, lynx, links2 and links2 -g, and sometimes just reading the html dump, I was able to navigate the modern web with reasonable success.&lt;/p&gt;
&lt;h1&gt;
  
  
  Laptop
&lt;/h1&gt;

&lt;p&gt;I bought a used ThinkPad T440p; they go for 100 to 300 EUR. Super sturdy machine, like the opposite of my XPS. Very easy to open, and very hackable.&lt;/p&gt;
&lt;h1&gt;
  
  
  Setup
&lt;/h1&gt;

&lt;p&gt;Install Debian and do some basic fixes to make the tty usable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keyboard repeat rate (whoa, a slow repeat rate pisses me off)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;add &lt;code&gt;kbdrate -r 30 -d 0&lt;/code&gt; to /etc/rc.local&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ctrl+/ etc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;it is super annoying that &lt;code&gt;ctrl+/&lt;/code&gt; sends Delete when I want to bind it to &lt;code&gt;undo&lt;/code&gt;; to fix that you have to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ showkeys

press ctrl + /

check out the keycode, (in my case it is 53)

$ dumpkeys &amp;gt; keymap

change
    control keycode  53 = Delete
    shift   control keycode  53 = Delete

to
    control keycode  53 = Control_underscore
    shift   control keycode  53 = Meta_underscore

and 51 and 52 to
    shift   control keycode  51 = Control_asciicircum
    shift   control keycode  52 = Meta_asciicircum
    (I use S-C-. and S-C-, for cursor)

add 'loadkeys path/to/keymap' in /etc/rc.local

then the bindings:

(define-key global-map (kbd "C-_") 'undo)
(define-key global-map (kbd "M-_") 'undo)
(define-key global-map (kbd "M-^") 'mc/mark-next-like-this)
(define-key global-map (kbd "C-^") 'mc/mark-previous-like-this)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;mouse&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;install consolation or gpm&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cursor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;instead of the block cursor, this will show the cursor the tty is set up with (a blinking underscore)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(setq visible-cursor nil)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;otherwise emacs shows a blinking (HZ/5, ~200ms) block cursor, which is horrible, so replace it with the blinking underscore, which draws less attention from your eyes. I tried all kinds of ways to stop the blinking entirely, but none worked.&lt;/p&gt;
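For reference, one of the tricks I assume falls under "all kinds of ways" is the fbcon sysfs knob; its store handler (store_cursor_blink) is the very function patched on day 5 below. A sketch:

```shell
# ask fbcon to stop blinking the console cursor via sysfs
# (this writes to the attribute whose handler, store_cursor_blink,
# gets patched on day 5)
echo 0 | sudo tee /sys/class/graphics/fbcon/cursor_blink
```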

&lt;ul&gt;
&lt;li&gt;brightness
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;max=$(cat /sys/class/backlight/intel_backlight/max_brightness)
echo -n $max | sudo tee /sys/class/backlight/intel_backlight/brightness
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;font&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;install terminus and then add the setfont command to your bashrc/zshrc&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;setfont Uni3-Terminus20x10 # 12x6 14 16 22x11 24x12 28x14 32x16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;emacs
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;init.el:

(package-initialize)
(require 'go-mode)
(require 'multiple-cursors)
(require 'tramp)
(load-library "view")
(require 'cc-mode)
(require 'ido)
(require 'compile)

(setq tramp-default-method "ssh")
(setq undo-limit 20000000)
(setq undo-strong-limit 40000000)

(defun delete-word (arg)
  "Delete characters backward until encountering the beginning of a word.
With argument ARG, do this that many times."
  (interactive "p")
  (delete-region (point) (progn (backward-word arg) (point))))

(define-key global-map (kbd "C-h") 'delete-backward-char)
(define-key global-map (kbd "M-C-h") 'backward-kill-word)
(define-key global-map (kbd "C-_") 'undo)
(define-key global-map (kbd "M-_") 'redo)

(define-key global-map (kbd "&amp;lt;M-backspace&amp;gt;") 'delete-word)
(define-key global-map (kbd "C-M-h") 'delete-word)


(define-key global-map (kbd "&amp;lt;f2&amp;gt;") 'compile)
(define-key global-map (kbd "&amp;lt;f1&amp;gt;") 'next-error)
(define-key global-map [C-tab] 'indent-region)
(define-key global-map (kbd "M-^") 'mc/mark-next-like-this)
(define-key global-map (kbd "C-^") 'mc/mark-previous-like-this)

(global-unset-key (kbd "C-t"))

(ido-mode 1)

(menu-bar-mode -1)
(tool-bar-mode -1)
(scroll-bar-mode -1)

(setq inhibit-startup-message t)

(global-linum-mode 0)
(display-time-mode 1)
(global-font-lock-mode -1)
(gpm-mouse-mode -1)
(setq backward-delete-char-untabify nil)

(display-battery-mode 1)
(setq make-backup-files nil)
(setq auto-save-default nil)

(show-paren-mode 1)
(setq show-paren-delay 0.0)
(setq show-paren-style 'parenthesis)
(transient-mark-mode t)
(fset 'yes-or-no-p 'y-or-n-p)

(defun custom-go-mode-hook ()
  (setq gofmt-command "goimports")
  (add-hook 'before-save-hook 'gofmt-before-save)
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go generate &amp;amp;&amp;amp; go build -v &amp;amp;&amp;amp; go test -v &amp;amp;&amp;amp; go vet &amp;amp;&amp;amp;  golangci-lint run"))
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go build -v &amp;amp;&amp;amp; go test -v &amp;amp;&amp;amp; go vet"))
  (local-set-key (kbd "M-.") 'godef-jump)
  (local-set-key (kbd "M-,") 'pop-tag-mark)
)

(add-hook 'go-mode-hook 'custom-go-mode-hook)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.bashrc:

export VISUAL='emacsclient -ct'
export EDITOR='emacsclient -ct'
alias e='emacsclient -ct'
alias emacs=e
alias vi=e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;also start the emacs daemon:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.config/systemd/user/emacs.service

[Unit]
Description=Emacs text editor
Documentation=info:emacs man:emacs(1) https://gnu.org/software/emacs/

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Environment=SSH_AUTH_SOCK=%t/keyring/ssh
Restart=on-failure


[Install]
WantedBy=default.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;fzf&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;set up zsh with shared history and fzf&lt;/p&gt;
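A minimal sketch of the zsh side; the fzf key-bindings path is the Debian package's, adjust for your install:

```shell
# ~/.zshrc -- shared history between all tty sessions
HISTFILE=~/.zsh_history
HISTSIZE=100000
SAVEHIST=100000
setopt SHARE_HISTORY     # new commands become visible to every session
setopt HIST_IGNORE_DUPS  # skip consecutive duplicates

# fzf keybindings (ctrl-r fuzzy history search, ctrl-t file search)
if [ -f /usr/share/doc/fzf/examples/key-bindings.zsh ]; then
    source /usr/share/doc/fzf/examples/key-bindings.zsh
fi
```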

&lt;ul&gt;
&lt;li&gt;lock&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;run &lt;code&gt;vlock --all&lt;/code&gt; on pm suspend&lt;/p&gt;
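One way to wire this up, as a sketch for a systemd Debian (with pm-utils the same script would live in /etc/pm/sleep.d/ instead):

```shell
#!/bin/sh
# /usr/lib/systemd/system-sleep/vlock  (make it executable)
# openvt gives vlock a fresh console to run on; vlock -a then blocks
# console switching everywhere until a password is entered
case "$1" in
    pre)
        openvt -s -- vlock -a
        ;;
esac
exit 0
```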

&lt;ul&gt;
&lt;li&gt;email (personal)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I use &lt;code&gt;forgotten&lt;/code&gt; (github.com/jackdoe/forgotten) to manage an encrypted list of passwords, so I just use it with mutt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.muttrc

source "/usr/bin/forgotten -key gmail -mutt |"
set from = "xyz@example.com"
set realname = "aa bb"
set use_from = yes
set envelope_from = yes
set smtp_url = "smtp://xyz@example.com@smtp.gmail.com:587"
set smtp_pass =$my_pass
set imap_user = "xyz@example.com"
set imap_pass =$my_pass
set folder = "imaps://imap.gmail.com:993"
set spoolfile = "+INBOX"
set ssl_force_tls = yes
bind index G imap-fetch-mail
set editor = "emacsclient -ct"
set charset = "utf-8"
set record = ''
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Day 1
&lt;/h1&gt;

&lt;p&gt;After installing and doing the basic setup it was pretty late, so I didn't do much more.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;links2 works nicely, but it is too graphical; it is OK for seeing images though&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a non-IPS display is total shit for the console; bought a new one from amazon, we will see this Saturday&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;replace gpm with consolation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Day 2
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;patched consolation to support right touchpad click, because my middle button is not very good&lt;/li&gt;
&lt;li&gt;fixed the ips display; it is a total gamechanger. the black is so much more black than before; amazing for tty&lt;/li&gt;
&lt;li&gt;fix alsa default card
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt; /etc/asound.conf &amp;lt;&amp;lt;EOF
defaults.pcm.card 1
defaults.ctl.card 1
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;downloaded some royalty-free music and played it with mplayer&lt;/li&gt;
&lt;li&gt;the battery lasts quite a long time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wish eww worked better, I hate going out of emacs&lt;br&gt;
I am so happy with the new display.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;set up tlp; capped max freq at 1000MHz. I should've gotten an i5 instead of an i7, this i7 is just hot for no reason. I will disable hyper-threading, maybe it will feel better. Or maybe I should open the laptop and clean it. Even with 1 core, no hyper-threading and the 1GHz cap it still gets hot, so I guess it's cleaning time; I will actually check on amazon for an i5, it shouldn't be very expensive. Though despite the temperature, the battery lasts for 6 hours (possibly more)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;went back to gpm; having a mouse in links2 -g seems better&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Day 3
&lt;/h1&gt;

&lt;p&gt;The new screen arrived and took exactly 5 minutes to replace; I love working on machines that are easy to repair.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;wow that is a good screen haha, I forgot how much nicer it is&lt;/li&gt;
&lt;li&gt;i5 cpus are super expensive; no way I am buying one&lt;/li&gt;
&lt;li&gt;removed thermald and started using thinkfan with a lower threshold (had to add &lt;code&gt;options thinkpad_acpi experimental=1 fan_control=1&lt;/code&gt; in modprobe.d/thinkfan.conf); running fan level 1 at 40C is much better, because when it gets to 50C it is just hot on my palm&lt;/li&gt;
&lt;li&gt;still thinking of what to write&lt;/li&gt;
&lt;li&gt;it is incredibly calming having no windows&lt;/li&gt;
&lt;li&gt;I did open it and clean the fan a bit, but there wasn't much to clean; it is however really nice to have a laptop that is meant to be opened&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Day 4
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;made a copy-paste http service on top of a unix socket, so I can copy between emacs and zsh without using ansi-term (&lt;a href="https://github.com/jackdoe/pasta" rel="noopener noreferrer"&gt;https://github.com/jackdoe/pasta&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is no way to programmatically get the current selection from the linux kernel, so I made a patch adding a new ioctl to get the selection, so I can use it with M-w and C-y.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# build custom kernel on debian:

cd /usr/src &amp;amp;&amp;amp; \
git clone --depth=1 \
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git

# change what you want, and then
# copy your config from linux-config-5.xx into the trunk and then:

make -j8 bindeb-pkg LOCALVERSION=-xyz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Working without X for the whole day was pretty fun.&lt;/p&gt;

&lt;p&gt;This is the getsel ioctl patch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit 5cf882d8b74747bbc08463d83cf80509c920edca
Author: borislav nikolov &amp;lt;jack@sofialondonmoskva.com&amp;gt;
Date:   Sat Mar 21 23:42:22 2020 +0100

    patch for GETSEL
    add copy_selection_to_user

diff --git a/drivers/tty/vt/selection.c b/drivers/tty/vt/selection.c
index d54a549c5892..9b26dec762dd 100644
--- a/drivers/tty/vt/selection.c
+++ b/drivers/tty/vt/selection.c
@@ -6,6 +6,7 @@
  *                struct tty_struct *)'
  *     'int set_selection_kernel(struct tiocl_selection *, struct tty_struct *)'
  *     'void clear_selection(void)'
+ *     'int copy_selection_to_user(char __user *)'
  *     'int paste_selection(struct tty_struct *)'
  *     'int sel_loadlut(char __user *)'
  *
@@ -71,6 +72,45 @@ sel_pos(int n, bool unicode)
    return inverse_translate(vc_sel.cons, screen_glyph(vc_sel.cons, n), 0);
 }

+/**
+ * copy_selection_to_user      -   get current selection
+ *
+ * Get a copy of current selection, console lock does not have to
+ * be held
+ */
+int copy_selection_to_user(char __user *arg)
+{
+   int get_sel_user_size;
+   int ret;
+
+   if (copy_from_user(&amp;amp;get_sel_user_size,
+              arg,
+              sizeof(vc_sel.buf_len)))
+       return -EFAULT;
+
+   mutex_lock(&amp;amp;vc_sel.lock);
+
+   if (get_sel_user_size &amp;lt; vc_sel.buf_len) {
+
+       mutex_unlock(&amp;amp;vc_sel.lock);
+
+       return -EFAULT;
+   }
+
+   ret = copy_to_user(arg,
+              &amp;amp;vc_sel.buf_len,
+              sizeof(vc_sel.buf_len));
+   if (ret == 0)
+       ret = copy_to_user(arg+sizeof(vc_sel.buf_len),
+                  vc_sel.buffer,
+                  vc_sel.buf_len);
+
+   mutex_unlock(&amp;amp;vc_sel.lock);
+
+   return ret;
+}
+EXPORT_SYMBOL_GPL(copy_selection_to_user);
+
 /**
  * clear_selection     -   remove current selection
  *
diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
index 309a39197be0..2b7eb55aafa3 100644
--- a/drivers/tty/vt/vt.c
+++ b/drivers/tty/vt/vt.c
@@ -3061,6 +3061,9 @@ int tioclinux(struct tty_struct *tty, unsigned long arg)
        case TIOCL_PASTESEL:
            ret = paste_selection(tty);
            break;
+       case TIOCL_GETSEL:
+           ret = copy_selection_to_user(p+1);
+           break;
        case TIOCL_UNBLANKSCREEN:
            console_lock();
            unblank_screen();
diff --git a/include/linux/selection.h b/include/linux/selection.h
index 5b890ef5b59f..7cb971795013 100644
--- a/include/linux/selection.h
+++ b/include/linux/selection.h
@@ -15,6 +15,7 @@ struct tty_struct;
 struct vc_data;

 extern void clear_selection(void);
+extern int copy_selection_to_user(char __user *arg);
 extern int set_selection_user(const struct tiocl_selection __user *sel,
                  struct tty_struct *tty);
 extern int set_selection_kernel(struct tiocl_selection *v,
diff --git a/include/uapi/linux/tiocl.h b/include/uapi/linux/tiocl.h
index b32acc229024..055ebda041d4 100644
--- a/include/uapi/linux/tiocl.h
+++ b/include/uapi/linux/tiocl.h
@@ -20,6 +20,7 @@ struct tiocl_selection {
 };

 #define TIOCL_PASTESEL 3   /* paste previous selection */
+#define TIOCL_GETSEL   18  /* get current selection */
 #define TIOCL_UNBLANKSCREEN    4   /* unblank screen */

 #define TIOCL_SELLOADLUT   5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to use the patch you need something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;sys/ioctl.h&amp;gt;
#include &amp;lt;linux/tiocl.h&amp;gt;
#include &amp;lt;sys/types.h&amp;gt;
#include &amp;lt;sys/stat.h&amp;gt;
#include &amp;lt;fcntl.h&amp;gt;
#include &amp;lt;stdlib.h&amp;gt;
#include &amp;lt;unistd.h&amp;gt;
#include &amp;lt;strings.h&amp;gt;


struct getsel {
  char code;
  int size;
  char data[0];
} __attribute__((__packed__));

struct getsel * get_selection(int size) {
  struct getsel *d = (struct getsel *) malloc(size + sizeof(struct getsel));
  if (d == NULL) {
    perror("malloc");
    exit(1);
  }

  bzero(d, size + sizeof(struct getsel));

  d-&amp;gt;code = 18; // TIOCL_GETSEL
  d-&amp;gt;size = size;

  int fd = open("/dev/tty", O_RDWR);
  if (fd &amp;lt; 0) {
    perror("open /dev/tty");
    exit(1);
  }
  if (ioctl(fd, TIOCLINUX, d) &amp;lt; 0) {
    perror("TIOCL_GETSEL: TIOCLINUX");
    exit(1);
  }
  close(fd);
  return d;
}

int main(void) {
  int size = 200;
  struct getsel *d = get_selection(size);

  printf("size: %d\n",d-&amp;gt;size);
  for (int i = 0; i &amp;lt; size; i++) {
    if (d-&amp;gt;data[i]) {
      printf("data[%d] = %d\n",i, d-&amp;gt;data[i]);
    }
  }
  d-&amp;gt;data[d-&amp;gt;size-1] = '\0';
  printf("string: %s\n", d-&amp;gt;data);
  free(d);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;anyway, there are still issues to be solved; for example, when I press ctrl+y while &lt;code&gt;cat&lt;/code&gt; is running I can't paste into it, because cat hoards the input.. I should patch the usb driver to take it before cat..&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;moved man to be within emacs
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;in .zshrc:
man() {
  emacsclient -ct -e '(man "'$1'")'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;still need to get the section 9 man pages&lt;/p&gt;

&lt;h1&gt;
  
  
  Day 5
&lt;/h1&gt;

&lt;p&gt;fucking cursor blinking is annoying the hell out of me&lt;br&gt;
I tried all kinds of tricks to disable it from tty and emacs&lt;br&gt;
it is always fucking blinking.&lt;/p&gt;

&lt;p&gt;so&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(setq visible-cursor nil)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and patch it in the kernel!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit afda0f8175fe560d86e1f2ec0b33a9f25b3bf13f
Author: borislav nikolov &amp;lt;jack@sofialondonmoskva.com&amp;gt;
Date:   Wed Apr 1 09:13:55 2020 +0200

    fuck blinking underline

diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
index 2b7eb55aafa3..b8a7478c9f98 100644
--- a/drivers/tty/vt/vt.c
+++ b/drivers/tty/vt/vt.c
@@ -2306,13 +2306,6 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
                set_mode(vc, 0);
            return;
        case 'c':
-           if (vc-&amp;gt;vc_priv == EPdec) {
-               if (vc-&amp;gt;vc_par[0])
-                   vc-&amp;gt;vc_cursor_type = vc-&amp;gt;vc_par[0] | (vc-&amp;gt;vc_par[1] &amp;lt;&amp;lt; 8) | (vc-&amp;gt;vc_par[2] &amp;lt;&amp;lt; 16);
-               else
-                   vc-&amp;gt;vc_cursor_type = cur_default;
-               return;
-           }
            break;
        case 'm':
            if (vc-&amp;gt;vc_priv == EPdec) {
diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
index bb6ae995c2e5..721f326b01e6 100644
--- a/drivers/video/fbdev/core/fbcon.c
+++ b/drivers/video/fbdev/core/fbcon.c
@@ -173,7 +173,7 @@ static const struct consw fb_con;

 static int fbcon_set_origin(struct vc_data *);

-static int fbcon_cursor_noblink;
+static int fbcon_cursor_noblink = 1;

 #define divides(a, b)  ((!(a) || (b)%(a)) ? 0 : 1)

@@ -3527,13 +3527,8 @@ static ssize_t store_cursor_blink(struct device *device,

    blink = simple_strtoul(buf, last, 0);

-   if (blink) {
-       fbcon_cursor_noblink = 0;
-       fbcon_add_cursor_timer(info);
-   } else {
-       fbcon_cursor_noblink = 1;
-       fbcon_del_cursor_timer(info);
-   }
+   fbcon_cursor_noblink = 1;
+   fbcon_del_cursor_timer(info);

 err:
    console_unlock();
diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
index 24d4c16e3ae0..b21061f8aad7 100644
--- a/include/linux/console_struct.h
+++ b/include/linux/console_struct.h
@@ -166,7 +166,7 @@ extern void vc_SAK(struct work_struct *work);
 #define CUR_HWMASK 0x0f
 #define CUR_SWMASK 0xfff0

-#define CUR_DEFAULT CUR_UNDERLINE
+#define CUR_DEFAULT CUR_BLOCK

 bool con_is_visible(const struct vc_data *vc);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;applying it..&lt;/p&gt;

&lt;p&gt;Whoaaa, this is so beautiful.. just a block █, absolutely amazing.&lt;/p&gt;

&lt;p&gt;I did not expect the blinking to put so much mental pressure on me; having █ just sitting there, not doing anything, just telling you where you are, is like meditation for the eyes.&lt;/p&gt;

&lt;p&gt;Conclusion:&lt;/p&gt;

&lt;p&gt;I have to seriously re-think the way I spend time on my other&lt;br&gt;
computer. Working only on a tty is a completely calming experience: there are no ads, no pressure, only code and text.&lt;/p&gt;

&lt;p&gt;I will keep working on improving the tty experience, and I will actively work on reducing my dependency on the modern web, e.g. use &lt;code&gt;go doc&lt;/code&gt; more than google, build local search indexes etc. I hope to carry some of it over to my daily work life.&lt;/p&gt;

&lt;p&gt;PS:&lt;/p&gt;

&lt;p&gt;Those kernel patches are just for fun, don't take them seriously.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>programming</category>
      <category>burnout</category>
    </item>
    <item>
      <title>Outages and Good Vibes
</title>
      <dc:creator>borislav nikolov</dc:creator>
      <pubDate>Wed, 11 Aug 2021 15:30:07 +0000</pubDate>
      <link>https://forem.com/rekki/outages-and-good-vibes-51jb</link>
      <guid>https://forem.com/rekki/outages-and-good-vibes-51jb</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F180E6pf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trhwsp1xrs2xxq2g2bii.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F180E6pf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trhwsp1xrs2xxq2g2bii.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently I made a post about how we moved from elixir to go, because we needed more dev power and elixir's strengths were not a panacea for us. It made the community very upset, which is something I hate, but now I would like to talk about something I love.&lt;/p&gt;

&lt;p&gt;Outages.&lt;/p&gt;

&lt;p&gt;I love outages.&lt;/p&gt;

&lt;p&gt;When pagerduty calls, or someone posts 'hey something is wrong' on #tech-escalation, and my adrenaline starts pumping, I feel so alive!&lt;/p&gt;

&lt;p&gt;This is the list of things we do when an outage starts.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Make a hangouts call
&lt;/h3&gt;

&lt;p&gt;Before anyone knows what is going on, just make a call. Don't discuss on slack, don't wait. Make a call and ask for help, even if it is a false alarm.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Pick a leader
&lt;/h3&gt;

&lt;p&gt;It is super important to say who leads. It is almost always the person with the most context, but sometimes they are busy doing reconnaissance; in that case it is usually me, or I pick someone who I know can lead.&lt;/p&gt;

&lt;p&gt;The call leader will resolve any deadlocks and make sure people are not duplicating work, as every second is essential, and will also find support resources if needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Divide and Conquer
&lt;/h3&gt;

&lt;p&gt;The leader starts assigning tasks: one person needs to identify the surface area of the bleeding, one has to preemptively roll back any service that is even remotely related, and one has to start investigating the impact and communicate with Customer Success.&lt;/p&gt;

&lt;p&gt;Always roll back first, think later. This is also why it is very important to have reasonably fast rollouts and rollbacks. The faster you can roll out, the faster you can roll back when you see something is wrong.&lt;/p&gt;

&lt;p&gt;The leader also must call other people if needed, and also notify Ronen (our CEO) about the status.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Cleanup
&lt;/h3&gt;

&lt;p&gt;After the issue is fixed, dedicate a small team to work with CS to do damage control, talk with the affected users and do as much as possible to mitigate the damage.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. RFO
&lt;/h3&gt;

&lt;p&gt;Write a small document identifying what happened and what we learned, and what we have to work on in order to be better.&lt;/p&gt;

&lt;p&gt;Talk especially about:&lt;/p&gt;

&lt;p&gt;a) Alerting and Monitoring&lt;/p&gt;

&lt;p&gt;Did we find out about the issue ourselves, or did a customer have to tell us? It is of utmost importance to be able to find out about issues before they impact users; sometimes this means we missed something in our end-to-end tests and it has to be fixed asap.&lt;/p&gt;

&lt;p&gt;b) Rough timeline&lt;/p&gt;

&lt;p&gt;We must write a rough timeline so we can investigate whether we can speed up the process somehow; for example, how much time did it take between the first error message and the creation of the hangouts call, how long did it take to assign the tasks, etc.&lt;/p&gt;

&lt;p&gt;c) What was the impact&lt;/p&gt;

&lt;p&gt;Just a rough estimate of the affected users&lt;/p&gt;

&lt;p&gt;d) What was the fix, and how can we avoid having this &lt;em&gt;class&lt;/em&gt; of issues.&lt;/p&gt;

&lt;p&gt;Can we fix this with better linting? Can we tweak our process a bit?&lt;/p&gt;




&lt;p&gt;The best part about an outage is that it makes me feel part of a team. Of course working with my team every day is also nice, but it is the difference between camping with friends and camping with friends while being attacked by a grizzly bear. Outages are just exciting. The atmosphere is so nice, everyone has everyone's back. There is zero blame. Everybody is trying their best to help.&lt;/p&gt;

&lt;p&gt;Now during covid, I think outages are indispensable in bringing our remote team together.&lt;/p&gt;

&lt;p&gt;Good vibes!&lt;/p&gt;

&lt;p&gt;PS: pls if you are the kind of person who is looking for someone to point to when shit hits the fan, don't apply.&lt;/p&gt;

</description>
      <category>outage</category>
      <category>rfo</category>
      <category>team</category>
    </item>
    <item>
      <title>the voxel syndrome</title>
      <dc:creator>borislav nikolov</dc:creator>
      <pubDate>Wed, 28 Jul 2021 21:46:09 +0000</pubDate>
      <link>https://forem.com/rekki/the-voxel-syndrome-3f30</link>
      <guid>https://forem.com/rekki/the-voxel-syndrome-3f30</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fXuAnYrA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxukn4i26rpti2h0az9h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fXuAnYrA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxukn4i26rpti2h0az9h.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the first time in two years I met a man, and I felt like I was talking to a fellow man. A man that is not my self.&lt;/p&gt;

&lt;p&gt;When two psyches interact, the poet said, the consequences are infinite, as each psyche is total.&lt;/p&gt;

&lt;p&gt;And here we are, talking pictures, ghosts as Derrida would say, talking to each other, day after day, discussing the new feature we will ship.&lt;/p&gt;

&lt;p&gt;But when two pictures talk, so much is lost.&lt;/p&gt;

&lt;p&gt;This will be studied for decades to come, the ghosts-talking phenomenon. I wonder what name they will give it; I kind of hope it's 'the voxel syndrome' or something cool.&lt;/p&gt;

&lt;p&gt;This is not a post about remote work or office work. It is a post about us, about my team, and my company. &lt;/p&gt;

&lt;p&gt;The engineering community has pushed the 'I am more effective from home' thing for a while, and I absolutely support it, I am absolutely more effective from home. No distractions, no commute, no drama, no $20 per day for coffee. I am confident we can deliver way more features like that, but it feels like I am alone in a multiplayer game.&lt;/p&gt;

&lt;p&gt;I don't know if you have ever had this experience: everything is there, and the world is working, you can do quests and slay dragons, but it's just not fun. I think there are people who enjoy that kind of work, just as there are people who enjoy playing solo games, and people who don't.&lt;/p&gt;

&lt;p&gt;We are building a 0 to 1 product. It is difficult as fuck; it's not a "ship a bunch of features" thing or "move those tasks from in-progress to complete". Fuck the tasks. This is new ground we have to conquer.&lt;/p&gt;

&lt;p&gt;People say "managers want people to go to the office because they want to control" and I say to those people, fuck you. Why are you working somewhere where the managers control you? Quit and stop bitching.&lt;/p&gt;

&lt;p&gt;We need to cook together, we need to eat together, to share ideas and ship.&lt;/p&gt;

&lt;p&gt;I hope we can go back to the office for 1 day a week, and I promise you, this day we will have a feast every week! And we will eat like Monkey D. Luffy after his adventures.&lt;/p&gt;

&lt;p&gt;-b&lt;/p&gt;

</description>
      <category>remote</category>
    </item>
    <item>
      <title>Mutation is life / Boring Technology</title>
      <dc:creator>borislav nikolov</dc:creator>
      <pubDate>Mon, 26 Jul 2021 14:12:11 +0000</pubDate>
      <link>https://forem.com/rekki/mutation-is-life-boring-technology-11h0</link>
      <guid>https://forem.com/rekki/mutation-is-life-boring-technology-11h0</guid>
      <description>&lt;p&gt;TLDR:&lt;br&gt;
We had horrible outage where rabbitmq node ran out of memory because I&lt;br&gt;
forgot to unbind a queue (one of 50s or so), and we lost a whole bunch&lt;br&gt;
of in-producer-memory state, this made our whole infrastructure&lt;br&gt;
completely probabilistic and it took us 2 hours of blood sweat and&lt;br&gt;
tears to recover the lost data. We should've just used postgres as a&lt;br&gt;
queue.&lt;/p&gt;
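The "postgres as a queue" idea is just a table with a status column, where a worker claims a job in one small transaction. A self-contained sketch (using the sqlite3 CLI here purely so it runs anywhere; with postgres you would claim with a single UPDATE plus SELECT ... FOR UPDATE SKIP LOCKED so concurrent workers never grab the same row):

```shell
# a table as a queue: producers INSERT, workers claim by flipping status
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE q (id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'new');"
sqlite3 "$db" "INSERT INTO q (payload) VALUES ('order-1'), ('order-2');"

# claim the oldest 'new' job; BEGIN IMMEDIATE takes the write lock,
# so the read-then-update pair is atomic with respect to other workers
job=$(sqlite3 "$db" "
BEGIN IMMEDIATE;
SELECT payload FROM q WHERE status='new' ORDER BY id LIMIT 1;
UPDATE q SET status='taken' WHERE id = (SELECT min(id) FROM q WHERE status='new');
COMMIT;")
echo "claimed: $job"
rm -f "$db"
```

The point is that the queue state then lives in the same durable, inspectable store as everything else, instead of in producer memory.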

&lt;p&gt;In almost any system being built today we have users that perform&lt;br&gt;
actions to mutate state. This is life, to create side effects, yes,&lt;br&gt;
mutation is life.&lt;/p&gt;

&lt;p&gt;I will illustrate it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ user(U) ]&amp;lt;------+
    |             |
    | action(A)   |
    v             |
[ receiver (R)]   ^ world
    |             | change
    |             |
    v             |
[ state (S) ] ----+

receiver:
   this is usually a backend endpoint

user:
   in our case is a chef

state:
   in our case: creating an order
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can move those pieces around in any way; for example, the action can be a&lt;br&gt;
function of the state mutation instead of the other way around, or&lt;br&gt;
the receiver of the action can be the user itself and the user&lt;br&gt;
directly mutates the state, etc. This is what technical&lt;br&gt;
implementation means: it looks the same from the outside, but has very&lt;br&gt;
different emergent properties. This is what this post is about:&lt;br&gt;
emergent behavior and chaos.&lt;/p&gt;

&lt;p&gt;Let's look at a possible technical implementation of a 'user' creating an 'order':&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user sends action to an endpoint /create-order&lt;/li&gt;
&lt;li&gt;backend code pushes to 'order.new'&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;a) consumer of 'order.new'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;order is picked up transformed a bit and written to a database&lt;/li&gt;
&lt;li&gt;another message is sent to 'order.created' queue&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;a) consumer of 'order.created'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create audit log of who/when/what&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;b) consumer 'order.created'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;send email to the interested parties&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;c) consumer 'order.created'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;builds an email and sends it to 'email.created'&lt;/li&gt;
&lt;li&gt;logs the email for archiving purposes&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;d) consumer 'order.created'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create chat message with the order&lt;/li&gt;
&lt;li&gt;push to 'message.created'&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;e) consumer 'order.created'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extract features for DS&lt;/li&gt;
&lt;li&gt;copy to salesforce etc&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;a) consumer of 'message.created'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;send push notification&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;a) consumer of 'email.created'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sends the email and then pushes to a queue
email.sent&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;a) consumer of 'email.sent'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;marks the email as sent&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is pretty much what we have now (maybe a bit more complicated, but&lt;br&gt;
not much). You can trivially add more reactive components listening to&lt;br&gt;
specific topics, you can fan out, etc., just with rabbitmq.&lt;/p&gt;

&lt;p&gt;It is super flexible, extendible, retriable, etc.&lt;/p&gt;

&lt;p&gt;Of course it was not designed like that, but it grew over the years,&lt;br&gt;
adding bits and pieces here and there, it is very easy to&lt;br&gt;
unconsciously complicate it.&lt;/p&gt;

&lt;p&gt;Everything was really good until I did some refactoring and stopped&lt;br&gt;
consuming one of the topics that was a clone of 'order.created'. I&lt;br&gt;
forgot to unbind the queue, so it kept getting messages but nobody was&lt;br&gt;
draining it, and the RMQ node ran out of memory. Only 1 out of 3,&lt;br&gt;
because we still use elixir for that process; we were relying on&lt;br&gt;
elixir's in-memory stability to keep a buffer of messages to resend if&lt;br&gt;
needed, which of course I killed when I restarted the cluster because I&lt;br&gt;
wasn't sure what the fuck was going on.&lt;/p&gt;

&lt;p&gt;That meant that 30% of all requests went to the abyss, the true abyss.&lt;br&gt;
We had to stay until 5am to glue bits and pieces together and reconnect the state.&lt;/p&gt;

&lt;p&gt;It caused the worst outage I have ever been firefighting, and I was once&lt;br&gt;
involved in solving an outage where we were selling hotels for 1/100th&lt;br&gt;
of the price, losing millions of euros.&lt;/p&gt;

&lt;p&gt;Now let's discuss another implementation of the same thing:&lt;br&gt;
a 'user' creating an 'order':&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user sends action to an endpoint /create-order&lt;/li&gt;
&lt;li&gt;backend code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  begin
  insert the order
  insert the message (push_notification_sent_at = null)
  insert the email (sent_at = null, delivered_at = null)
  insert log
  commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;a) cronjob that runs every second
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  select (for update) messages where push_notification_sent_at is null
  send the push notification
  update the message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;b) cronjob that runs every second
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  select (for update) email where sent_at is null
  send the email
  update the email table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the database as a queue; it works totally fine, and we can scale&lt;br&gt;
postgres vertically &lt;em&gt;forever&lt;/em&gt;. Why forever, you ask? Because there are&lt;br&gt;
~30000 restaurants in London, we can geo-shard it, and there is a&lt;br&gt;
physical upper bound on the amount of data in a region.&lt;/p&gt;

&lt;p&gt;Fucking queues, it is so easy to overuse them; it creeps up on you&lt;br&gt;
without you knowing, and in the end you have infrastructure spaghetti.&lt;/p&gt;

&lt;p&gt;Anyway, we are migrating from queues to transactions. Fuck it, I can't keep&lt;br&gt;
it in my head; ultimately it all ends up in postgres anyway, just with&lt;br&gt;
extra steps.&lt;/p&gt;

&lt;p&gt;Fuck.&lt;/p&gt;

&lt;p&gt;The moral of the story is:&lt;br&gt;
if shit ends up in postgres anyway, and you can afford to directly&lt;br&gt;
write to it (which is not always the case), just write to it.&lt;/p&gt;

&lt;p&gt;Do boring technology, the way we wrote php3 shit 20 years ago: get&lt;br&gt;
the state and write it to the database. Even though mysql didn't have&lt;br&gt;
transactions back then (it was helpfully accepting BEGIN/COMMIT though, haha),&lt;br&gt;
it was ok.&lt;/p&gt;

&lt;p&gt;PS:&lt;br&gt;
There were 2 missed deliveries, but CS handled them like a&lt;br&gt;
king: sending an uber to pick up the things from the supplier and sending&lt;br&gt;
them to the chef, etc. It is much easier to firefight when you know&lt;br&gt;
CustomerSuccess has your back.&lt;/p&gt;

&lt;p&gt;PPS: I think outages are the best, everyone groups up and we solve the&lt;br&gt;
problem, some panic, some adrenalin, some pressure, but in the end the&lt;br&gt;
whole company becomes more of a team.&lt;/p&gt;

</description>
      <category>boring</category>
    </item>
    <item>
      <title>Work in the kitchen.</title>
      <dc:creator>borislav nikolov</dc:creator>
      <pubDate>Wed, 21 Jul 2021 13:22:00 +0000</pubDate>
      <link>https://forem.com/rekki/work-in-the-kitchen-4ifm</link>
      <guid>https://forem.com/rekki/work-in-the-kitchen-4ifm</guid>
<description>&lt;p&gt;TLDR:&lt;br&gt;
Use your product, and talk to your users; as a developer there is no&lt;br&gt;
better way to work on the things that matter. Almost every other path&lt;br&gt;
leads to institutional imperative[1] and the deepest technical debt, which&lt;br&gt;
is almost impossible to pay off.&lt;/p&gt;

&lt;p&gt;There is no better way to code than to work in the kitchen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h6p2ah5he49tesypwwo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h6p2ah5he49tesypwwo.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;I joined REKKI 8-9 months ago. REKKI is an ordering app that looks&lt;br&gt;
like a chat app, which chefs use to order from their suppliers (kind of&lt;br&gt;
like an inverse deliveroo). 'Whatsapp for chefs' is how Ronen described&lt;br&gt;
it when we met (best fucking CEO I have ever seen, it is pointless to&lt;br&gt;
talk here about his vision etc., but if you get a chance to meet&lt;br&gt;
him, don't miss it). My mom has a small grocery shop in Sofia, and I&lt;br&gt;
know how much pain it is for her to order from her suppliers. I also&lt;br&gt;
believe that the whole producer-&amp;gt;supplier-&amp;gt;restaurant-&amp;gt;consumer market&lt;br&gt;
is completely non-transparent and everything can be improved.&lt;/p&gt;

&lt;p&gt;So as I said, I joined 8-9 months ago, and back then I really did not&lt;br&gt;
understand what the &lt;em&gt;actual&lt;/em&gt; difference between whatsapp and REKKI was,&lt;br&gt;
so I asked Ronen if he could set me up to work in a restaurant for 2&lt;br&gt;
weeks. "2 weeks? are you crazy, who will write code?" he said, so he&lt;br&gt;
gave me 2 days.&lt;/p&gt;

&lt;p&gt;My first 10 hour shift was a few days after that.&lt;/p&gt;

&lt;p&gt;My expectations were fairly low; besides being in complete panic&lt;br&gt;
and outside of my comfort zone, I thought I would wash the dishes and&lt;br&gt;
observe the dynamics.&lt;/p&gt;

&lt;p&gt;I wanted to not be a 'minus' to the staff, so I pre-trained myself on how&lt;br&gt;
to use the industrial dishwashers by watching youtube videos[2]. BTW,&lt;br&gt;
those things have the best UX ever; the colors they use to convey&lt;br&gt;
information, the way everything works, is just incredible.&lt;/p&gt;

&lt;p&gt;PANIC! The day has arrived! I calmed myself down by watching more&lt;br&gt;
dish-washing videos from the metro on my way to Restaurant X. I was&lt;br&gt;
there at 14:00 (as agreed).&lt;/p&gt;

&lt;p&gt;The restaurant is fairly big, and the kitchen is in the middle, so&lt;br&gt;
everyone can see you cooking, and the dishwasher was in the back in a&lt;br&gt;
smaller kitchen. You can't imagine my smile when it turned out to be one of the&lt;br&gt;
dishwashers I had pre-trained for.&lt;/p&gt;

&lt;p&gt;The staff was very welcoming, and when I told them I had trained for the&lt;br&gt;
dishwasher they were happy for me to wash the dishes while they did&lt;br&gt;
some veg-prep (that's kitchen slang for "vegetable preparation", like&lt;br&gt;
chopping potatoes haha). Shortly after that they gave me a knife and&lt;br&gt;
some carrots to peel. I am fairly good with a knife (I do&lt;br&gt;
leatherworking[3] and woodworking as a hobby), but in no way am I&lt;br&gt;
comparable with a chef; for them knives are more like body parts than&lt;br&gt;
anything else. Still, I didn't do badly, and when they saw I didn't draw&lt;br&gt;
any blood they gave me more and more veg-prep work.&lt;/p&gt;

&lt;p&gt;The restaurant opens at 18:00, and the guests arrive around 18:30; we&lt;br&gt;
ate around 18:00 and then the guests started coming. I was in the kitchen&lt;br&gt;
(wearing an apron and everything; I also have just as many tattoos as the&lt;br&gt;
chefs, so from the outside there was no question whether I was a chef or not).&lt;/p&gt;

&lt;p&gt;Because of my veg-prep and dish-washing performance (I suspect) they&lt;br&gt;
gave me an actual dish to cook: mushroom ravioli.&lt;br&gt;
I think it involved these things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;long mushroom stems (the ones I cut earlier)&lt;/li&gt;
&lt;li&gt;some other mushrooms&lt;/li&gt;
&lt;li&gt;truffle butter&lt;/li&gt;
&lt;li&gt;chicken broth
  (one of the tricks I learned: everything tastes better
  if you use chicken broth instead of water)&lt;/li&gt;
&lt;li&gt;some sauce I forgot how it is called&lt;/li&gt;
&lt;li&gt;stirring until mushrooms get golden&lt;/li&gt;
&lt;li&gt;some olive oil in the end
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkckcrgl5jdd67qdnlkeg.jpg" alt="Alt Text"&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2pl1wzvlj8x7nhhnbnx.jpg" alt="Alt Text"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;BTW, you can also check out my soup recipe&lt;br&gt;
&lt;a href="https://txt.black/%7Ejack/soup.txt" rel="noopener noreferrer"&gt;https://txt.black/~jack/soup.txt&lt;/a&gt; if you are interested in my cooking&lt;br&gt;
abilities.&lt;/p&gt;

&lt;p&gt;Turns out &lt;em&gt;all&lt;/em&gt; people order food at the same time. I am telling you, I&lt;br&gt;
have been dealing with distributed system consensus for the last 6-7&lt;br&gt;
years, and I never thought unorganized entities could act in this way&lt;br&gt;
without the behavior emerging from a simple rule (like ducks following each other). It's&lt;br&gt;
CHAOS, I had to make so much mushroom ravioli.. PANIC... You know how&lt;br&gt;
all chefs stir the pan by tossing the food in the air? Well, I had to&lt;br&gt;
use a spoon to stir; also, all the guests could see me using a spoon..&lt;br&gt;
Anyway I did ok, nobody returned their food! At 22:00 it was the first&lt;br&gt;
time I could step outside for a breath of fresh air for 5 minutes.&lt;/p&gt;

&lt;p&gt;Then things got easier; from 22:00 to 00:00 guests only drink, and we&lt;br&gt;
started cleaning the kitchen around 23:30. My last metro leaves at&lt;br&gt;
00:15, so if I don't leave at 00:00 I won't catch it, and of course&lt;br&gt;
there are guests that won't leave, despite the restaurant closing.&lt;/p&gt;

&lt;p&gt;My legs hurt, from 14:00 to 00:00 nonstop, bloody hell this is some&lt;br&gt;
brutal work. I couldn't take notes, or do anything, just chop chop&lt;br&gt;
chop, stir stir stir.&lt;/p&gt;

&lt;p&gt;BTW If the people of RestaurantX happen to read this post, thank you&lt;br&gt;
so much for letting me cook! You rock!&lt;/p&gt;

&lt;p&gt;A few weeks after that I got my second shift, in a small michelin star&lt;br&gt;
restaurant. Things went in quite a similar way, but I was more&lt;br&gt;
confident. I did some dishes and a lot of veg-prep, which they thought&lt;br&gt;
was boring, but I kind of liked it; there are not that many times in my&lt;br&gt;
life when I know &lt;em&gt;exactly&lt;/em&gt; what to do.&lt;/p&gt;

&lt;p&gt;Again from 14:00 to 00:00, and again people did not leave when the&lt;br&gt;
restaurant was closing. Fuck, people, just leave! Chefs are tired!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I will make a small digression here to tell you about the tech stack&lt;br&gt;
we had when I joined:&lt;/p&gt;

&lt;p&gt;Elixir was chosen because whatsapp uses erlang, and we needed&lt;br&gt;
presence and a "who is typing" feature like any serious chat app.&lt;/p&gt;

&lt;p&gt;To build a chat app, elixir makes a lot of sense: hot code&lt;br&gt;
reloading can keep the sockets alive during deploys, you have native&lt;br&gt;
presence with phoenix channels, etc. It is also super easy to build&lt;br&gt;
channels, with joining and leaving and everything.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So I came back to work, and let me tell you, REKKI is &lt;em&gt;not&lt;/em&gt; a Whatsapp&lt;br&gt;
for chefs. It is an ordering app that looks like a chat, but there&lt;br&gt;
is &lt;em&gt;no&lt;/em&gt; way that two chefs are in the app at the same time: if one is&lt;br&gt;
ordering from suppliers, the others are cooking, or veg-prepping, or&lt;br&gt;
anything but ordering.&lt;br&gt;
Oh, and I forgot to mention: the internet in the kitchen is total shit,&lt;br&gt;
constantly switching between wifi and 3g, and in some parts you have neither.&lt;/p&gt;

&lt;p&gt;Phoenix channels (an abstraction on top of websockets) are full duplex, but&lt;br&gt;
serial, which means you basically can't block in your endpoints,&lt;br&gt;
because no new requests can come through the channel while you do; so the payload&lt;br&gt;
was sent to rabbitmq and then something else executed it. This led to an&lt;br&gt;
incredibly interconnected and complex (especially with retries) system&lt;br&gt;
just to write a record to a database, but it was a requirement if&lt;br&gt;
we want "someone is typing", except we don't!&lt;br&gt;
And guess how well websockets perform with the kitchen's internet.&lt;br&gt;
Heartbeats, reconnects, flushing queues, pending actions, etc. etc., all&lt;br&gt;
had to be taken care of.&lt;br&gt;
This is &lt;em&gt;true&lt;/em&gt; technical debt, which you can only solve by doubling down on&lt;br&gt;
the problem you already have; of course, you make it work, and you&lt;br&gt;
improve it, but you can only make incremental improvements.&lt;/p&gt;

&lt;p&gt;You see if you work in the kitchen you see what your technology is&lt;br&gt;
doing, and what your users &lt;em&gt;do&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Now we are moving from elixir to go. We moved from channels to&lt;br&gt;
endpoints and pulled the queued actions up into the endpoints, so the chef&lt;br&gt;
gets immediate feedback if there is any kind of error with their&lt;br&gt;
request. I have to say I love the app now, how snappy and clean it is.&lt;br&gt;
We moved away from elixir because we can't get devs, and also everyone&lt;br&gt;
can read and write go (it has like ten reserved words..), which also&lt;br&gt;
empowered frontend devs to become fullstack devs and write their own&lt;br&gt;
endpoints. (Also, it is super annoying to write business logic in a&lt;br&gt;
functional language.)&lt;/p&gt;

&lt;p&gt;There is no better way to code than to work in the kitchen.&lt;/p&gt;

&lt;p&gt;--&lt;br&gt;
Fri  7 Feb 19:38:07 CET 2020&lt;br&gt;
Borislav Nikolov (&lt;a href="https://github.com/jackdoe" rel="noopener noreferrer"&gt;https://github.com/jackdoe&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;[1] &lt;a href="https://www.berkshirehathaway.com/letters/1989.html" rel="noopener noreferrer"&gt;https://www.berkshirehathaway.com/letters/1989.html&lt;/a&gt;&lt;br&gt;
   BERKSHIRE HATHAWAY 1989 shareholders letter&lt;/p&gt;

&lt;p&gt;[2] &lt;a href="https://www.youtube.com/results?search_query=dishwashing+training" rel="noopener noreferrer"&gt;https://www.youtube.com/results?search_query=dishwashing+training&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3] &lt;a href="https://txt.black/%7Ejack/jar-to-mug.txt" rel="noopener noreferrer"&gt;https://txt.black/~jack/jar-to-mug.txt&lt;/a&gt;&lt;/p&gt;

</description>
      <category>product</category>
    </item>
    <item>
      <title>Interface Dispatch</title>
      <dc:creator>borislav nikolov</dc:creator>
      <pubDate>Wed, 21 Jul 2021 13:07:11 +0000</pubDate>
      <link>https://forem.com/rekki/interface-dispatch-920</link>
      <guid>https://forem.com/rekki/interface-dispatch-920</guid>
      <description>&lt;p&gt;TLDR:&lt;/p&gt;

&lt;p&gt;Always measure before you optimize. I introduced a super shitty bug to &lt;a href="https://github.com/rekki/go-query"&gt;https://github.com/rekki/go-query&lt;/a&gt; because I thought the bottleneck was in the binary search implementation, and gained nothing; calling interface methods in hot for loops adds ~10-15 instructions per call.&lt;/p&gt;

&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;I want to illustrate why intuition in the field of performance is&lt;br&gt;
often misleading, and just like propaganda, the more you think you are immune to the illusion, the more effect it has on you.&lt;/p&gt;

&lt;p&gt;Let me tell you how I introduced a terrible, unreproducible, contextual bug into a library we use everywhere.&lt;/p&gt;
&lt;h1&gt;
  
  
  Cost of simple things
&lt;/h1&gt;

&lt;p&gt;First, we will discuss the cost of simple things, like counting from 1&lt;br&gt;
to a million.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import "testing"

var sum = 0

func BenchmarkNoop(b *testing.B) {
        for i := 0; i &amp;lt; b.N; i++ {
                sum++
        }
}

// go test -bench=.
// BenchmarkNoop-8         873429528                1.36 ns/op
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;the for loop needs to do one increment, one comparison, and one jump per iteration&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i:= 0; i &amp;lt; 1000000; i++ {
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;is something like&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0: inc i
1: cmp i, 1000000
2: jge 10
3: ... code
4: jmp 0
10: .. rest of program
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;you can also think about it implemented with goto:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;i = 0
check:
if i &amp;gt; 1000000
   goto done

for loop code
...
goto check

done:
rest of program
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;(it is also very interesting to investigate the modern branch&lt;br&gt;
predictors)&lt;/p&gt;

&lt;p&gt;In order to mitigate the loop overhead sometimes it is possible to&lt;br&gt;
unroll the loop, which will do something like:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;i++
code
i++
code
i++
code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Of course, not all instructions are created equal, today it is far&lt;br&gt;
from intuitive what exactly is happening (especially now when people&lt;br&gt;
don't own the computers they run their code on), but for me a good&lt;br&gt;
intuition is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bitwise, sum, local jump is fast&lt;/li&gt;
&lt;li&gt;division by 2 is natural, so it is fast (x &amp;gt;&amp;gt; 1); division by anything else is a search &lt;a href="https://youtu.be/o4-CwDo2zpg?t=354"&gt;https://youtu.be/o4-CwDo2zpg?t=354&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;same with multiplication: by 2 it is just a shift (x &amp;lt;&amp;lt; 1), by anything else it is slower&lt;/li&gt;
&lt;li&gt;almost all other math is slow&lt;/li&gt;
&lt;li&gt;locality is king&lt;/li&gt;
&lt;li&gt;hot method calling adds ~10 instructions cost &lt;code&gt;for i:=0; i&amp;lt;n; i++ { f() }&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;atomic operations are ~25 instructions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My CPU is ~3ghz, which means it can do 3 * 10^9 things per second, so&lt;br&gt;
1.36 ns per iteration of the benchmark's for loop sounds about right,&lt;br&gt;
considering it does about 4 things per iteration.&lt;br&gt;
(&lt;a href="https://en.wikipedia.org/wiki/Arithmetic_logic_unit"&gt;https://en.wikipedia.org/wiki/Arithmetic_logic_unit&lt;/a&gt;)&lt;br&gt;
Now that we are so far abstracted from the machine, it is incredibly&lt;br&gt;
difficult to have good intuition about how much things cost, so I&lt;br&gt;
usually multiply my guess by 5 or so (to be cloud native, haha).&lt;/p&gt;

&lt;p&gt;Anyway, let's move to the cost of calling a function.&lt;/p&gt;
&lt;h1&gt;
  
  
  Function calling
&lt;/h1&gt;

&lt;p&gt;Calling a function is quite simple. We need to prepare its parameters&lt;br&gt;
and then call it. This involves putting their values on the call stack&lt;br&gt;
(offset from SP) and then invoking CALL. The function itself uses those&lt;br&gt;
parameters and then invokes RET.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// tell the go compiler to not inline the function by doing go:noinline
// semantic comment
//go:noinline
func add(a, b int) (int, int) {
        return a + b, 1000
}
// go build main.go &amp;amp;&amp;amp; go tool objdump -s main.add main
MOVQ $0x0, 0x18(SP)
MOVQ $0x0, 0x20(SP)
MOVQ 0x8(SP), AX
ADDQ 0x10(SP), AX
MOVQ AX, 0x18(SP)
MOVQ $0x3e8, 0x20(SP)
RET

func main() {
        a, b := add(100, 200)
        add(a, b)
}

// go build main.go &amp;amp;&amp;amp; go tool objdump -s main.main main
MOVQ $0x64, 0(SP)
MOVQ $0xc8, 0x8(SP)
CALL main.add(SB)
MOVQ 0x10(SP), AX
MOVQ AX, 0x38(SP)
MOVQ 0x18(SP), AX
MOVQ AX, 0x30(SP)
MOVQ 0x38(SP), AX
MOVQ AX, 0x28(SP)
MOVQ 0x30(SP), AX
MOVQ AX, 0x20(SP)
MOVQ 0x28(SP), AX
MOVQ AX, 0(SP)
MOVQ 0x20(SP), AX
MOVQ AX, 0x8(SP)
CALL main.add(SB)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;So calling a function with two parameters that returns 2 values does&lt;br&gt;
at least 6 things (prepare the call stack, call, prepare the return&lt;br&gt;
values, return).&lt;/p&gt;
&lt;h1&gt;
  
  
  Inlining
&lt;/h1&gt;

&lt;p&gt;Because of the calling overhead, all compiled languages try to inline functions so they won't have to do the CALL, RET and stack preparation work.&lt;/p&gt;

&lt;p&gt;First let's discuss how the CPU executes code. In the Von Neumann&lt;br&gt;
architecture (which is what pretty much all modern computers use), code and data are in the same memory, so it is only in the eye of the beholder whether something is code or data, pretty much the same way as it is up to you whether something is the character 'a' or the integer 97, when 'a' is encoded as 01100001.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Von_Neumann_architecture"&gt;https://en.wikipedia.org/wiki/Von_Neumann_architecture&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Harvard_architecture"&gt;https://en.wikipedia.org/wiki/Harvard_architecture&lt;/a&gt; (alternative&lt;br&gt;
architecture that has separate memory for code and data)&lt;/p&gt;

&lt;p&gt;Let's create a super simple 8 bit CPU with two 1 byte general purpose&lt;br&gt;
registers, R0 and R1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1 MOV addr, r0   copy addr in r0
2 MOV addr, r1   copy addr in r1
3 MOV r0, addr   copy r0 in addr
4 MOV r1, addr   copy r1 in addr
5 MOV r0, $value store $value in r0
6 MOV r1, $value store $value in r1
7 ADD  adds r0 and r1 and stores value in r0
8 JMP  addr jump to given address
9 HAL  stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(from Richard Buckland's 4 bit CPU)&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=gTeDX4yAdyU"&gt;https://www.youtube.com/watch?v=gTeDX4yAdyU&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our example, we will set the CPU to execute from memory address 0,&lt;br&gt;
and an example program that counts forever would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0000: MOV r0, $0
0001: MOV r1, $1
0002: ADD
0003: MOV r0, 10
0004: JMP 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;now let's look at the memory layout byte by byte&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;addr:0  00000101 // mov r0, $0
addr:1  00000000 
addr:2  00000110 // mov r1, $1
addr:3  00000001 
addr:4  00000111 // add
addr:5  00000011 // mov r0, 10
addr:6  00001010
addr:7  00001000 // jmp 1
addr:8  00000001
addr:9  00000000 // nothing
addr:10 00000000 // result of mov r0, 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there is no difference between address 0 and address 1.&lt;br&gt;
The only thing stopping the cpu from executing addresses 8, 9, 10 is the&lt;br&gt;
JMP at address 7, which makes it jump back to the start of the loop.&lt;/p&gt;

&lt;p&gt;The way the cpu actually executes it is by using fetch-decode-execute&lt;br&gt;
cycle (&lt;a href="https://en.wikipedia.org/wiki/Instruction_cycle"&gt;https://en.wikipedia.org/wiki/Instruction_cycle&lt;/a&gt;)&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fetch Stage: The next instruction is fetched from the memory address
that is currently stored in the program counter and stored into the
instruction register. At the end of the fetch operation, the PC points
to the next instruction that will be read at the next cycle.

Decode Stage: During this stage, the encoded instruction presented in
the instruction register is interpreted by the decoder.

Execute Stage: The control unit of the CPU passes the decoded
information as a sequence of control signals to the relevant function
units of the CPU to perform the actions required by the instruction,
such as reading values from registers, passing them to the ALU to
perform mathematical or logic functions on them, and writing the
result back to a register. If the ALU is involved, it sends a
condition signal back to the CU. The result generated by the operation
is stored in the main memory or sent to an output device. Based on the
feedback from the ALU, the PC may be updated to a different address
from which the next instruction will be fetched.

Repeat Cycle
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Accessing ram is on the order of 100ns, so if we had to waste 100ns&lt;br&gt;
for every instruction the whole thing would be horrible; to mitigate&lt;br&gt;
this, CPUs have a hierarchy of caches (tlb[21], l1, l2, l3).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;L1 cache reference                           0.5 ns
Executing Instruction                        1   ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns 14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns 20x  L2 cache,
                                                    200x L1 cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="http://norvig.com/21-days.html#answers"&gt;http://norvig.com/21-days.html#answers&lt;/a&gt;&lt;br&gt;
&lt;a href="https://gist.github.com/jboner/2841832"&gt;https://gist.github.com/jboner/2841832&lt;/a&gt;&lt;br&gt;
&lt;a href="http://www.cim.mcgill.ca/%7Elanger/273/18-notes.pdf"&gt;http://www.cim.mcgill.ca/~langer/273/18-notes.pdf&lt;/a&gt;&lt;br&gt;
&lt;a href="https://people.freebsd.org/%7Elstewart/articles/cpumemory.pdf"&gt;https://people.freebsd.org/~lstewart/articles/cpumemory.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The l1 cache is usually split into an instruction cache and a data&lt;br&gt;
cache, and it is on the order of ~32kb per core (how this cache&lt;br&gt;
hierarchy works in modern multi-core CPUs is a topic of its own,&lt;br&gt;
but let's think as if we have one core only).&lt;/p&gt;

&lt;p&gt;There are other mitigations for how slow the fetch-execute cycle is,&lt;br&gt;
such as instruction pipelining, vectorization, out of order execution,&lt;br&gt;
etc. (you can check them out at the bottom of the instruction cycle&lt;br&gt;
wiki page &lt;a href="https://en.wikipedia.org/wiki/Instruction_cycle"&gt;https://en.wikipedia.org/wiki/Instruction_cycle&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Instruction_pipelining"&gt;https://en.wikipedia.org/wiki/Instruction_pipelining&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hazard_(computer_architecture)"&gt;https://en.wikipedia.org/wiki/Hazard_(computer_architecture)&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Out-of-order_execution"&gt;https://en.wikipedia.org/wiki/Out-of-order_execution&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Speculative_execution"&gt;https://en.wikipedia.org/wiki/Speculative_execution&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Modern CPUs are incredibly complicated systems, and the model I use,&lt;br&gt;
that in most cases (unless proven otherwise) "it does one thing at a&lt;br&gt;
time", works ok for back-of-the-napkin calculations. (e.g., some&lt;br&gt;
modern hashing algorithms that abuse vectorization can hash incredible&lt;br&gt;
amounts of data)&lt;/p&gt;

&lt;p&gt;So, inlining is a dance.&lt;/p&gt;

&lt;p&gt;I will refer to:&lt;br&gt;
    &lt;a href="http://www.cs.technion.ac.il/users/yechiel/c++-faq/inline-and-perf.html"&gt;http://www.cs.technion.ac.il/users/yechiel/c++-faq/inline-and-perf.html&lt;/a&gt;&lt;br&gt;
    [9.3] Do inline functions improve performance?&lt;br&gt;
    Yes and no. Sometimes. Maybe.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;There are no simple answers. Inline functions might make the code
faster, they might make it slower. They might make the executable
larger, they might make it smaller. They might cause thrashing,
they might prevent thrashing. And they might be, and often are,
totally irrelevant to speed.

inline functions might make it faster:
  As shown above, procedural integration might remove a bunch of
  unnecessary instructions, which might make things run faster.

inline functions might make it slower: 
  Too much inlining might cause code bloat, which might cause
  "thrashing" on demand-paged virtual-memory systems. In other
  words, if the executable size is too big, the system might spend
  most of its time going out to disk to fetch the next chunk of
  code.

inline functions might make it larger:
  This is the notion of code bloat, as described above. For
  example, if a system has 100 inline functions each of which
  expands to 100 bytes of executable code and is called in 100
  places, that's an increase of 1MB. Is that 1MB going to cause
  problems? Who knows, but it is possible that that last 1MB could
  cause the system to "thrash," and that could slow things down.

inline functions might make it smaller:
  The compiler often generates more code to push/pop
  registers/parameters than it would by inline-expanding the
  function's body. This happens with very small functions, and it
  also happens with large functions when the optimizer is able to
  remove a lot of redundant code through procedural integration --
  that is, when the optimizer is able to make the large function
  small.

inline functions might cause thrashing:
  Inlining might increase the size of the binary executable, and
  that might cause thrashing.

inline functions might prevent thrashing
  The working set size (number of pages that need to be in memory
  at once) might go down even if the executable size goes up. When
  f() calls g(), the code is often on two distinct pages; when the
  compiler procedurally integrates the code of g() into f(), the
  code is often on the same page.

inline functions might increase the number of cache misses: 
  Inlining might cause an inner loop to span across multiple lines
  of the memory cache, and that might cause thrashing of the
  memory-cache.

inline functions might decrease the number of cache misses:
  Inlining usually improves locality of reference within the
  binary code, which might decrease the number of cache lines
  needed to store the code of an inner loop. This ultimately could
  cause a CPU-bound application to run faster.

inline functions might be irrelevant to speed: 
  Most systems are not CPU-bound. Most systems are I/O-bound,
  database-bound or network-bound, meaning the bottleneck in the
  system's overall performance is the file system, the database or
  the network. Unless your "CPU meter" is pegged at 100%, inline
  functions probably won't make your system faster. (Even in
  CPU-bound systems, inline will help only when used within the
  bottleneck itself, and the bottleneck is typically in only a
  small percentage of the code.)

There are no simple answers: You have to play with it to see what
is best. Do not settle for simplistic answers like, "Never use
inline functions" or "Always use inline functions" or "Use inline
functions if and only if the function is less than N lines of
code." These one-size-fits-all rules may be easy to write down,
but they will produce sub-optimal results.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  Interfaces
&lt;/h1&gt;

&lt;p&gt;Let's say we have a common case where a struct gets an io.Writer&lt;br&gt;
interface to write into:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Database struct {
        writer io.Writer
}

// SetWriter swaps the underlying writer at runtime.
func (d *Database) SetWriter(f io.Writer) {
        d.writer = f
}
....
func (d *Database) writeBlob(b []byte) error {
        checksum := hash(b)
        _, err := d.writer.Write(checksum)
        if err != nil {
                return err
        }
        _, err = d.writer.Write(b)
        return err
}
....
&lt;/code&gt;&lt;/pre&gt;
....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now, because d.writer can change at runtime, the compiler can never&lt;br&gt;
know what is on the other end of d.writer, so it can never inline the&lt;br&gt;
call even if it wants to (you can imagine the actual os.File.Write is&lt;br&gt;
just doing the write syscall).&lt;/p&gt;

&lt;p&gt;Another issue is that the thing on the other end of d.writer could be&lt;br&gt;
a pointer, so it has to be checked on every call: if it is nil (0),&lt;br&gt;
we must panic accordingly.&lt;/p&gt;

&lt;p&gt;The way this works is pretty much:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if x == nil
   goto panic
... code

return x

panic:
help build a stacktrace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;(pseudo code) In the actual generated assembly this looks roughly like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;START:
  CMPQ 0x10(CX), SP
  JBE CALL_PANIC

  MOVQ 0x40(SP), AX
  TESTQ AX, AX
  JLE PANIC
  ... work work ...
TRACE:
  MOVQ CX, 0x50(SP)
  MOVQ 0x28(SP), BP
  ADDQ $0x30, SP
  RET
PANIC:
  XORL CX, CX // cx = 0
  JMP TRACE
CALL_PANIC:
  CALL runtime.morestack_noctxt(SB)
  JMP START
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means every call to writeBlob has to check whether&lt;br&gt;
d.writer is nil, because it can be and there is no way for the&lt;br&gt;
compiler to know at compile time, and then prepare the stack and call&lt;br&gt;
it.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Slow Is Dynamic Dispatch, Exactly?
&lt;/h1&gt;

&lt;p&gt;A more concrete example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

type Operation interface {
        Apply() int
}

type Number struct {
        n int
}

func (x Number) Apply() int {
        return x.n
}

type Add struct {
        Operations []Operation
}

func (x Add) Apply() int {
        r := 0
        for _, v := range x.Operations {
                r += v.Apply()
        }
        return r
}

type Sub struct {
        Operations []Operation
}

func (x Sub) Apply() int {
        r := 0
        for _, v := range x.Operations {
                r -= v.Apply()
        }
        return r
}

type AddCustom struct {
        Operations []Number
}

func (x AddCustom) Apply() int {
        r := 0
        for _, v := range x.Operations {
                r += v.Apply()
        }
        return r
}

func main() {
        n := 0
        op := Add{Operations: []Operation{Number{n: 5}, Number{n: 6}}}
        n += op.Apply()

        opc := AddCustom{Operations: []Number{Number{n: 5}, Number{n: 6}}}
        n += opc.Apply()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's look at main.Add.Apply and main.AddCustom.Apply:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// go build main.go &amp;amp;&amp;amp; go tool objdump main
TEXT main.Add.Apply(SB) main.go
MOVQ FS:0xfffffff8, CX
CMPQ 0x10(CX), SP
JBE 0x4526c7
SUBQ $0x30, SP
MOVQ BP, 0x28(SP)
LEAQ 0x28(SP), BP
MOVQ 0x40(SP), AX
TESTQ AX, AX
JLE 0x4526c3
MOVQ 0x38(SP), CX
XORL DX, DX
XORL BX, BX
JMP 0x452678
MOVQ 0x20(SP), SI
ADDQ $0x10, SI
MOVQ AX, DX
MOVQ CX, BX
MOVQ SI, CX
MOVQ CX, 0x20(SP)
MOVQ DX, 0x18(SP)
MOVQ BX, 0x10(SP)
MOVQ 0(CX), AX
MOVQ 0x8(CX), SI
MOVQ 0x18(AX), AX
MOVQ SI, 0(SP)
CALL AX
MOVQ 0x18(SP), AX
INCQ AX
MOVQ 0x10(SP), CX
ADDQ 0x8(SP), CX
MOVQ 0x40(SP), DX
CMPQ DX, AX
JL 0x452666
MOVQ CX, 0x50(SP)
MOVQ 0x28(SP), BP
ADDQ $0x30, SP
RET
XORL CX, CX
JMP 0x4526b4
CALL runtime.morestack_noctxt(SB)
JMP main.Add.Apply(SB)

TEXT main.AddCustom.Apply(SB) main.go
MOVQ 0x8(SP), AX
MOVQ 0x10(SP), CX
XORL DX, DX
XORL BX, BX
JMP 0x4526fa
MOVQ 0(AX)(DX*8), SI
INCQ DX
ADDQ SI, BX
CMPQ CX, DX
JL 0x4526f0
MOVQ BX, 0x20(SP)
RET
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Benchmark:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;goos: linux
goarch: amd64
BenchmarkInterface-8    171836106                6.81 ns/op
BenchmarkInline-8       424364508                2.70 ns/op
BenchmarkNoop-8         898746903                1.36 ns/op
PASS
ok      command-line-arguments  4.673s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  Optimization
&lt;/h1&gt;

&lt;p&gt;You know the golden rule: always measure first.&lt;/p&gt;

&lt;p&gt;When I wrote &lt;a href="https://github.com/rekki/go-query"&gt;https://github.com/rekki/go-query&lt;/a&gt; I thought the slowest&lt;br&gt;
part was the binary search. I thought: well, the loop can't be&lt;br&gt;
unrolled, and the branchy algorithm is too hard on the&lt;br&gt;
branch predictor, so it is probably the slowest part; I will just&lt;br&gt;
optimize it. And I did.&lt;/p&gt;

&lt;p&gt;Total waste of time: the overhead of the interface dispatch was 70% of&lt;br&gt;
the time. Not only did I waste my time, but I also introduced a super&lt;br&gt;
shitty bug that 90% test coverage did not catch; luckily it did not&lt;br&gt;
end up in production (by sheer luck!).&lt;/p&gt;

&lt;p&gt;Anyway, this is a reminder to myself: always measure.&lt;/p&gt;

&lt;p&gt;Oh, BTW: of course, this whole thing almost never hits you; ~10 extra&lt;br&gt;
instructions per call never matter. (almost)&lt;/p&gt;

&lt;p&gt;How to measure, though? What is executed in a benchmark is very&lt;br&gt;
likely not the thing being executed in production; for example, I&lt;br&gt;
could have a benchmark that is super fast and nice, but when the code&lt;br&gt;
runs in prod, the thing gets inlined and starts page&lt;br&gt;
thrashing.&lt;/p&gt;

&lt;p&gt;There is a whole spectrum of tools to use, from statistics-powered&lt;br&gt;
micro-benchmark tools to flamegraph[19] visualizations, and the whole&lt;br&gt;
topic is a post of its own, but I believe anything you do to measure&lt;br&gt;
will be better than intuition.&lt;/p&gt;

&lt;p&gt;Luckily Go's profiling tools are incredible[20] and you can get good&lt;br&gt;
insights very quickly.&lt;/p&gt;

&lt;p&gt;BTW, I did have a profile for go-query before I went into optimizing,&lt;br&gt;
and I saw that the cost was in the Next() call, but I did not dig&lt;br&gt;
deeper into it and decided to start with optimizing the binary&lt;br&gt;
search instead of looking carefully at the data. As the poet says:&lt;br&gt;
"everywhere you look, you see what you are looking for".&lt;/p&gt;

&lt;p&gt;--&lt;br&gt;
Fri  7 Feb 18:08:42 CET 2020,&lt;br&gt;
Borislav Nikolov (&lt;a href="https://github.com/jackdoe"&gt;https://github.com/jackdoe&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;References:&lt;br&gt;
[1] &lt;a href="https://github.com/rekki/go-query"&gt;https://github.com/rekki/go-query&lt;/a&gt;&lt;br&gt;
   Low level full text search library.&lt;/p&gt;

&lt;p&gt;[2] &lt;a href="https://youtu.be/o4-CwDo2zpg?t=354"&gt;https://youtu.be/o4-CwDo2zpg?t=354&lt;/a&gt;&lt;br&gt;
   Fastware - Andrei Alexandrescu&lt;/p&gt;

&lt;p&gt;[3] &lt;a href="https://www.youtube.com/watch?v=Qq_WaiwzOtI"&gt;https://www.youtube.com/watch?v=Qq_WaiwzOtI&lt;/a&gt;&lt;br&gt;
   CppCon 2014: Andrei Alexandrescu "Optimization Tips - Mo' Hustle&lt;br&gt;
   Mo' Problems"&lt;/p&gt;

&lt;p&gt;[4] &lt;a href="https://www.youtube.com/watch?v=FJJTYQYB1JQ"&gt;https://www.youtube.com/watch?v=FJJTYQYB1JQ&lt;/a&gt;&lt;br&gt;
   CppCon 2019: Andrei Alexandrescu "Speed Is Found In The Minds of&lt;br&gt;
   People"&lt;/p&gt;

&lt;p&gt;[5] &lt;a href="https://en.wikipedia.org/wiki/Arithmetic_logic_unit"&gt;https://en.wikipedia.org/wiki/Arithmetic_logic_unit&lt;/a&gt;&lt;br&gt;
[6] &lt;a href="https://en.wikipedia.org/wiki/Von_Neumann_architecture"&gt;https://en.wikipedia.org/wiki/Von_Neumann_architecture&lt;/a&gt;&lt;br&gt;
[7] &lt;a href="https://en.wikipedia.org/wiki/Harvard_architecture"&gt;https://en.wikipedia.org/wiki/Harvard_architecture&lt;/a&gt;&lt;br&gt;
[8] &lt;a href="https://en.wikipedia.org/wiki/Instruction_cycle"&gt;https://en.wikipedia.org/wiki/Instruction_cycle&lt;/a&gt;&lt;br&gt;
[9] &lt;a href="https://en.wikipedia.org/wiki/Instruction_pipelining"&gt;https://en.wikipedia.org/wiki/Instruction_pipelining&lt;/a&gt;&lt;br&gt;
[10] &lt;a href="https://en.wikipedia.org/wiki/Hazard_(computer_architecture)"&gt;https://en.wikipedia.org/wiki/Hazard_(computer_architecture)&lt;/a&gt;&lt;br&gt;
[11] &lt;a href="https://en.wikipedia.org/wiki/Out-of-order_execution"&gt;https://en.wikipedia.org/wiki/Out-of-order_execution&lt;/a&gt;&lt;br&gt;
[12] &lt;a href="https://en.wikipedia.org/wiki/Speculative_execution"&gt;https://en.wikipedia.org/wiki/Speculative_execution&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[13] &lt;a href="https://www.youtube.com/watch?v=gTeDX4yAdyU"&gt;https://www.youtube.com/watch?v=gTeDX4yAdyU&lt;/a&gt;&lt;br&gt;
   Lecture 3: Machine Code - Richard Buckland UNSW&lt;/p&gt;

&lt;p&gt;[14] &lt;a href="http://norvig.com/21-days.html#answers"&gt;http://norvig.com/21-days.html#answers&lt;/a&gt;&lt;br&gt;
   Basic numbers for back of the napkin calculations&lt;/p&gt;

&lt;p&gt;[15] &lt;a href="https://gist.github.com/jboner/2841832"&gt;https://gist.github.com/jboner/2841832&lt;/a&gt;&lt;br&gt;
   Basic numbers for back of the napkin calculations&lt;/p&gt;

&lt;p&gt;[16] &lt;a href="http://www.cim.mcgill.ca/%7Elanger/273/18-notes.pdf"&gt;http://www.cim.mcgill.ca/~langer/273/18-notes.pdf&lt;/a&gt;&lt;br&gt;
   TLB misses&lt;/p&gt;

&lt;p&gt;[17] &lt;a href="https://people.freebsd.org/%7Elstewart/articles/cpumemory.pdf"&gt;https://people.freebsd.org/~lstewart/articles/cpumemory.pdf&lt;/a&gt;&lt;br&gt;
   What Every Programmer Should Know About Memory&lt;/p&gt;

&lt;p&gt;[18] &lt;a href="http://www.cs.technion.ac.il/users/yechiel/c++-faq/inline-and-perf.html"&gt;http://www.cs.technion.ac.il/users/yechiel/c++-faq/inline-and-perf.html&lt;/a&gt;&lt;br&gt;
   [9.3] Do inline functions improve performance?&lt;/p&gt;

&lt;p&gt;[19] &lt;a href="http://www.brendangregg.com/flamegraphs.html"&gt;http://www.brendangregg.com/flamegraphs.html&lt;/a&gt;&lt;br&gt;
   Flamegraphs&lt;/p&gt;

&lt;p&gt;[20] &lt;a href="https://blog.golang.org/profiling-go-programs"&gt;https://blog.golang.org/profiling-go-programs&lt;/a&gt;&lt;br&gt;
   Profiling Go Programs&lt;/p&gt;

&lt;p&gt;[21] &lt;a href="https://en.wikipedia.org/wiki/Translation_lookaside_buffer"&gt;https://en.wikipedia.org/wiki/Translation_lookaside_buffer&lt;/a&gt;&lt;br&gt;
   Translation lookaside buffer&lt;/p&gt;

&lt;p&gt;[22] &lt;a href="https://godbolt.org"&gt;https://godbolt.org&lt;/a&gt;&lt;br&gt;
   Compiler Explorer is an interactive online compiler which shows the&lt;br&gt;
   assembly output of compiled C++, Rust, Go (and many more) code.&lt;/p&gt;

</description>
      <category>performance</category>
    </item>
  </channel>
</rss>
