<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Erik Riedel, PhD</title>
    <description>The latest articles on Forem by Erik Riedel, PhD (@riedelatwork).</description>
    <link>https://forem.com/riedelatwork</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F360131%2F13ad25ef-0e56-43a8-89ad-bbf92c3c55ba.jpg</url>
      <title>Forem: Erik Riedel, PhD</title>
      <link>https://forem.com/riedelatwork</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/riedelatwork"/>
    <language>en</language>
    <item>
      <title>discovering storage with k8s via rancher</title>
      <dc:creator>Erik Riedel, PhD</dc:creator>
      <pubDate>Wed, 13 May 2020 22:05:39 +0000</pubDate>
      <link>https://forem.com/riedelatwork/discovering-storage-with-k8s-via-rancher-2l73</link>
      <guid>https://forem.com/riedelatwork/discovering-storage-with-k8s-via-rancher-2l73</guid>
      <description>&lt;p&gt;as part of my presentation "From Servers to Serverless in Ten Minutes" (&lt;a href="https://noti.st/er1p/PcTlyj/from-servers-to-serverless"&gt;slides&lt;/a&gt;) presented during the &lt;a href="https://www.opencompute.org/summit/global-summit"&gt;OCP Virtual Summit&lt;/a&gt; on 12 May 2020, I promised to describe our storage setup.&lt;/p&gt;

&lt;p&gt;we had two system setups, as discussed in the talk:&lt;/p&gt;

&lt;h2&gt;deskside - Sesame Discovery Fast-Start&lt;/h2&gt;

&lt;p&gt;our deskside cluster using the Sesame Discovery Fast-Start unit consists of four nodes:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;four  - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
three - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
two   - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
one   - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;where each node was configured with four individual NVMe drives:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;disk    WDS250G3X0C /dev/nvme0n1 (WD Black SN750 256GB NVMe flash)
disk    WDS250G3X0C /dev/nvme1n1 (WD Black SN750 256GB NVMe flash)
disk    WDS250G3X0C /dev/nvme2n1 (WD Black SN750 256GB NVMe flash)
disk    WDS250G3X0C /dev/nvme3n1 (WD Black SN750 256GB NVMe flash)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;so a total of 4TB of NVMe flash capacity across the 4-node cluster.&lt;/p&gt;

&lt;p&gt;a detailed view of the Discovery hardware is shown in this short (2 min) video: &lt;iframe width="710" height="399" src="https://www.youtube.com/embed/HFg4jIJU_G4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;OpenEBS for Kubernetes&lt;/h2&gt;

&lt;p&gt;we deployed a Rancher RKE environment using the bootstrap methods outlined in the talk slides, then installed OpenEBS with a simple helm install, following the instructions at:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.openebs.io/docs/next/installation.html"&gt;https://docs.openebs.io/docs/next/installation.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;we used both command-line &lt;em&gt;kubectl&lt;/em&gt; and the graphical interface, and the setup experience was straightforward with either method: less than 20 minutes from click to ready.&lt;/p&gt;
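
&lt;p&gt;a quick way to confirm the install end-to-end is to create a small volume claim. this is just a sketch, assuming the default &lt;em&gt;openebs-hostpath&lt;/em&gt; StorageClass that a standard install creates (names can differ by chart version):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# demo-pvc.yaml - apply with: kubectl apply -f demo-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;kubectl get pvc demo-pvc&lt;/em&gt; then shows the claim; note that local hostpath provisioning typically uses WaitForFirstConsumer binding, so it moves to Bound once a pod mounts it.&lt;/p&gt;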

&lt;h2&gt;rack-level - Sesame for Open Systems&lt;/h2&gt;

&lt;p&gt;for our rack-level deployment, we have four nodes using JBOD HDD storage:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nlou14 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 864TB HDD
    36   Vendor: ATA      Model: HGST HUH721212AL Rev: W3D0  12TB

nlou12 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 864TB HDD
    36   Vendor: ATA      Model: HGST HUH721212AL Rev: W3D0  12TB

nrou14 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 864TB HDD
    36   Vendor: ATA      Model: HGST HUH721212AL Rev: W3D0  12TB

nrou12 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 864TB HDD
    36   Vendor: ATA      Model: HGST HUH721212AL Rev: W3D0  12TB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and two nodes using NVMe flash storage:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nlou17 - Leopard ORv2-DDR4, dual CPU E5-2680 v4 @ 2.40GHz, 256GB memory, 15TB NVMe
    4    nvme   WUS4BB038D4M9E4     3.84  TB   (WD SN640 3.84TB)

nlou15 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 15TB NVMe
    4    nvme   WUS4BB038D4M9E4     3.84  TB   (WD SN640 3.84TB)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;this means that our cluster was able to expose a total of 1.7 PB of HDD capacity and 30TB of NVMe flash capacity to the Kubernetes workloads.&lt;/p&gt;

&lt;p&gt;a typical cluster setup from our customers might consist of four or five JBOD nodes (up to 4.3 PB of total HDD storage) and a half-dozen flash nodes (90TB of NVMe flash) to support up to 18 compute nodes (432 cores, 9.2 TB of memory), all connected with dual 25G ethernet via our top-of-rack 32-port 100G switch.&lt;/p&gt;
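
&lt;p&gt;as a quick sanity check on the capacity math, a throwaway script that just re-does the arithmetic from the node listings above:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# rack-level capacity tally, from the node listings above
hdd_nodes = 4                    # nlou14, nlou12, nrou14, nrou12
hdd_per_node_tb = 36 * 12        # 36 x 12TB HGST drives = 432TB per node
flash_nodes = 2                  # nlou17, nlou15
flash_per_node_tb = 4 * 3.84     # 4 x 3.84TB WD SN640 = 15.36TB per node

hdd_total_pb = hdd_nodes * hdd_per_node_tb / 1000   # 1.728, i.e. ~1.7 PB
flash_total_tb = flash_nodes * flash_per_node_tb    # 30.72, i.e. ~30 TB
print(hdd_total_pb, flash_total_tb)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;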

&lt;p&gt;this configuration in a Sesame rack brings balanced storage, compute, and networking in a cost-effective solution.&lt;/p&gt;

&lt;p&gt;a detailed view of our rack-scale solutions can be seen in this short (4 min) video: &lt;iframe width="710" height="399" src="https://www.youtube.com/embed/QLjSeMt99gc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;full details of these offerings, as well as contact info, can be found at our website &lt;a href="https://sesame.com"&gt;sesame.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;comments and questions welcome - thanks for reading!&lt;/p&gt;

&lt;p&gt;in addition to OpenEBS, we have also tested the Ceph and OpenIO software-defined storage solutions on the same hardware nodes - more on those experiences in our next post!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>hardware</category>
      <category>infrastructure</category>
    </item>
  </channel>
</rss>
