<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nikhil Malik</title>
    <description>The latest articles on Forem by Nikhil Malik (@nikhilmalik).</description>
    <link>https://forem.com/nikhilmalik</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1649215%2F4ba081f4-3d78-4331-b1c9-0472cc477934.png</url>
      <title>Forem: Nikhil Malik</title>
      <link>https://forem.com/nikhilmalik</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/nikhilmalik"/>
    <language>en</language>
    <item>
      <title>L4-L7 Performance: Comparing LoxiLB, MetalLB, NGINX, HAProxy</title>
      <dc:creator>Nikhil Malik</dc:creator>
      <pubDate>Wed, 04 Dec 2024 06:47:43 +0000</pubDate>
      <link>https://forem.com/nikhilmalik/l4-l7-performance-comparing-loxilb-metallb-nginx-haproxy-1eh0</link>
      <guid>https://forem.com/nikhilmalik/l4-l7-performance-comparing-loxilb-metallb-nginx-haproxy-1eh0</guid>
      <description>&lt;p&gt;As Kubernetes continues to dominate the cloud-native ecosystem, the need for high-performance, scalable, and efficient networking solutions has become paramount. This blog compares LoxiLB with MetalLB as Kubernetes service load balancers and pits LoxiLB against NGINX and HAProxy for Kubernetes ingress. These comparisons mainly focus on performance for modern cloud-native workloads.&lt;/p&gt;

&lt;h1&gt;
  
  
  Comparing Kubernetes Service Load Balancers
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Before we dig into the numbers, let me give our readers a short introduction to LoxiLB: a high-performance, cloud-native load balancer built for Kubernetes. LoxiLB is optimized for modern workloads, with advanced features like eBPF acceleration, Proxy Protocol support, and multi-cluster networking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvks2yr5rpsgu3qwuqgtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvks2yr5rpsgu3qwuqgtt.png" alt="Image description" width="532" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, MetalLB is an open-source load-balancer controller which uses iptables/IPVS as its datapath. It is designed specifically for Kubernetes clusters running on bare-metal environments and implements Layer 2 (ARP/NDP) and Layer 3 (BGP) modes for IP address management.&lt;/p&gt;
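For context, a minimal MetalLB Layer 2 configuration consists of an IPAddressPool and an L2Advertisement (the resource names and address range below are illustrative, not taken from our test setup):

```yaml
# Hypothetical MetalLB L2 configuration (CRD-based config, MetalLB v0.13+)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool              # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.90.240-192.168.90.250   # illustrative VIP range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: demo-l2                # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
    - demo-pool
```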

&lt;p&gt;&lt;strong&gt;System Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;k3s version v1.30.6+k3s1 (1829eaae)&lt;/li&gt;
&lt;li&gt;CNI: Flannel&lt;/li&gt;
&lt;li&gt;3x Master: 4 vCPU, 4 GB RAM&lt;/li&gt;
&lt;li&gt;3x Worker: 4 vCPU, 4 GB RAM&lt;/li&gt;
&lt;li&gt;1x Client: 8 vCPU, 4 GB RAM&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional Performance Tuning
&lt;/h2&gt;

&lt;p&gt;Below are the common additional optimization options used for all the solutions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the Max backlog
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sysctl net.core.netdev_max_backlog=10000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Enable multiple queues and configure MTU.
We used Vagrant with libvirt. For better performance, it is recommended to set the number of driver queues equal to the number of CPUs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;config.vm.define "master1" do |master|
    master.vm.hostname = 'master1'
    master.vm.network :private_network, ip: "192.168.90.10", :netmask =&amp;gt; "255.255.255.0", :libvirt__driver_queues =&amp;gt; 4, :libvirt__mtu =&amp;gt; 9000
    master.vm.network :private_network, ip: "192.168.80.10", :netmask =&amp;gt; "255.255.255.0", :libvirt__driver_queues =&amp;gt; 4, :libvirt__mtu =&amp;gt; 9000
    master.vm.provision :shell, :path =&amp;gt; "master1.sh"
    master.vm.provider :libvirt do |vbox|
        vbox.memory = 4000
        vbox.cpus = 4
    end
  end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more information about libvirt &lt;a href="https://vagrant-libvirt.github.io/vagrant-libvirt/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disable TX XPS (needed only for LoxiLB).
Configure this setting on all the nodes where LoxiLB is running.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for ((i=0;i&amp;lt;7;i++))
do
echo 00 &amp;gt; /sys/class/net/enp1s0/queues/tx-$i/xps_cpus
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Performance Metrics
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;LoxiLB&lt;/th&gt;
&lt;th&gt;MetalLB&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High (eBPF-based)&lt;/td&gt;
&lt;td&gt;Moderate (IPTables)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Higher under load&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Connection Handling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Scales to millions&lt;/td&gt;
&lt;td&gt;Limited by IPTables&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Resource Usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Efficient (eBPF)&lt;/td&gt;
&lt;td&gt;CPU-intensive&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;There are a few key differences between LoxiLB and MetalLB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; LoxiLB uses eBPF for packet processing, providing near-kernel speed and minimal CPU overhead, whereas MetalLB relies on traditional iptables/IPVS for packet forwarding, leading to higher latency and limited scalability in high-throughput environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; LoxiLB can handle significantly more connections and workloads due to its optimized architecture, whereas MetalLB struggles in high-scale environments, especially under heavy network loads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Set:&lt;/strong&gt; LoxiLB supports advanced features like direct server return (DSR), Proxy Protocol, and observability for debugging network flows, whereas MetalLB provides basic load-balancing capabilities, primarily for simple Layer 2 or Layer 3 setups.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We previously benchmarked &lt;a href="https://www.loxilb.io/post/running-loxilb-on-aws-graviton2-based-ec2-instance" rel="noopener noreferrer"&gt;LoxiLB with IPVS&lt;/a&gt; in an AWS Graviton2 environment, but this blog covers the comparison in a Kubernetes environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Tests
&lt;/h2&gt;

&lt;p&gt;We benchmarked LoxiLB's performance as a Kubernetes load balancer using popular open-source tools such as iperf and go-wrk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Throughput
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zdkjq45ihczza4fhl8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zdkjq45ihczza4fhl8m.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We created an iperf service and ran an iperf client in a separate VM outside the cluster. Traffic originated from the client, hit the load balancer, reached a NodePort, and was then redirected to the workload. The result depends on which cluster node hosts the service endpoint and where the selected workload is scheduled: the same node or a different one. Throughput is naturally higher when the service and the workload are hosted on the same node, but in both cases LoxiLB delivered better throughput.&lt;/p&gt;
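As a sketch of what such a test service can look like (the manifest below is illustrative, not the exact one we used), an iperf server can be exposed through LoxiLB like this:

```yaml
# Hypothetical LoadBalancer service fronting iperf server pods
apiVersion: v1
kind: Service
metadata:
  name: iperf-service          # illustrative name
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb   # hands the service to LoxiLB
  selector:
    app: iperf                 # illustrative pod label
  ports:
    - port: 5201               # default iperf3 port
      targetPort: 5201
      protocol: TCP
```

The client VM then drives traffic against the allocated external IP, e.g. `iperf3 -c <external-ip>` (the exact client flags used in the test are not shown here).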

&lt;h3&gt;
  
  
  Requests Per Second
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfawwlato7m8uouqyvnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfawwlato7m8uouqyvnx.png" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We created another service backed by an nginx DaemonSet and ran the go-wrk client in a separate VM outside the cluster. The traffic flow originating from the client was the same as in the throughput test.&lt;/p&gt;
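A sketch of the backend used in this kind of RPS test (names and the image tag are illustrative, not the exact manifest we used):

```yaml
# Hypothetical nginx DaemonSet backing the RPS test service
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-backend          # illustrative name
spec:
  selector:
    matchLabels:
      app: nginx-backend
  template:
    metadata:
      labels:
        app: nginx-backend
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

The go-wrk client in the external VM then drives load against the service VIP, for example `go-wrk -c 100 -d 30 http://<service-vip>/` (connection count and duration here are illustrative).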

&lt;h1&gt;
  
  
  Comparing Kubernetes Ingress
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;NGINX:&lt;/em&gt; A widely used ingress controller with rich Layer 7 features such as SSL termination, HTTP routing, and caching.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;HAProxy:&lt;/em&gt; Known for its robust load balancing and performance, HAProxy provides fine-grained control over Layer 4 and Layer 7 traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;LoxiLB:&lt;/em&gt; Combines Layer 4 and Layer 7 capabilities with the added advantage of eBPF-based performance and Kubernetes-native integration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Performance Metrics
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;LoxiLB&lt;/th&gt;
&lt;th&gt;NGINX&lt;/th&gt;
&lt;th&gt;HAProxy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Throughput&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSL Termination&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Connection Handling&lt;/td&gt;
&lt;td&gt;Scales to millions&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Key differences between LoxiLB, NGINX, and HAProxy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; LoxiLB offers high throughput and low latency, especially under high-load conditions. HAProxy performs well in high-throughput environments but consumes more resources. NGINX, while feature-rich, often lags behind LoxiLB and HAProxy in raw performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; LoxiLB scales seamlessly for modern, containerized workloads with support for millions of connections. HAProxy scales well but can require additional tuning for Kubernetes-specific deployments. NGINX, being less optimized for extreme scale, may require more resources and configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Set:&lt;/strong&gt; NGINX excels in advanced HTTP-based routing, caching, and SSL management. HAProxy provides robust Layer 4 and Layer 7 capabilities but is less Kubernetes-native. LoxiLB integrates Layer 7 features while maintaining high performance, making it a balanced choice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes-Native Design:&lt;/strong&gt; LoxiLB is purpose-built for Kubernetes, offering tighter integration with cluster networking and service discovery. NGINX and HAProxy, on the other hand, while Kubernetes-compatible, are not specifically designed for cloud-native environments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Performance Tests
&lt;/h2&gt;

&lt;p&gt;We benchmarked the &lt;a href="https://docs.loxilb.io/latest/loxilb-ingress/" rel="noopener noreferrer"&gt;LoxiLB Ingress&lt;/a&gt; solution against NGINX and HAProxy using go-wrk. The RPS and latency tests were run with different variations.&lt;/p&gt;
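For reference, a minimal Ingress resource routed through an ingress controller like LoxiLB Ingress might look like the following. The host, backend service name, and the ingressClassName value are assumptions for illustration; check the LoxiLB Ingress docs for the exact class name:

```yaml
# Hypothetical Ingress routed via the LoxiLB ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress            # illustrative name
spec:
  ingressClassName: loxilb      # assumed class name; verify against the docs
  rules:
    - host: test.example.com    # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-backend   # illustrative backend service
                port:
                  number: 80
```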

&lt;h3&gt;
  
  
  Requests per Second
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kh3g54retkjv7jkguh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kh3g54retkjv7jkguh9.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y7geatb3t5n4vmbrfav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y7geatb3t5n4vmbrfav.png" alt="Image description" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LoxiLB also supports an &lt;code&gt;IPVS-compatibility&lt;/code&gt; mode, in which it will &lt;code&gt;eBPFy&lt;/code&gt; all the services managed by IPVS. In simpler words, if you have a cluster running Flannel with IPVS, then running LoxiLB with &lt;code&gt;--ipvs-compat&lt;/code&gt; is going to improve the performance of your entire cluster. You can check out the details in this &lt;a href="https://www.loxilb.io/post/loxilb-cluster-networking-elevating-k8s-networking-capabilities" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/p&gt;
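Based on the docker invocation pattern used for LoxiLB in our other deployments, enabling this mode could look like the sketch below; the exact flag placement is an assumption, so consult the linked blog for the authoritative usage:

```shell
# Hypothetical: run LoxiLB with IPVS-compatibility mode enabled
# (--ipvs-compat appended to the container's command line; placement assumed)
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \
  --privileged -dit -v /dev/log:/dev/log --net=host \
  --name loxilb ghcr.io/loxilb-io/loxilb:latest --ipvs-compat
```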

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;When evaluating solutions for Kubernetes networking, the choice depends on your specific workload and scalability requirements. LoxiLB consistently outperforms its peers in terms of raw performance and scalability, making it a strong candidate for modern, high-throughput environments. However, for traditional use cases with a focus on Layer 7 features, NGINX and HAProxy remain solid options. For simpler setups, MetalLB can suffice but may not scale to meet future demands.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: The author is one of the maintainers of the LoxiLB project and is currently working on it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>kubernetes</category>
      <category>performance</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>5G Service Communication Proxy with LoxiLB</title>
      <dc:creator>Nikhil Malik</dc:creator>
      <pubDate>Thu, 20 Jun 2024 05:36:36 +0000</pubDate>
      <link>https://forem.com/nikhilmalik/5g-service-communication-proxy-with-loxilb-4242</link>
      <guid>https://forem.com/nikhilmalik/5g-service-communication-proxy-with-loxilb-4242</guid>
      <description>&lt;p&gt;Before we start our blog post, let's understand what a Service Communication Proxy is and why we need it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Service Communication Proxy?
&lt;/h2&gt;

&lt;p&gt;In general terms, a Service Communication Proxy is a component that facilitates communication between different services within a distributed system or microservices architecture. Similarly, a 5G Core Service Communication Proxy is a specialized component within the 5G core network architecture that manages and facilitates communication between different network functions (NFs). It is integral to ensuring efficient, secure, and reliable interactions within the 5G core network.&lt;/p&gt;

&lt;p&gt;The 5G core network is designed around a service-based architecture (SBA), where network functions (such as the Access and Mobility Management Function (AMF), Session Management Function (SMF), and Policy Control Function (PCF)) communicate with each other using standard web-based protocols (e.g., HTTP/2, RESTful APIs). The SCP acts as an intermediary that provides several key services to facilitate and enhance this communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need SCP?
&lt;/h2&gt;

&lt;p&gt;The Service Communication Proxy is one of the most important components of the 3GPP Service-Based Architecture (SBA) for 5G core networks. The concept of the SCP is not entirely new: similar functionality is provided by the Signaling Transfer Point (STP), the central signaling router in 2G/3G, and by the Diameter Routing Agent (DRA) in 4G.&lt;br&gt;
The 5G SCP performs multiple key functions and offers benefits such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Routing, load balancing and distribution&lt;/li&gt;
&lt;li&gt;Enhanced Security&lt;/li&gt;
&lt;li&gt;Cloud-Native nature - Easy to deploy&lt;/li&gt;
&lt;li&gt;5G Service Detection and Discovery&lt;/li&gt;
&lt;li&gt;Load Detection and Auto-Scaling&lt;/li&gt;
&lt;li&gt;Reduced complexity&lt;/li&gt;
&lt;li&gt;Better observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbe5q42wz92rcgb766zp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbe5q42wz92rcgb766zp.gif" alt="SCP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  SCP with LoxiLB
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;Earlier, many users tried to deploy Open5gs and exposed the services externally not through a load balancer but through NodePort. Using NodePort is fine for testing purposes, but it is never used in a production environment. Moreover, 3GPP introduced the Service-Based Architecture for 5G and, to further the idea, introduced the concept of the &lt;a href="https://www.etsi.org/deliver/etsi_ts/129500_129599/129500/16.04.00_60/ts_129500v160400p.pdf" rel="noopener noreferrer"&gt;SCP&lt;/a&gt;. In simple words, the most basic element of the SCP is load balancing, which cannot be accomplished with NodePort. Now, there are a few load balancers available that can solve this problem, but there are also a few areas they don't particularly address, e.g. the flexibility to run in any environment, be it on-prem or public cloud; hitless failover; auto-scaling; L7 load balancing for 5G interfaces; SCTP multi-homing; etc. This is where LoxiLB comes into the picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Let me start with a basic introduction to &lt;a href="https://github.com/loxilb-io/loxilb" rel="noopener noreferrer"&gt;LoxiLB&lt;/a&gt;: it is an open-source, cloud-native load balancer, written in Go, that uses eBPF technology for its core engine and is primarily designed to tackle independent workloads or microservices.&lt;/p&gt;

&lt;p&gt;For more information about LoxiLB, please follow &lt;a href="https://www.loxilb.io/" rel="noopener noreferrer"&gt;this&lt;/a&gt;. There are a few 5G-related blogs already published where other users have applied LoxiLB to the N2 interface. You can read them &lt;a href="https://futuredon.medium.com/5g-sctp-loadbalancer-using-loxilb-b525198a9103" rel="noopener noreferrer"&gt;here&lt;/a&gt; and &lt;a href="https://medium.com/@ben0978327139/5g-sctp-loadbalancer-using-loxilb-applying-on-free5gc-b5c05bb723f0" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this blog post, we are going to discuss how we deployed LoxiLB as an SCP with a popular open-source 5G core, &lt;a href="https://github.com/open5gs/open5gs" rel="noopener noreferrer"&gt;Open5GS&lt;/a&gt;, in a Kubernetes environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xg2ryzcqh7lxepev209.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xg2ryzcqh7lxepev209.gif" alt="5G SCP with LoxiLB"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are going to have a setup of six nodes in total, where one node each is dedicated to the UE, the UPF, and LoxiLB, and the remaining three nodes form a Kubernetes cluster hosting the Open5gs core components. LoxiLB can run in in-cluster mode as well as outside the cluster. For this blog, we are running LoxiLB outside the cluster in a separate VM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare the Kubernetes cluster
&lt;/h3&gt;

&lt;p&gt;We are assuming that the user has already set up a Kubernetes cluster. If not, there are plenty of LoxiLB quick-start &lt;a href="https://github.com/loxilb-io/loxilb#getting-started-with-different-k8s-distributionstools" rel="noopener noreferrer"&gt;guides&lt;/a&gt; to help you get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare LoxiLB Instance
&lt;/h3&gt;

&lt;p&gt;Once the Kubernetes cluster is ready, we can deploy LoxiLB. To avoid a single point of failure, there are plenty of ways to deploy LoxiLB with high availability. Please refer to &lt;a href="https://github.com/loxilb-io/loxilbdocs/blob/main/docs/ha-deploy.md" rel="noopener noreferrer"&gt;this&lt;/a&gt; to learn about some of the common ways. For this blog, we will keep things simple and use a single LoxiLB instance.&lt;br&gt;
Once the node instance is up and running, follow the steps below to start the LoxiLB docker container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ apt-get update
$ apt-get install -y software-properties-common

#Install Docker
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu  $(lsb_release -cs)  stable"
$ apt-get update
$ apt-get install -y docker-ce

#Run LoxiLB docker container
$ docker run -u root --cap-add SYS_ADMIN   --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=host --name loxilb ghcr.io/loxilb-io/loxilb:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Deploy kube-loxilb
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/loxilb-io/kube-loxilb" rel="noopener noreferrer"&gt;kube-loxilb&lt;/a&gt; is used to deploy LoxiLB with Kubernetes.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;kube-loxilb.yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

       args:
            - --loxiURL=http://172.17.0.2:11111
            - --externalCIDR=17.17.10.0/24
            - --setLBMode=2



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A description of these options follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;loxiURL:&lt;/strong&gt; LoxiLB API server address. kube-loxilb uses this URL to communicate with LoxiLB. The IP must be one that kube-loxilb can reach (e.g. the private IP of the LoxiLB node).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;externalCIDR:&lt;/strong&gt; The VIP CIDR from which an external IP is allocated to the LB rule when a LoadBalancer service is created. In this document, we will specify a private IP range.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;setLBMode:&lt;/strong&gt; Specifies the NAT mode of the load balancer. Currently, there are three modes supported (0=default, 1=oneArm, 2=fullNAT), and we will use mode 2 (fullNAT) for this deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the topology, the LoxiLB node's private IP is 192.168.80.9. So, values are changed to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

        args:
        - --loxiURL=http://192.168.80.9:11111
        - --externalCIDR=123.123.123.0/24
        - --setLBMode=2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After modifying the options, use kubectl to deploy kube-loxilb.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl apply -f kube-loxilb.yaml
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When the deployment is complete, you can verify that the Deployment has been created in the kube-system namespace of k8s with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl -n kube-system get deployment
NAME                        READY       UP-TO-DATE      AVAILABLE       AGE
calico-kube-controllers     1/1         1               1               18d
coredns                     2/2         2               2               18d
kube-loxilb                 1/1         1               1               18d
metrics-server              1/1         1               1               18d


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Deploy UPF
&lt;/h3&gt;

&lt;p&gt;Now, let's install the Open5gs UPF on the UPF node.&lt;br&gt;
Log in to the UPF node and install MongoDB first. Import the key for installation.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo apt update
$ sudo apt install gnupg
$ curl -fsSL https://pgp.mongodb.com/server-6.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg --dearmor
$ echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list

#Install mongodb
$ sudo apt update
$ sudo apt install -y mongodb-org
$ sudo systemctl start mongod  #(if '/usr/bin/mongod' is not running)
$ sudo systemctl enable mongod #(ensure to automatically start it on system boot)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After the MongoDB installation is complete, install Open5gs with the following commands.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo add-apt-repository ppa:open5gs/latest
$ sudo apt update
$ sudo apt install open5gs

#When Open5gs is installed, all of its processes start running, but only the UPF should run on this node. So, stop everything else with the following command.
$ sudo systemctl stop open5gs*


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you don't want the processes to run again when the node restarts, you can disable them with the following commands. However, since the * wildcard does not apply to the commands below, you must apply them manually to all processes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo systemctl disable open5gs-amfd
$ sudo systemctl disable open5gs-smfd
...


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Open the /etc/open5gs/upf.yaml file and change the addr values of the pfcp and gtpu objects under upf to the private IP of the UPF node.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

upf:
    pfcp:
      - addr: 192.168.80.5
    gtpu:
      - addr: 192.168.80.5
    subnet:
      - addr: 10.45.0.1/16
      - addr: 2001:db8:cafe::1/48
    metrics:
      - addr: 127.0.0.7
        port: 9090


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the route towards LoxiLB on the UPF node:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo ip route add 123.123.123.0/24 via 192.168.80.9


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Restart UPF with the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo systemctl start open5gs-upfd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Install the UE/RAN simulator
&lt;/h2&gt;

&lt;p&gt;Follow the steps below to install the UE/RAN simulator:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ git clone https://github.com/my5G/my5G-RANTester.git 
$ cd my5G-RANTester 
$ go mod download 
$ cd cmd 
$ go build app.go


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the route towards LoxiLB on the UE node:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo ip route add 123.123.123.0/24 via 192.168.80.9


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Deploy Open5gs Core using Helm
&lt;/h3&gt;

&lt;p&gt;Now, we will deploy the Open5gs core components using Helm charts.&lt;br&gt;
For deployment, you need to have helm installed locally where you can use kubectl.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ git clone https://github.com/nik-netlox/open5gs-scp-helm-charts.git


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Verify configuration
&lt;/h4&gt;

&lt;p&gt;Before deploying, check the open5gs-scp-helm-charts/values.yaml file.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ cd open5gs-scp-helm-charts
$ vim values.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Open5gs core has different components which run on the same ports. For simplicity, we have statically fixed the service IP addresses for all the services. The values of the “svc” tag indicate the service IP addresses of the components. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

amf:
  mcc: 208
  mnc: 93
  tac: 7
  networkName: Open5GS
  ngapInt: eth0
  svc: 123.123.123.1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The AMF’s N2 interface service will be hosted at 123.123.123.1. The value set here will be used by kube-loxilb to create the service. Check the template file for the AMF:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ vim templates/amf-1-deploy.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-amf
  annotations:
    loxilb.io/probetype : "ping"
    loxilb.io/lbmode : "fullnat"
    loxilb.io/staticIP: {{ .Values.amf.svc }}
  labels:
    epc-mode: amf
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Modify the upfPublicIP value of the smf object to the service IP for the N4 interface. For this blog post, the N4 interface service will be hosted at 123.123.123.2:8805.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

smf:
  N4Int: eth0
  upfPublicIP: 123.123.123.2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: Before deploying open5gs, we must take care of one more thing. PFCP is a UDP-based, two-way protocol, which means both the UPF and the SMF can initiate messages. Since the UPF is deployed as a standalone entity, we have to create a load balancer service rule so that SMF-initiated traffic can reach the UPF.&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Create a rule to identify the SMF initiated traffic.
#loxicmd create firewall --firewallRule="sourceIP:&amp;lt;nodeCIDR&amp;gt;,minDestinationPort:8805,maxDestinationPort:8805" --allow --setmark=10
loxicmd create firewall --firewallRule="sourceIP:192.168.80.100/30,minDestinationPort:8805,maxDestinationPort:8805" --allow --setmark=10
# Create the LB rule
#loxicmd create lb &amp;lt;serviceIP&amp;gt; --udp=8805:8805  --mark=10 --endpoints=&amp;lt;upfIPaddress&amp;gt;:1 --mode=fullnat 
loxicmd create lb 123.123.123.2 --udp=8805:8805 --mark=10 --endpoints=192.168.80.5:1 --mode=fullnat


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Deploy Open5gs
&lt;/h4&gt;

&lt;p&gt;After that, you can deploy open5gs with the following commands:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl create ns open5gs
$ helm -n open5gs upgrade --install core5g ./open5gs-scp-helm-charts/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When the deployment is complete, you can check the open5gs pods with the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS      RESTARTS      AGE
kube-system   calico-kube-controllers-74d5f9d7bb-v6td4   1/1     Running     0             18d
kube-system   calico-node-5kvdw                          1/1     Running     0             18d
kube-system   calico-node-wnclp                          1/1     Running     0             18d
kube-system   coredns-7c5cd84f7b-g6rxs                   1/1     Running     0             18d
kube-system   coredns-7c5cd84f7b-lghq6                   1/1     Running     0             18d
kube-system   etcd-master                                1/1     Running     0             18d
kube-system   kube-apiserver-master                      1/1     Running     0             18d
kube-system   kube-controller-manager-master             1/1     Running     1 (22h ago)   18d
kube-system   kube-loxilb-76f96b44f4-jwbht               1/1     Running     0             12d
kube-system   kube-proxy-sh9nt                           1/1     Running     0             18d
kube-system   kube-proxy-wfrzw                           1/1     Running     0             18d
kube-system   kube-scheduler-master                      1/1     Running     1 (22h ago)   18d
kube-system   metrics-server-69fb86cf66-4vnwx            1/1     Running     3 (18d ago)   18d
open5gs       core5g-amf-deployment-595f7fffb4-5n6nj     1/1     Running     0             3m8s
open5gs       core5g-ausf-deployment-684b4bb9f-gpxbw     1/1     Running     0             3m8s
open5gs       core5g-bsf-deployment-8f6dbd599-898jk      1/1     Running     0             3m8s
open5gs       core5g-mongo-ue-import-rvtkr               0/1     Completed   0             3m8s
open5gs       core5g-mongodb-5c5d64455c-vrjdz            1/1     Running     0             3m8s
open5gs       core5g-nrf-deployment-b4d796466-cq597      1/1     Running     0             3m8s
open5gs       core5g-nssf-deployment-5df4d988fd-5sbv6    1/1     Running     0             3m8s
open5gs       core5g-pcf-deployment-7b87484dcf-sz5lh     1/1     Running     0             3m8s
open5gs       core5g-smf-deployment-67f9f4bcd-p8mkh      1/1     Running     0             3m8s
open5gs       core5g-udm-deployment-54bfd97d56-h5x4n     1/1     Running     0             3m8s
open5gs       core5g-udr-deployment-7656cbbd7b-wwrsl     1/1     Running     0             3m8s
open5gs       core5g-webui-78fc76b8f8-4vzhl              1/1     Running     0             3m8s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;All the pods must be in the “Running” state except “core5g-mongo-ue-import-rvtkr”. Once that pod reaches the “Completed” state, the deployment can be considered finished.&lt;/p&gt;
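&lt;p&gt;If you want to script this readiness check, you can filter the STATUS column of the kubectl output. The snippet below is a minimal sketch: the sample lines stand in for live "kubectl get pods -A --no-headers" output, which you would pipe in instead on a real cluster.&lt;/p&gt;

```shell
# Count pods that are neither Running nor Completed, using the STATUS
# column (4th field) of `kubectl get pods -A --no-headers` output.
# $sample is a hypothetical two-line snapshot standing in for real output.
sample='open5gs   core5g-amf-deployment-595f7fffb4-5n6nj   1/1   Running     0   3m8s
open5gs   core5g-mongo-ue-import-rvtkr                0/1   Completed   0   3m8s'
pending=$(printf '%s\n' "$sample" | awk '$4 != "Running" && $4 != "Completed"' | wc -l)
echo "pending=$pending"
```

&lt;p&gt;When the count reaches zero, every pod has either settled into “Running” or finished as “Completed”.&lt;/p&gt;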

&lt;h3&gt;
  
  
  Verify the services
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo kubectl get svc -n open5gs
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP          PORT(S)                                                                                                                  AGE
core5g-amf           LoadBalancer   172.17.46.201   llb-123.123.123.1    38412:32670/SCTP,7777:31954/TCP,80:31678/TCP                                                                             6m56s
core5g-ausf          LoadBalancer   172.17.27.89    llb-123.123.123.9    80:30211/TCP                                                                                                             6m56s
core5g-bsf           LoadBalancer   172.17.24.86    llb-123.123.123.8    80:30606/TCP                                                                                                             6m56s
core5g-mongodb-svc   LoadBalancer   172.17.39.185   llb-123.123.123.3    27017:31465/TCP                                                                                                          6m56s
core5g-nrf           LoadBalancer   172.17.3.112    llb-123.123.123.4    80:31558/TCP,7777:32724/TCP                                                                                              6m56s
core5g-nssf          LoadBalancer   172.17.58.170   llb-123.123.123.5    80:32126/TCP                                                                                                             6m56s
core5g-pcf           LoadBalancer   172.17.47.109   llb-123.123.123.7    80:31916/TCP                                                                                                             6m56s
core5g-smf           LoadBalancer   172.17.20.10    llb-123.123.123.2    2123:31581/UDP,8805:31991/UDP,3868:30899/TCP,3868:30899/SCTP,7777:30152/TCP,2152:31071/UDP,9090:32299/TCP,80:30246/TCP   6m56s
core5g-udm           LoadBalancer   172.17.7.145    llb-123.123.123.6    80:30852/TCP                                                                                                             6m56s
core5g-udr           LoadBalancer   172.17.42.127   llb-123.123.123.10   80:32709/TCP,7777:32064/TCP                                                                                              6m56s
core5g-webui         LoadBalancer   172.17.28.242   llb-123.123.123.11   80:30302/TCP                                                                                                             6m56s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Verify the services at LoxiLB:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ loxicmd get lb -o wide
|     EXT IP      | SEC IPS | PORT  | PROTO |            NAME            | MARK | SEL |   MODE    |    ENDPOINT    | EPORT | WEIGHT | STATE  |  COUNTERS   |
|-----------------|---------|-------|-------|----------------------------|------|-----|-----------|----------------|-------|--------|--------|-------------|
| 123.123.123.10  |         |    80 | tcp   | open5gs_core5g-udr         |    0 | rr  | fullnat   | 192.168.80.10  | 32709 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 32709 |      1 | active | 0:0         |
| 123.123.123.10  |         |  7777 | tcp   | open5gs_core5g-udr         |    0 | rr  | fullnat   | 192.168.80.10  | 32064 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 32064 |      1 | active | 0:0         |
| 123.123.123.11  |         |    80 | tcp   | open5gs_core5g-webui       |    0 | rr  | fullnat   | 192.168.80.10  | 30302 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30302 |      1 | active | 0:0         |
| 123.123.123.1   |         |    80 | tcp   | open5gs_core5g-amf         |    0 | rr  | fullnat   | 192.168.80.10  | 31678 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31678 |      1 | active | 0:0         |
| 123.123.123.1   |         |  7777 | tcp   | open5gs_core5g-amf         |    0 | rr  | fullnat   | 192.168.80.10  | 31954 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31954 |      1 | active | 0:0         |
| 123.123.123.1   |         | 38412 | sctp  | open5gs_core5g-amf         |    0 | rr  | fullnat   | 192.168.80.10  | 32670 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 32670 |      1 | active | 0:0         |
| 123.123.123.2   |         |    80 | tcp   | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 30246 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30246 |      1 | active | 0:0         |
| 123.123.123.2   |         |  2123 | udp   | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 31581 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31581 |      1 | active | 0:0         |
| 123.123.123.2   |         |  2152 | udp   | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 31071 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31071 |      1 | active | 0:0         |
| 123.123.123.2   |         |  3868 | sctp  | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 30899 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30899 |      1 | active | 0:0         |
| 123.123.123.2   |         |  3868 | tcp   | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 30899 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30899 |      1 | active | 0:0         |
| 123.123.123.2   |         |  7777 | tcp   | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 30152 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30152 |      1 | active | 0:0         |
| 123.123.123.2   |         |  8805 | udp   |                            |   10 | rr  | fullnat   | 192.168.80.5   |  8805 |      1 | -      | 279:16780   |
| 123.123.123.2   |         |  8805 | udp   | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 31991 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31991 |      1 | active | 0:0         |
| 123.123.123.2   |         |  9090 | tcp   | open5gs_core5g-smf         |    0 | rr  | fullnat   | 192.168.80.10  | 32299 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 32299 |      1 | active | 0:0         |
| 123.123.123.3   |         | 27017 | tcp   | open5gs_core5g-mongodb-svc |    0 | rr  | fullnat   | 192.168.80.10  | 31465 |      1 | active | 277:49305   |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31465 |      1 | active | 250:42415   |
| 123.123.123.4   |         |    80 | tcp   | open5gs_core5g-nrf         |    0 | rr  | fullnat   | 192.168.80.10  | 31558 |      1 | active | 1197:138839 |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31558 |      1 | active | 992:115387  |
| 123.123.123.4   |         |  7777 | tcp   | open5gs_core5g-nrf         |    0 | rr  | fullnat   | 192.168.80.10  | 32724 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 32724 |      1 | active | 0:0         |
| 123.123.123.5   |         |    80 | tcp   | open5gs_core5g-nssf        |    0 | rr  | fullnat   | 192.168.80.10  | 32126 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 32126 |      1 | active | 0:0         |
| 123.123.123.6   |         |    80 | tcp   | open5gs_core5g-udm         |    0 | rr  | fullnat   | 192.168.80.10  | 30852 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30852 |      1 | active | 0:0         |
| 123.123.123.7   |         |    80 | tcp   | open5gs_core5g-pcf         |    0 | rr  | fullnat   | 192.168.80.10  | 31916 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 31916 |      1 | active | 0:0         |
| 123.123.123.8   |         |    80 | tcp   | open5gs_core5g-bsf         |    0 | rr  | fullnat   | 192.168.80.10  | 30606 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30606 |      1 | active | 0:0         |
| 123.123.123.9   |         |    80 | tcp   | open5gs_core5g-ausf        |    0 | rr  | fullnat   | 192.168.80.10  | 30211 |      1 | active | 0:0         |
|                 |         |       |       |                            |      |     |           | 192.168.80.101 | 30211 |      1 | active | 0:0         |


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Check UPF logs
&lt;/h3&gt;

&lt;p&gt;Now, check the logs on the UPF node to confirm that the N4 interface (PFCP) association is established.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ tail -f /var/log/open5gs/upf.log 
Open5GS daemon v2.7.1

06/18 12:46:19.510: [app] INFO: Configuration: '/etc/open5gs/upf.yaml' (../lib/app/ogs-init.c:133)
06/18 12:46:19.510: [app] INFO: File Logging: '/var/log/open5gs/upf.log' (../lib/app/ogs-init.c:136)
06/18 12:46:19.577: [metrics] INFO: metrics_server() [http://127.0.0.7]:9090 (../lib/metrics/prometheus/context.c:299)
06/18 12:46:19.577: [pfcp] INFO: pfcp_server() [192.168.80.5]:8805 (../lib/pfcp/path.c:30)
06/18 12:46:19.577: [gtp] INFO: gtp_server() [192.168.80.5]:2152 (../lib/gtp/path.c:30)
06/18 12:46:19.579: [app] INFO: UPF initialize...done (../src/upf/app.c:31)
06/18 12:46:22.866: [pfcp] INFO: ogs_pfcp_connect() [123.123.123.2]:23197 (../lib/pfcp/path.c:61)
06/18 12:47:33.639: [pfcp] INFO: ogs_pfcp_connect() [123.123.123.2]:1066 (../lib/pfcp/path.c:61)
06/18 12:47:33.640: [upf] INFO: PFCP associated [123.123.123.2]:1066 (../src/upf/pfcp-sm.c:184)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Verify active connections
&lt;/h3&gt;

&lt;p&gt;Check the status of all currently active connections at LoxiLB:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ loxicmd get ct -o wide
|        SERVICE NAME        |    DESTIP     |     SRCIP      | DPORT | SPORT | PROTO |  STATE  |                     ACT                     | PACKETS | BYTES |
|----------------------------|---------------|----------------|-------|-------|-------|---------|---------------------------------------------|---------|-------|
|                            | 123.123.123.2 | 192.168.80.101 |  8805 |  1066 | udp   | udp-est | fdnat-123.123.123.2,192.168.80.5:8805:w0    |       4 |   224 |
|                            | 123.123.123.2 | 192.168.80.5   |  1066 |  8805 | udp   | udp-est | fsnat-123.123.123.2,192.168.80.101:8805:w0  |       4 |   224 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 |  4486 | 31465 | tcp   | est     | hsnat-0.0.0.0:27017:w0                      |      18 |  1459 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 13071 | 31465 | tcp   | est     | hsnat-0.0.0.0:27017:w0                      |      18 |  1535 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 19556 | 31465 | tcp   | est     | hsnat-0.0.0.0:27017:w0                      |      75 | 21311 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 20114 | 31465 | tcp   | est     | hsnat-0.0.0.0:27017:w0                      |      15 |  7015 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 |  4486 | tcp   | est     | fdnat-123.123.123.3,0.0.0.0:31465:w0        |      19 |  1666 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 13071 | tcp   | est     | fdnat-123.123.123.3,0.0.0.0:31465:w0        |      19 |  1771 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 19556 | tcp   | est     | fdnat-123.123.123.3,0.0.0.0:31465:w0        |     148 | 13820 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 20114 | tcp   | est     | fdnat-123.123.123.3,0.0.0.0:31465:w0        |      16 |  1436 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 45498 | tcp   | est     | fdnat-123.123.123.3,192.168.80.10:31465:w0  |     148 | 13824 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 50148 | tcp   | est     | fdnat-123.123.123.3,192.168.80.10:31465:w0  |      17 |  1502 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 62163 | tcp   | est     | fdnat-123.123.123.3,192.168.80.10:31465:w0  |      20 |  1926 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 63733 | tcp   | est     | fdnat-123.123.123.3,192.168.80.10:31465:w0  |      20 |  1928 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10  | 45498 | 31465 | tcp   | est     | fsnat-123.123.123.3,192.168.80.101:27017:w0 |      75 | 21311 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10  | 50148 | 31465 | tcp   | est     | fsnat-123.123.123.3,192.168.80.101:27017:w0 |      15 |  6935 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10  | 62163 | 31465 | tcp   | est     | fsnat-123.123.123.3,192.168.80.101:27017:w0 |      19 |  1639 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10  | 63733 | 31465 | tcp   | est     | fsnat-123.123.123.3,192.168.80.101:27017:w0 |      20 |  1705 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 |  2368 | tcp   | est     | fdnat-123.123.123.4,192.168.80.10:31558:w0  |     226 | 28580 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 |  6687 | tcp   | est     | fdnat-123.123.123.4,192.168.80.10:31558:w0  |     233 | 30410 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 | 16244 | tcp   | est     | fdnat-123.123.123.4,0.0.0.0:31558:w0        |     229 | 28958 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 | 24477 | tcp   | est     | fdnat-123.123.123.4,0.0.0.0:31558:w0        |     232 | 30594 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 | 27944 | tcp   | est     | fdnat-123.123.123.4,192.168.80.10:31558:w0  |     219 | 28251 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 | 32565 | tcp   | est     | fdnat-123.123.123.4,0.0.0.0:31558:w0        |     229 | 29153 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 | 52707 | tcp   | est     | fdnat-123.123.123.4,192.168.80.10:31558:w0  |     226 | 28663 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 |    80 | 57099 | tcp   | est     | fdnat-123.123.123.4,0.0.0.0:31558:w0        |     235 | 30941 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 | 16244 | 31558 | tcp   | est     | hsnat-0.0.0.0:80:w0                         |     157 | 13259 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 | 24477 | 31558 | tcp   | est     | hsnat-0.0.0.0:80:w0                         |     160 | 14281 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 | 32565 | 31558 | tcp   | est     | hsnat-0.0.0.0:80:w0                         |     158 | 13713 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.101 | 57099 | 31558 | tcp   | est     | hsnat-0.0.0.0:80:w0                         |     162 | 15996 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.10  |    80 | 38140 | tcp   | est     | fdnat-123.123.123.4,0.0.0.0:31558:w0        |     235 | 31048 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.10  |  2368 | 31558 | tcp   | est     | fsnat-123.123.123.4,192.168.80.101:80:w0    |     154 | 13024 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.10  |  6687 | 31558 | tcp   | est     | fsnat-123.123.123.4,192.168.80.101:80:w0    |     157 | 13662 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.10  | 27944 | 31558 | tcp   | est     | fsnat-123.123.123.4,192.168.80.101:80:w0    |     151 | 13925 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.10  | 38140 | 31558 | tcp   | est     | hsnat-0.0.0.0:80:w0                         |     162 | 16041 |
| open5gs_core5g-nrf         | 123.123.123.4 | 192.168.80.10  | 52707 | 31558 | tcp   | est     | fsnat-123.123.123.4,192.168.80.101:80:w0    |     155 | 13091 |


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Configure UERAN Simulator
&lt;/h3&gt;

&lt;p&gt;To connect the UE to the core, you have to edit the UE’s configuration file at ~/my5G-RANTester/config/config.yml.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

gnodeb:
  controlif:
    ip: "172.0.14.27"
    port: 9487
  dataif:
    ip: "172.0.14.27"
    port: 2152
  plmnlist:
    mcc: "208"
    mnc: "93"
    tac: "000007"
    gnbid: "000001"
  slicesupportlist:
    sst: "01"
    sd: "000001"

ue:
  msin: "0000000031"
  key: "0C0A34601D4F07677303652C0462535B"
  opc: "63bfa50ee6523365ff14c1f45f88737d"
  amf: "8000"
  sqn: "0000000"
  dnn: "internet"
  hplmn:
    mcc: "208"
    mnc: "93"
  snssai:
    sst: 01
    sd: "000001"

amfif:
  ip: "43.201.17.32"
  port: 38412

logs:
    level: 4


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;First, set the private IP of the UE node as the ip value of both the controlif and dataif sections of the gnodeb object.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

gnodeb:
  controlif:
    ip: "172.0.14.27"
    port: 9487
  dataif:
    ip: "172.0.14.27"
    port: 2152


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, modify the mcc, mnc, and tac values in the plmnlist object. These values must match the AMF settings of the Open5gs core deployed with Helm. You can check them in the ./open5gs-scp-helm-charts/values.yaml file. Here are the AMF settings in the values.yaml file used in this post.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

amf:
  mcc: 208
  mnc: 93
  tac: 7
  networkName: Open5GS
  ngapInt: eth0

nssf:
  sst: "1"
  sd: "1"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The values of mcc, mnc, and tac in the UE settings must match the values above.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

plmnlist:
    mcc: "208"
    mnc: "93"
    tac: "000007"
    gnbid: "000001"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The sst, sd values of the slicesupportlist object in the UE settings must match the values of the nssf object in ./open5gs-scp-helm-charts/values.yaml.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

slicesupportlist:
    sst: "01"
    sd: "000001"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The msin, key, and opc values of the ue object in the UE settings must match the simulator.ue1 object in ./open5gs-scp-helm-charts/values.yaml. Here is the relevant content of that file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

simulator:
   ue1:
     imsi: "208930000000031"
     imei: "356938035643803"
     imeiSv: "4370816125816151"
     op: "8e27b6af0e692e750f32667a3b14605d"
     secKey: "8baf473f2f8fd09487cccbd7097c6862"
     sst: "1"
     sd: "1"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you modify the UE settings according to the contents of the values.yaml file, they look like this:&lt;br&gt;
• msin: the last 10 digits of the imsi value, i.e. the imsi with the mcc (208) and mnc (93) prefixes removed&lt;br&gt;
• key: secKey&lt;br&gt;
• opc: op&lt;br&gt;
• mcc, mnc, sst, sd: the values described above&lt;br&gt;
Other values are left at their defaults.&lt;/p&gt;
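&lt;p&gt;The msin mapping is simple prefix-stripping, since imsi = mcc + mnc + msin. A minimal shell sketch with the sample values from this post:&lt;/p&gt;

```shell
# imsi = mcc + mnc + msin, so removing the "208" + "93" prefix
# leaves the 10-digit msin used in the UE configuration.
imsi="208930000000031"
mcc="208"; mnc="93"
msin="${imsi#"${mcc}${mnc}"}"
echo "$msin"   # 0000000031
```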

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

ue:
  msin: "0000000031"
  key: "8baf473f2f8fd09487cccbd7097c6862"
  opc: "8e27b6af0e692e750f32667a3b14605d"
  amf: "8000"
  sqn: "0000000"
  dnn: "internet"
  hplmn:
    mcc: "208"
    mnc: "93"
  snssai:
    sst: 01
    sd: "000001"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, modify the ip value of the amfif object. Since the gNB connects to the AMF through the LoxiLB load balancer, it must be set to the service IP of the N2 interface, which is 123.123.123.1 in the current topology.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

amfif:
  ip: "123.123.123.1"
  port: 38412


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After editing the configuration file, connect the UE to the AMF with the following commands.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ cd ~/my5G-RANTester/cmd
$ sudo ./app ue
INFO[0000] my5G-RANTester version 1.0.1
INFO[0000] ---------------------------------------
INFO[0000] [TESTER] Starting test function: Testing an ue attached with configuration
INFO[0000] [TESTER][UE] Number of UEs: 1
INFO[0000] [TESTER][GNB] Control interface IP/Port: 192.168.80.4/9487
INFO[0000] [TESTER][GNB] Data interface IP/Port: 192.168.80.4/2152
INFO[0000] [TESTER][AMF] AMF IP/Port: 123.123.123.1/38412
INFO[0000] ---------------------------------------
INFO[0000] [GNB] SCTP/NGAP service is running
INFO[0000] [GNB] UNIX/NAS service is running
INFO[0000] [GNB][SCTP] Receive message in 0 stream
INFO[0000] [GNB][NGAP] Receive Ng Setup Response
INFO[0000] [GNB][AMF] AMF Name: open5gs-amf
INFO[0000] [GNB][AMF] State of AMF: Active
INFO[0000] [GNB][AMF] Capacity of AMF: 255
INFO[0000] [GNB][AMF] PLMNs Identities Supported by AMF -- mcc: 208 mnc:93
INFO[0000] [GNB][AMF] List of AMF slices Supported by AMF -- sst:01 sd:000001
INFO[0001] [UE] UNIX/NAS service is running
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport
INFO[0001] [UE][NAS] Message without security header
INFO[0001] [UE][NAS] Receive Authentication Request
INFO[0001] [UE][NAS][MAC] Authenticity of the authentication request message: OK
INFO[0001] [UE][NAS][SQN] SQN of the authentication request message: VALID
INFO[0001] [UE][NAS] Send authentication response
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and with NEW 5G NAS SECURITY CONTEXT
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] Receive Security Mode Command
INFO[0001] [UE][NAS] Type of ciphering algorithm is 5G-EA0
INFO[0001] [UE][NAS] Type of integrity protection algorithm is 128-5G-IA2
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Initial Context Setup Request
INFO[0001] [GNB][UE] UE Context was created with successful
INFO[0001] [GNB][UE] UE RAN ID 1
INFO[0001] [GNB][UE] UE AMF ID 1
INFO[0001] [GNB][UE] UE Mobility Restrict --Plmn-- Mcc: not informed Mnc: not informed
INFO[0001] [GNB][UE] UE Masked Imeisv: 1110000000ffff00
INFO[0001] [GNB][UE] Allowed Nssai-- Sst: 01 Sd: 000001
INFO[0001] [GNB][NAS][UE] Send Registration Accept.
INFO[0001] [GNB][NGAP][AMF] Send Initial Context Setup Response.
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and ciphered
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] successful NAS CIPHERING
INFO[0001] [UE][NAS] Receive Registration Accept
INFO[0001] [UE][NAS] UE 5G GUTI: [215 0 14 119]
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and ciphered
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] successful NAS CIPHERING
INFO[0001] [UE][NAS] Receive Configuration Update Command
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive PDU Session Resource Setup Request
INFO[0001] [GNB][NGAP][UE] PDU Session was created with successful.
INFO[0001] [GNB][NGAP][UE] PDU Session Id: 1
INFO[0001] [GNB][NGAP][UE] NSSAI Selected --- sst: 01 sd: 000001
INFO[0001] [GNB][NGAP][UE] PDU Session Type: ipv4
INFO[0001] [GNB][NGAP][UE] QOS Flow Identifier: 1
INFO[0001] [GNB][NGAP][UE] Uplink Teid: 37088
INFO[0001] [GNB][NGAP][UE] Downlink Teid: 1
INFO[0001] [GNB][NGAP][UE] Non-Dynamic-5QI: 9
INFO[0001] [GNB][NGAP][UE] Priority Level ARP: 8
INFO[0001] [GNB][NGAP][UE] UPF Address: 192.168.80.5 :2152
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and ciphered
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] successful NAS CIPHERING
INFO[0001] [UE][NAS] Receive DL NAS Transport
INFO[0001] [UE][NAS] Receiving PDU Session Establishment Accept
INFO[0001] [UE][DATA] UE is ready for using data plane

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Challenges and Future Work
&lt;/h3&gt;

&lt;p&gt;In this blog, we exposed the N2 interface, N4 interface, MongoDB, and NRF services externally with LoxiLB. Many 5G core components advertise their service IP, but we noticed that a few components were not advertising the address correctly. We will continue this work to cover all the interfaces with SCP and to collaborate with the Open5gs community.&lt;/p&gt;

&lt;h3&gt;
  
  
  About Authors
&lt;/h3&gt;

&lt;p&gt;This blog was prepared by Nikhil Malik and Jung BackGyun. We are contributors to the LoxiLB project.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>cloud</category>
      <category>go</category>
    </item>
  </channel>
</rss>
