<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: sekka1</title>
    <description>The latest articles on Forem by sekka1 (@sekka1).</description>
    <link>https://forem.com/sekka1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F299810%2Fb45cba11-4a14-4240-93a9-42191e719917.jpeg</url>
      <title>Forem: sekka1</title>
      <link>https://forem.com/sekka1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sekka1"/>
    <language>en</language>
    <item>
      <title>Kubernetes Troubleshooting Walkthrough - Pending Pods</title>
      <dc:creator>sekka1</dc:creator>
      <pubDate>Tue, 24 Dec 2019 03:09:09 +0000</pubDate>
      <link>https://forem.com/sekka1/kubernetes-troubleshooting-walkthrough-pending-pods-53o7</link>
      <guid>https://forem.com/sekka1/kubernetes-troubleshooting-walkthrough-pending-pods-53o7</guid>
      <description>&lt;h1&gt;
  
  
  Introduction: troubleshooting pending pods
&lt;/h1&gt;

&lt;p&gt;You created a deployment, a statefulset, or otherwise started a pod on the Kubernetes&lt;br&gt;
cluster, and it is stuck in a &lt;code&gt;Pending&lt;/code&gt; state.  What can you do now, and how do you troubleshoot&lt;br&gt;
it to see what the problem is?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                                                   READY   STATUS             RESTARTS   AGE
echoserver-657f6fb8f5-wmgj5                            0/1     Pending            0          1d
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There can be various reasons why your pod is in a &lt;code&gt;Pending&lt;/code&gt; state.  We'll go through them one by one and show how to&lt;br&gt;
determine what the error messages are telling you.&lt;/p&gt;

&lt;p&gt;With any of these errors, step one is to &lt;code&gt;describe&lt;/code&gt; the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod echoserver-657f6fb8f5-wmgj5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will give you additional information.  The describe output can be long, but look&lt;br&gt;
at the &lt;code&gt;Events&lt;/code&gt; section first.&lt;/p&gt;
&lt;h2&gt;
  
  
  Troubleshooting Reason #1: Not enough CPU
&lt;/h2&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe pod echoserver-657f6fb8f5-wmgj5
...
...
Events:
  Type     Reason            Age               From               Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;            &lt;span class="nt"&gt;----&lt;/span&gt;              &lt;span class="nt"&gt;----&lt;/span&gt;               &lt;span class="nt"&gt;-------&lt;/span&gt;
  Warning  FailedScheduling  2s &lt;span class="o"&gt;(&lt;/span&gt;x6 over 11s&lt;span class="o"&gt;)&lt;/span&gt;  default-scheduler  0/4 nodes are available: 4 Insufficient cpu.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;To expand on this line: Kubernetes reported &lt;code&gt;FailedScheduling&lt;/code&gt; for this pod.  Of the 4 nodes&lt;br&gt;
in the cluster, 0 were available because none had sufficient CPU to allocate to this pod.&lt;/p&gt;

&lt;p&gt;This could mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have requested more CPU than any of the nodes has.  For example, each node in the cluster has
2 CPU cores and you request 4 CPU cores.  This means that even if you turned on more nodes in
your cluster, Kubernetes would still not be able to schedule the pod anywhere.&lt;/li&gt;
&lt;li&gt;There is no more free capacity in the cluster for the CPU cores you have requested.  If it is not the first
case, then this means that if you had 4 nodes in the cluster and each node has 1 CPU, all of
those CPUs have already been requested and allocated to other pods.  In this case, you can turn on
more nodes in the cluster and your pod will schedule out.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check the total number of nodes via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
gke-gar-3-pool-1-9781becc-bdb3   Ready    &amp;lt;none&amp;gt;   12h   v1.11.5-gke.5
gke-gar-3-pool-1-9781becc-d0m6   Ready    &amp;lt;none&amp;gt;   3d    v1.11.5-gke.5
gke-gar-3-pool-1-9781becc-gc8h   Ready    &amp;lt;none&amp;gt;   4h    v1.11.5-gke.5
gke-gar-3-pool-1-9781becc-zj3w   Ready    &amp;lt;none&amp;gt;   20h   v1.11.5-gke.5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Describing a node will give you more details about the capacity of the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe node gke-gar-3-pool-1-9781becc-bdb3
Name:               gke-gar-3-pool-1-9781becc-bdb3
...
...
Allocatable:
 cpu:                940m
 ephemeral-storage:  4278888833
 hugepages-2Mi:      0
 memory:             2702164Ki
 pods:               110
...
...
Allocated resources:
  &lt;span class="o"&gt;(&lt;/span&gt;Total limits may be over 100 percent, i.e., overcommitted.&lt;span class="o"&gt;)&lt;/span&gt;
  Resource  Requests         Limits
  &lt;span class="nt"&gt;--------&lt;/span&gt;  &lt;span class="nt"&gt;--------&lt;/span&gt;         &lt;span class="nt"&gt;------&lt;/span&gt;
  cpu       908m &lt;span class="o"&gt;(&lt;/span&gt;96%&lt;span class="o"&gt;)&lt;/span&gt;       2408m &lt;span class="o"&gt;(&lt;/span&gt;256%&lt;span class="o"&gt;)&lt;/span&gt;
  memory    1227352Ki &lt;span class="o"&gt;(&lt;/span&gt;45%&lt;span class="o"&gt;)&lt;/span&gt;  3172952Ki &lt;span class="o"&gt;(&lt;/span&gt;117%&lt;span class="o"&gt;)&lt;/span&gt;
...
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will tell you how much of this node's CPU/memory has been requested.  The &lt;code&gt;Requests&lt;/code&gt;&lt;br&gt;
can never go over 100% but the &lt;code&gt;Limits&lt;/code&gt; can.  We are interested in the &lt;code&gt;Requests&lt;/code&gt;&lt;br&gt;
column.  For example, this output is telling us that the node is at 96% of the maximum CPU&lt;br&gt;
that is allocatable.  This means we have 4% more we can request.  Looking at&lt;br&gt;
the Allocatable cpu value (940m) and the current cpu Requests (908m), this means we have 940m - 908m =&lt;br&gt;
32m worth of CPU that we can still request.&lt;/p&gt;

&lt;p&gt;Looking back at our &lt;code&gt;describe pod&lt;/code&gt; output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Limits:
  cpu:     16
  memory:  128Mi
Requests:
  cpu:        16
  memory:     64Mi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can see that we have requested 16 CPUs.  What happened to the &lt;code&gt;m&lt;/code&gt; and why is it 16?  This&lt;br&gt;
deserves a little explanation.  CPU requests/limits are specified in&lt;br&gt;
units of CPU cores.  One CPU core can be written as either &lt;code&gt;1&lt;/code&gt; or &lt;code&gt;1000m&lt;/code&gt;.  This means you can ask for&lt;br&gt;
half a core by denoting &lt;code&gt;500m&lt;/code&gt;.&lt;/p&gt;
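
&lt;p&gt;For reference, a container's &lt;code&gt;resources&lt;/code&gt; block requesting half a core would look like this (the values here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;resources:
  requests:
    cpu:     500m   # half a core; equivalent to 0.5
    memory:  64Mi
  limits:
    cpu:     1      # one full core; equivalent to 1000m
    memory:  128Mi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;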

&lt;p&gt;For this example, we have requested a very high CPU count at 16 cores.  From our&lt;br&gt;
&lt;code&gt;describe node&lt;/code&gt; output, this node only has 940m it can allocate out, which is under one&lt;br&gt;
core.  Kubernetes will never be able to schedule this pod on this node type; it&lt;br&gt;
just doesn't have enough CPU cores.&lt;/p&gt;

&lt;p&gt;On the flip side, even if we requested something reasonable like 1 core, the pod still wouldn't&lt;br&gt;
schedule.  Per our calculation above, we would have to request 32m of&lt;br&gt;
CPU or less.&lt;/p&gt;
&lt;h2&gt;
  
  
  Troubleshooting Reason #2: Not enough memory
&lt;/h2&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Events:
  Type     Reason            Age                    From               Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;            &lt;span class="nt"&gt;----&lt;/span&gt;                   &lt;span class="nt"&gt;----&lt;/span&gt;               &lt;span class="nt"&gt;-------&lt;/span&gt;
  Warning  FailedScheduling  2m6s &lt;span class="o"&gt;(&lt;/span&gt;x25 over 2m54s&lt;span class="o"&gt;)&lt;/span&gt;  default-scheduler  0/4 nodes are available: 4 Insufficient cpu, 4 Insufficient memory.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;We would go through roughly the same troubleshooting workflow as for the CPU above.&lt;/p&gt;

&lt;p&gt;The two possible problems are the same.  Either we have requested far too much memory, or our nodes just don't&lt;br&gt;
have the memory we are requesting.&lt;/p&gt;

&lt;p&gt;We would look at our nodes and see what available memory they have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe node gke-gar-3-pool-1-9781becc-bdb3
Name:               gke-gar-3-pool-1-9781becc-bdb3
...
...
Allocatable:
 cpu:                940m
 ephemeral-storage:  4278888833
 hugepages-2Mi:      0
 memory:             2702164Ki
 pods:               110
...
...
Allocated resources:
  &lt;span class="o"&gt;(&lt;/span&gt;Total limits may be over 100 percent, i.e., overcommitted.&lt;span class="o"&gt;)&lt;/span&gt;
  Resource  Requests         Limits
  &lt;span class="nt"&gt;--------&lt;/span&gt;  &lt;span class="nt"&gt;--------&lt;/span&gt;         &lt;span class="nt"&gt;------&lt;/span&gt;
  cpu       908m &lt;span class="o"&gt;(&lt;/span&gt;96%&lt;span class="o"&gt;)&lt;/span&gt;       2408m &lt;span class="o"&gt;(&lt;/span&gt;256%&lt;span class="o"&gt;)&lt;/span&gt;
  memory    1227352Ki &lt;span class="o"&gt;(&lt;/span&gt;45%&lt;span class="o"&gt;)&lt;/span&gt;  3172952Ki &lt;span class="o"&gt;(&lt;/span&gt;117%&lt;span class="o"&gt;)&lt;/span&gt;
...
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This node has &lt;code&gt;2702164Ki&lt;/code&gt; of allocatable memory, of which &lt;code&gt;1227352Ki&lt;/code&gt; (about 1.2 GB) has already been requested, leaving roughly 1.4 GB free.&lt;/p&gt;

&lt;p&gt;Now we look at the describe pod output to see how much we have requested:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Limits:
  cpu:     100m
  memory:  125Gi
Requests:
  cpu:        100m
  memory:     64000Mi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We did request a lot of memory for this example: 64 GB.  As with the CPU, none&lt;br&gt;
of our nodes has this much memory.  We can either lower the memory request or change&lt;br&gt;
the instance type to one with sufficient memory.&lt;/p&gt;
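
&lt;p&gt;To fix it in the pod spec, we would lower the request to something a node can actually satisfy, for example (hypothetical values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;resources:
  requests:
    cpu:     100m
    memory:  512Mi   # small enough to fit in the node's remaining allocatable memory
  limits:
    cpu:     100m
    memory:  1Gi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;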

&lt;h2&gt;
  
  
  Troubleshooting Reason #3: Not enough CPU and memory
&lt;/h2&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Events:
  Type     Reason            Age                     From               Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;            &lt;span class="nt"&gt;----&lt;/span&gt;                    &lt;span class="nt"&gt;----&lt;/span&gt;               &lt;span class="nt"&gt;-------&lt;/span&gt;
  Warning  FailedScheduling  2m30s &lt;span class="o"&gt;(&lt;/span&gt;x25 over 3m18s&lt;span class="o"&gt;)&lt;/span&gt;  default-scheduler  0/4 nodes are available: 4 Insufficient cpu, 4 Insufficient memory.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is a combination of both of the above.  The event is telling us that there is&lt;br&gt;
not enough CPU or memory to fulfill this request.  We will have to run through&lt;br&gt;
the above two troubleshooting workflows and decide what to do for both&lt;br&gt;
the CPU and memory.  Alternatively, you can look at just one (CPU or memory), fix that&lt;br&gt;
problem, and then see what Kubernetes tells you at that point and continue from there.&lt;/p&gt;

&lt;h1&gt;
  
  
  More troubleshooting blog posts
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://managedkube.com/kubernetes/trace/ingress/service/port/not/matching/pod/k8sbot/2019/02/13/trace-ingress.html"&gt;Kubernetes Troubleshooting Walkthrough - Tracing through an ingress&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://managedkube.com/kubernetes/pod/failure/crashloopbackoff/k8sbot/troubleshooting/2019/02/12/pod-failure-crashloopbackoff.html"&gt;Kubernetes Troubleshooting Walkthrough - Pod Failure CrashLoopBackOff&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://managedkube.com/kubernetes/k8sbot/troubleshooting/imagepullbackoff/2019/02/23/imagepullbackoff.html"&gt;Kubernetes Troubleshooting Walkthrough - imagepullbackoff&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Troubleshooting Walkthrough - imagepullbackoff</title>
      <dc:creator>sekka1</dc:creator>
      <pubDate>Tue, 24 Dec 2019 02:58:47 +0000</pubDate>
      <link>https://forem.com/sekka1/kubernetes-troubleshooting-walkthrough-imagepullbackoff-3j81</link>
      <guid>https://forem.com/sekka1/kubernetes-troubleshooting-walkthrough-imagepullbackoff-3j81</guid>
      <description>&lt;h1&gt;
  
  
  Introduction: troubleshooting the Kubernetes error, imagepullbackoff
&lt;/h1&gt;

&lt;p&gt;I am writing a series of blog posts about troubleshooting Kubernetes. One of the reasons why Kubernetes is so complex is that troubleshooting what went wrong requires many levels of information gathering. It's like trying to find the other end of a string in a tangled ball of string. In this post, I am going to walk you through troubleshooting the &lt;code&gt;ImagePullBackOff&lt;/code&gt; state.&lt;/p&gt;

&lt;p&gt;You created a deployment, a statefulset, or otherwise started a pod on the Kubernetes&lt;br&gt;
cluster, and it is in an &lt;code&gt;ImagePullBackOff&lt;/code&gt; state.  What can you do now, and how do you troubleshoot&lt;br&gt;
it to see what the problem is?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                                                   READY   STATUS             RESTARTS   AGE
invalid-container-5896955f9f-cg9jg                     1/2     ImagePullBackOff   0          21h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There can be various reasons why a pod is in an &lt;code&gt;ImagePullBackOff&lt;/code&gt; state.  First, let's figure out what error message you have and what it's telling you with &lt;code&gt;describe&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod invalid-container-5896955f9f-cg9jg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will give you additional information.  The describe output can be long, but look&lt;br&gt;
at the &lt;code&gt;Events&lt;/code&gt; section first.&lt;/p&gt;
&lt;h2&gt;
  
  
  Troubleshooting: Invalid container image
&lt;/h2&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod invalid-container-5896955f9f-cg9jg
...
...
Containers:
  my-container:
    Container ID:   
    Image:          foobartest4
...
...
Events:
  Type     Reason     Age                 From                                     Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;     &lt;span class="nt"&gt;----&lt;/span&gt;                &lt;span class="nt"&gt;----&lt;/span&gt;                                     &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal   Scheduled  115s                default-scheduler                        Successfully assigned dev-k8sbot-test-pods/invalid-container-5896955f9f-r6sgz to gke-gar-3-pool-1-9781becc-gc8h
  Normal   Pulling    113s                kubelet, gke-gar-3-pool-1-9781becc-gc8h  pulling image &lt;span class="s2"&gt;"gcr.io/google_containers/echoserver:1.0"&lt;/span&gt;
  Normal   Pulled     84s                 kubelet, gke-gar-3-pool-1-9781becc-gc8h  Successfully pulled image &lt;span class="s2"&gt;"gcr.io/google_containers/echoserver:1.0"&lt;/span&gt;
  Normal   Created    84s                 kubelet, gke-gar-3-pool-1-9781becc-gc8h  Created container
  Normal   Started    83s                 kubelet, gke-gar-3-pool-1-9781becc-gc8h  Started container
  Normal   BackOff    27s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 82s&lt;span class="o"&gt;)&lt;/span&gt;   kubelet, gke-gar-3-pool-1-9781becc-gc8h  Back-off pulling image &lt;span class="s2"&gt;"foobartest4"&lt;/span&gt;
  Warning  Failed     27s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 82s&lt;span class="o"&gt;)&lt;/span&gt;   kubelet, gke-gar-3-pool-1-9781becc-gc8h  Error: ImagePullBackOff
  Normal   Pulling    13s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 114s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet, gke-gar-3-pool-1-9781becc-gc8h  pulling image &lt;span class="s2"&gt;"foobartest4"&lt;/span&gt;
  Warning  Failed     12s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 113s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet, gke-gar-3-pool-1-9781becc-gc8h  Failed to pull image &lt;span class="s2"&gt;"foobartest4"&lt;/span&gt;: rpc error: code &lt;span class="o"&gt;=&lt;/span&gt; Unknown desc &lt;span class="o"&gt;=&lt;/span&gt; Error response from daemon: repository foobartest4 not found: does not exist or no pull access
  Warning  Failed     12s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 113s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet, gke-gar-3-pool-1-9781becc-gc8h  Error: ErrImagePull
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;There is a long list of events but only a few with the &lt;code&gt;Reason&lt;/code&gt; of &lt;code&gt;Failed&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Warning  Failed     27s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 82s&lt;span class="o"&gt;)&lt;/span&gt;   kubelet, gke-gar-3-pool-1-9781becc-gc8h  Error: ImagePullBackOff
Warning  Failed     12s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 113s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet, gke-gar-3-pool-1-9781becc-gc8h  Failed to pull image &lt;span class="s2"&gt;"foobartest4"&lt;/span&gt;: rpc error: code &lt;span class="o"&gt;=&lt;/span&gt; Unknown desc &lt;span class="o"&gt;=&lt;/span&gt; Error response from daemon: repository foobartest4 not found: does not exist or no pull access
Warning  Failed     12s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 113s&lt;span class="o"&gt;)&lt;/span&gt;  kubelet, gke-gar-3-pool-1-9781becc-gc8h  Error: ErrImagePull
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This gives us a really good indication of what the problem is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Error response from daemon: repository foobartest4 not found: does not exist or no pull access
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;From here, we either have a non-existent repository name or we don't have access to it.&lt;br&gt;
Usually a system will not tell you whether an item exists if you don't have access to it; otherwise,&lt;br&gt;
someone could glean more information than they have access to.  This is why the error&lt;br&gt;
message can mean multiple things.&lt;/p&gt;

&lt;p&gt;As a user, you should at this point take a look at the image name and make sure you have the&lt;br&gt;
correct name.  If you do, then you should make sure that the container registry for this&lt;br&gt;
image does not require authentication.  As a test, you can try to pull the same image from your laptop&lt;br&gt;
to see if it works locally for you.&lt;/p&gt;
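
&lt;p&gt;For example, a minimal local check, using the image name from the events above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker pull foobartest4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If this fails locally as well, either the image name is wrong or your local credentials also lack pull access.&lt;/p&gt;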
&lt;h2&gt;
  
  
  Troubleshooting: Invalid container image tag
&lt;/h2&gt;

&lt;p&gt;Another variation to this is if the container tag does not exist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod invalid-container-5896955f9f-cg9jg
...
...
Containers:
  my-container:
    Container ID:   
    Image:          redis:foobar
...
...
Events:
  Type     Reason     Age                  From                                     Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;     &lt;span class="nt"&gt;----&lt;/span&gt;                 &lt;span class="nt"&gt;----&lt;/span&gt;                                     &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal   Scheduled  12m                  default-scheduler                        Successfully assigned dev-k8sbot-test-pods/invalid-container-tag-85d478dfbd-hddzg to gke-gar-3-pool-1-9781becc-bdb3
  Normal   Pulling    12m                  kubelet, gke-gar-3-pool-1-9781becc-bdb3  pulling image &lt;span class="s2"&gt;"gcr.io/google_containers/echoserver:1.0"&lt;/span&gt;
  Normal   Started    11m                  kubelet, gke-gar-3-pool-1-9781becc-bdb3  Started container
  Normal   Pulled     11m                  kubelet, gke-gar-3-pool-1-9781becc-bdb3  Successfully pulled image &lt;span class="s2"&gt;"gcr.io/google_containers/echoserver:1.0"&lt;/span&gt;
  Normal   Created    11m                  kubelet, gke-gar-3-pool-1-9781becc-bdb3  Created container
  Normal   BackOff    10m &lt;span class="o"&gt;(&lt;/span&gt;x4 over 11m&lt;span class="o"&gt;)&lt;/span&gt;    kubelet, gke-gar-3-pool-1-9781becc-bdb3  Back-off pulling image &lt;span class="s2"&gt;"redis:foobar"&lt;/span&gt;
  Normal   Pulling    10m &lt;span class="o"&gt;(&lt;/span&gt;x4 over 12m&lt;span class="o"&gt;)&lt;/span&gt;    kubelet, gke-gar-3-pool-1-9781becc-bdb3  pulling image &lt;span class="s2"&gt;"redis:foobar"&lt;/span&gt;
  Warning  Failed     10m &lt;span class="o"&gt;(&lt;/span&gt;x4 over 12m&lt;span class="o"&gt;)&lt;/span&gt;    kubelet, gke-gar-3-pool-1-9781becc-bdb3  Error: ErrImagePull
  Warning  Failed     10m &lt;span class="o"&gt;(&lt;/span&gt;x4 over 12m&lt;span class="o"&gt;)&lt;/span&gt;    kubelet, gke-gar-3-pool-1-9781becc-bdb3  Failed to pull image &lt;span class="s2"&gt;"redis:foobar"&lt;/span&gt;: rpc error: code &lt;span class="o"&gt;=&lt;/span&gt; Unknown desc &lt;span class="o"&gt;=&lt;/span&gt; Error response from daemon: manifest &lt;span class="k"&gt;for &lt;/span&gt;redis:foobar not found
  Warning  Failed     2m1s &lt;span class="o"&gt;(&lt;/span&gt;x40 over 11m&lt;span class="o"&gt;)&lt;/span&gt;  kubelet, gke-gar-3-pool-1-9781becc-bdb3  Error: ImagePullBackOff

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is very similar to the previous error but there is a slight difference that can tell us&lt;br&gt;
that it is the image tag.  Once again pulling out the pertinent events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Warning  Failed     10m &lt;span class="o"&gt;(&lt;/span&gt;x4 over 12m&lt;span class="o"&gt;)&lt;/span&gt;    kubelet, gke-gar-3-pool-1-9781becc-bdb3  Failed to pull image &lt;span class="s2"&gt;"redis:foobar"&lt;/span&gt;: rpc error: code &lt;span class="o"&gt;=&lt;/span&gt; Unknown desc &lt;span class="o"&gt;=&lt;/span&gt; Error response from daemon: manifest &lt;span class="k"&gt;for &lt;/span&gt;redis:foobar not found
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The previous error said the &lt;code&gt;repository&lt;/code&gt; was not found; this one does not.  It tells&lt;br&gt;
you &lt;code&gt;manifest for redis:foobar not found&lt;/code&gt;.  This is a very good indication that the&lt;br&gt;
repository &lt;code&gt;redis&lt;/code&gt; exists but the tag &lt;code&gt;foobar&lt;/code&gt; does not.&lt;/p&gt;

&lt;p&gt;You can test and confirm this by trying to pull this image locally on your laptop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker pull redis:foobar
Error response from daemon: manifest &lt;span class="k"&gt;for &lt;/span&gt;redis:foobar not found
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We receive the same message.  If we try a valid tag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker pull redis:latest
latest: Pulling from library/redis
6ae821421a7d: Already exists
e3717477b42d: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;8e70bf6cc2e6: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;0f84ab76ce60: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;0903bdecada2: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;492876061fbd: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;Digest: sha256:dd5b84ce536dffdcab79024f4df5485d010affa09e6c399b215e199a0dca38c4
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;redis:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We are able to successfully pull this image.&lt;/p&gt;

&lt;p&gt;This helps us determine which tags are valid.  Alternatively, if your registry has a web&lt;br&gt;
GUI, you can browse it to see the available tags.&lt;/p&gt;
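
&lt;p&gt;Many registries also expose a tag-listing API.  For example, assuming the image is hosted on Docker Hub (other registries have their own endpoints), you can list the available tags for &lt;code&gt;redis&lt;/code&gt; with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl -s https://registry.hub.docker.com/v2/repositories/library/redis/tags/ | jq -r '.results[].name'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;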
&lt;h2&gt;
  
  
  Troubleshooting: Unable to pull a private image
&lt;/h2&gt;

&lt;p&gt;As we mentioned above for the &lt;code&gt;invalid image&lt;/code&gt; name, a private image that you don't&lt;br&gt;
have access to will return the same error messages.&lt;/p&gt;

&lt;p&gt;If you determined that your image is private, you have to give the pod a secret that&lt;br&gt;
has the proper credentials to allow it to pull the image.  This can be the same&lt;br&gt;
credential that you use locally to pull the image, or a separate read-only&lt;br&gt;
machine credential.&lt;/p&gt;

&lt;p&gt;Either way, you need to do at least two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the credential secret to Kubernetes&lt;/li&gt;
&lt;li&gt;Add the reference of the secret to use in your pod definition
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-namespace&lt;/span&gt; &amp;lt;YOUR NAMESPACE&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
create secret docker-registry registry-secret &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--docker-server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://index.docker.io/v1/ &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--docker-username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;THE USERNAME&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--docker-password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;THE PASSWORD&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--docker-email&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;not-needed@example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;In this case the secret name is: &lt;code&gt;registry-secret&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then add this reference so that your pod knows to use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: registry-secret
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;More information: &lt;a href="https://kubernetes.io/docs/concepts/containers/images/#referring-to-an-imagepullsecrets-on-a-pod"&gt;https://kubernetes.io/docs/concepts/containers/images/#referring-to-an-imagepullsecrets-on-a-pod&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  More troubleshooting blog posts
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://managedkube.com/kubernetes/k8sbot/troubleshooting/pending/pod/2019/02/22/pending-pod.html"&gt;Kubernetes Troubleshooting Walkthrough - Pending pods&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://managedkube.com/kubernetes/pod/failure/crashloopbackoff/k8sbot/troubleshooting/2019/02/12/pod-failure-crashloopbackoff.html"&gt;Kubernetes Troubleshooting Walkthrough - Pod Failure CrashLoopBackOff&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://managedkube.com/kubernetes/trace/ingress/service/port/not/matching/pod/k8sbot/2019/02/13/trace-ingress.html"&gt;Kubernetes Troubleshooting Walkthrough - Tracing through an ingress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>kubectl</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
