<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gianluca Brindisi</title>
    <description>The latest articles on Forem by Gianluca Brindisi (@gbrindisi).</description>
    <link>https://forem.com/gbrindisi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F405310%2F20fa7a9b-c9d2-492e-be08-7ff2e5a45637.png</url>
      <title>Forem: Gianluca Brindisi</title>
      <link>https://forem.com/gbrindisi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gbrindisi"/>
    <language>en</language>
    <item>
      <title>A Collection of Cloud Security Tools</title>
      <dc:creator>Gianluca Brindisi</dc:creator>
      <pubDate>Sun, 18 Oct 2020 00:00:00 +0000</pubDate>
      <link>https://forem.com/gbrindisi/a-collection-of-cloud-security-tools-4j85</link>
      <guid>https://forem.com/gbrindisi/a-collection-of-cloud-security-tools-4j85</guid>
      <description>&lt;p&gt;I’ve built a &lt;a href="https://cloudberry.engineering/tool/"&gt;directory of open source &lt;strong&gt;cloud security tools&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A good part of my day-to-day is spent trying to automate away problems. Over the years I learned to invest my time wisely, and I made a habit of researching and using existing tools before coding my own.&lt;/p&gt;

&lt;p&gt;As a consequence I have a fairly large collection of utilities I keep nurturing, alongside references, commands and debugging adventures.&lt;/p&gt;

&lt;p&gt;I thought I might as well make it public, so here we are: &lt;a href="https://cloudberry.engineering/tool/"&gt;check it out&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Every tool has a page where I (will) store my own notes: see my very own &lt;a href="https://cloudberry.engineering/tool/docker-security"&gt;docker-security&lt;/a&gt; as an example. I am still cleaning up most of them and will publish a bit at a time.&lt;/p&gt;

&lt;p&gt;I’ll also commit a more compact version &lt;a href="https://github.com/gbrindisi/cloud-security-tools"&gt;to GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the meantime, if you have some tools to share &lt;a href="mailto:hello@cloudberry.engineering"&gt;please do&lt;/a&gt;!&lt;/p&gt;




&lt;p&gt;Did you find this post interesting? I’d love to hear your thoughts: &lt;code&gt;hello AT cloudberry.engineering&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I write about &lt;strong&gt;cloud security&lt;/strong&gt; on my &lt;a href="https://cloudberry.engineering"&gt;blog&lt;/a&gt;, you can subscribe to the &lt;a href="https://cloudberry.engineering/index.xml"&gt;RSS feed&lt;/a&gt; or to the &lt;a href="http://eepurl.com/hecKnf"&gt;newsletter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>tools</category>
    </item>
    <item>
      <title>How to find and delete idle GCP Projects</title>
      <dc:creator>Gianluca Brindisi</dc:creator>
      <pubDate>Tue, 13 Oct 2020 15:28:01 +0000</pubDate>
      <link>https://forem.com/gbrindisi/how-to-find-and-delete-idle-gcp-projects-l0k</link>
      <guid>https://forem.com/gbrindisi/how-to-find-and-delete-idle-gcp-projects-l0k</guid>
<description>&lt;p&gt;A constant source of pain in Google Cloud Platform (GCP), and everywhere else, is the amount of unmaintained resources: idle virtual machines, old buckets, IAM policies, DNS records and so on. They contribute to the attack surface, and the chance of a vulnerability increases with time.&lt;/p&gt;

&lt;p&gt;Shutting off resources is such a low-hanging fruit from a risk perspective that as a security engineer you should make it a daily habit.&lt;/p&gt;

&lt;p&gt;After all the most secure computer is the one that’s been turned off!&lt;/p&gt;

&lt;h2&gt;
  
  
  How to find the cruft
&lt;/h2&gt;

&lt;p&gt;The bigger and more complex a cloud infrastructure becomes, the harder it gets to find unmaintained stuff.&lt;/p&gt;

&lt;p&gt;Having an inventory system in place, as early as possible, would prevent so many headaches but even the most enlightened leadership will have a hard time justifying the investment. &lt;/p&gt;

&lt;p&gt;Eventually the problem will outgrow security and spill into other areas such as &lt;strong&gt;cloud spending&lt;/strong&gt; (&lt;em&gt;gasp!&lt;/em&gt;): that’s when everyone will start talking about inventories, accountability and resource lifecycle.&lt;/p&gt;

&lt;p&gt;Until then, how do you find things to kill?&lt;/p&gt;

&lt;p&gt;Start with the Projects. &lt;br&gt;
The GCP model encourages segmenting the infrastructure’s logical areas into Projects, and a lot of audit facilities are aggregated at that level.&lt;br&gt;
(&lt;strong&gt;Obviously Projects also silently introduce cost multipliers such as VPCs, but we will leave this for another rant.&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;There are three sources one can query:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Activity Logs&lt;/li&gt;
&lt;li&gt;Billing Reports&lt;/li&gt;
&lt;li&gt;IAM Policies&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use these three and you can build your own personal heuristic that will answer the question: &lt;strong&gt;can I kill this?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Activity Logs
&lt;/h2&gt;

&lt;p&gt;Events that change state and configuration of cloud services are collected in the &lt;a href="https://cloud.google.com/logging/docs/audit/"&gt;Admin Activity&lt;/a&gt; and System Event audit logs.&lt;/p&gt;

&lt;p&gt;While they both track configuration changes, only the Admin Activity log tracks &lt;strong&gt;manual changes driven by direct user action&lt;/strong&gt;: creation of resources, change of IAM policies, etc.&lt;/p&gt;

&lt;p&gt;The retention is ~400 days; I would check the frequency of these log entries to understand whether services have been configured recently. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Usually an active project implies an active administrator.&lt;/strong&gt;&lt;/p&gt;
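
&lt;p&gt;As a quick sketch, the Admin Activity log can be queried with &lt;code&gt;gcloud&lt;/code&gt;; the project ID and the 90-day window below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the timestamp of the most recent Admin Activity entry, if any.
# An empty result suggests nobody touched the project in the last 90 days.
gcloud logging read \
    'logName:"cloudaudit.googleapis.com%2Factivity"' \
    --project=my-idle-candidate \
    --freshness=90d \
    --limit=1 \
    --format="value(timestamp)"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;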

&lt;h2&gt;
  
  
  Billing Reports
&lt;/h2&gt;

&lt;p&gt;We can query the billing account(s) to get a per project cost/usage report. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If the cost graph is flat that could be an indicator that the project is idling.&lt;/strong&gt; &lt;br&gt;
In contrast, plotting an active project’s cost will result in a bumpy curve, as buckets fill up, logs are generated and resources are added and removed over time.&lt;/p&gt;

&lt;p&gt;It’s worth keeping in mind that we can also get &lt;a href="https://cloud.google.com/compute/docs/usage-export"&gt;usage reports&lt;/a&gt; for Compute Engine services. It’s mostly a data point about the lifecycle of resources rather than their usage - but can still contribute to our killer algorithm.&lt;/p&gt;
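
&lt;p&gt;If you export billing data to BigQuery, a rough sketch of the per-project query looks like this (the dataset and table names are placeholders for your own billing export):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Projects with (near) zero spend over the last 90 days are idling candidates
bq query --use_legacy_sql=false '
  SELECT project.id, SUM(cost) AS total_cost
  FROM `my_dataset.gcp_billing_export_v1_XXXXXX`
  WHERE usage_start_time &amp;gt; TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY)
  GROUP BY project.id
  ORDER BY total_cost ASC'

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;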

&lt;h2&gt;
  
  
  IAM Policies
&lt;/h2&gt;

&lt;p&gt;Nobody knows a project better than its owner, so we can’t go wrong if we ask politely. The problem is finding that person.&lt;/p&gt;

&lt;p&gt;The solution is to scrape the IAM policy.&lt;br&gt;
I’d start by searching role bindings for &lt;code&gt;Owner&lt;/code&gt;, &lt;code&gt;Editor&lt;/code&gt; or &lt;code&gt;Viewer&lt;/code&gt; as they are the basic roles in GCP.&lt;/p&gt;

&lt;p&gt;If we are lucky we will get a Group or a User’s email.&lt;/p&gt;

&lt;p&gt;If we get a Service Account (SA) we can investigate it. &lt;br&gt;
When a SA is bound to a basic role, 99% of the time it was created in another project. So we can recursively scrape that project’s IAM and keep going until we find a human.&lt;/p&gt;
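
&lt;p&gt;As a sketch, the basic role bindings of a project can be listed with &lt;code&gt;gcloud&lt;/code&gt; (the project ID is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List members bound to the basic roles: Users and Groups are potential owners,
# Service Accounts point back to the project they were created in
gcloud projects get-iam-policy my-idle-candidate \
    --flatten="bindings[].members" \
    --filter="bindings.role:(roles/owner roles/editor roles/viewer)" \
    --format="table(bindings.role, bindings.members)"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;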

&lt;h2&gt;
  
  
  Look ma, no influencing skills
&lt;/h2&gt;

&lt;p&gt;There are two things I learned the hard way as a security artisan:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do not kill stuff without asking first&lt;/li&gt;
&lt;li&gt;Do not flood people with alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As such my cruft hunting algorithm is called &lt;code&gt;can_I_MAYBE_kill_this()&lt;/code&gt; and works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I combine billing reports and admin activity logs to figure out if the project has been idle for a while. I want to rule out projects that are obviously active.&lt;/li&gt;
&lt;li&gt;I scrape the IAM policy and find potential owners.&lt;/li&gt;
&lt;li&gt;I send them an email asking who the technical contact for the project is, because I need to talk to them about a security situation. The combination of &lt;strong&gt;asking for someone accountable&lt;/strong&gt; and &lt;strong&gt;mentioning security&lt;/strong&gt; usually triggers a game of hot potato that ends with the project killed.&lt;/li&gt;
&lt;li&gt;If I get no answer, I nudge that I will delete the project in X days. This is the part where I self-reflect on how I ended up threatening good people for a living, and I manually check the project again to find more evidence.&lt;/li&gt;
&lt;li&gt;I delete the project.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note that deleting a project triggers a soft deletion: you have 30 days to change your mind before resources are actually decommissioned (although Cloud Storage resources are decommissioned faster, usually within a week).&lt;/p&gt;
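
&lt;p&gt;The deletion, and the 30-day undo, boil down to two commands (the project ID is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Soft-delete: the project is scheduled for decommissioning
gcloud projects delete my-idle-candidate

# Within the 30-day window you can still change your mind
gcloud projects undelete my-idle-candidate

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;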

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The takeaway is that finding and shutting down idle cloud resources is not straightforward and can’t be solved with a cron job.&lt;/p&gt;

&lt;p&gt;Shut down other people’s resources at your risk and peril: be nice, ask, nudge and implore them to take care of their things.&lt;/p&gt;

&lt;p&gt;Keep track of every time you have to track down owners. Make &lt;strong&gt;accountability and resource lifecycle&lt;/strong&gt; a chapter of your threat model and build a case to lobby for an inventory system.&lt;/p&gt;

&lt;p&gt;If you have someone in charge of keeping track of spending go and talk to them: change is never introduced in isolation, and there isn’t anything better than mixing cost savings and security to get &lt;del&gt;a budget&lt;/del&gt; attention. &lt;/p&gt;

&lt;p&gt;Happy hunting.&lt;/p&gt;




&lt;p&gt;Did you find this post interesting? I’d love to hear your thoughts: &lt;code&gt;hello AT cloudberry.engineering&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I write about &lt;strong&gt;cloud security&lt;/strong&gt; on my &lt;a href="https://cloudberry.engineering"&gt;blog&lt;/a&gt;, you can subscribe to the &lt;a href="https://cloudberry.engineering/index.xml"&gt;RSS feed&lt;/a&gt; or to the &lt;a href="http://eepurl.com/hecKnf"&gt;newsletter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>gcp</category>
    </item>
    <item>
      <title>Dockerfile Security Best Practices</title>
      <dc:creator>Gianluca Brindisi</dc:creator>
      <pubDate>Sun, 04 Oct 2020 21:32:41 +0000</pubDate>
      <link>https://forem.com/gbrindisi/dockerfile-security-best-practices-13n9</link>
      <guid>https://forem.com/gbrindisi/dockerfile-security-best-practices-13n9</guid>
      <description>&lt;p&gt;Container security is a broad problem space and there are many low hanging fruits one can harvest to mitigate risks. A good starting point is to follow some rules when writing Dockerfiles.&lt;/p&gt;

&lt;p&gt;I’ve compiled a list of common security issues and how to avoid them. For every issue I’ve also written an &lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent&lt;/a&gt; (OPA) rule ready to be used to statically analyze your Dockerfiles with &lt;a href="https://conftest.dev"&gt;conftest&lt;/a&gt;. You can’t shift more left than this!&lt;/p&gt;

&lt;p&gt;You can find the &lt;code&gt;.rego&lt;/code&gt; rule set in &lt;a href="https://github.com/gbrindisi/dockerfile-security"&gt;this repository&lt;/a&gt;. I appreciate feedback and contributions.&lt;/p&gt;
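
&lt;p&gt;To try the rules against a Dockerfile, something along these lines should do (assuming you clone the rule set into a local &lt;code&gt;policy&lt;/code&gt; directory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/gbrindisi/dockerfile-security policy
conftest test --policy policy Dockerfile

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;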

&lt;h2&gt;
  
  
  Do not store secrets in environment variables
&lt;/h2&gt;

&lt;p&gt;Secrets distribution is a hairy problem and it’s easy to do it wrong. For containerized applications one can surface them either from the filesystem by mounting volumes or more handily through environment variables.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;ENV&lt;/code&gt; to store secrets is bad practice because Dockerfiles are usually distributed with the application, so there is no difference from hard coding secrets in code.&lt;/p&gt;

&lt;p&gt;How to detect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;secrets_env = [
    "passwd",
    "password",
    "pass",
 # "pwd", can't use this one   
    "secret",
    "key",
    "access",
    "api_key",
    "apikey",
    "token",
    "tkn"
]

deny[msg] {    
    input[i].Cmd == "env"
    val := input[i].Value
    contains(lower(val[_]), secrets_env[_])
    msg = sprintf("Line %d: Potential secret in ENV key found: %s", [i, val])
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Only use trusted base images
&lt;/h2&gt;

&lt;p&gt;Supply chain attacks for containerized application will also come from the hierarchy of layers used to build the container itself.&lt;/p&gt;

&lt;p&gt;The main culprit is obviously the base image used. Untrusted base images are a high risk and whenever possible should be avoided.&lt;/p&gt;

&lt;p&gt;Docker provides a &lt;a href="https://docs.docker.com/docker-hub/official_images/"&gt;set of official base images&lt;/a&gt; for most used operating systems and apps. By using them, we minimize risk of compromise by leveraging some sort of shared responsibility with Docker itself.&lt;/p&gt;

&lt;p&gt;How to detect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deny[msg] {
    input[i].Cmd == "from"
    val := split(input[i].Value[0], "/")
    count(val) &amp;gt; 1
    msg = sprintf("Line %d: use a trusted base image", [i])
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This rule is tuned towards DockerHub’s official images. It’s very dumb, since it only checks for the presence of a namespace.&lt;/p&gt;

&lt;p&gt;The definition of trust depends on your context: change this rule accordingly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do not use ‘latest’ tag for base image
&lt;/h2&gt;

&lt;p&gt;Pinning the version of your base images will give you some peace of mind with regards to the predictability of the containers you are building.&lt;/p&gt;

&lt;p&gt;If you rely on &lt;code&gt;latest&lt;/code&gt; you might silently inherit updated packages that, in the best worst case, impact your application’s reliability and, in the worst worst case, introduce a vulnerability.&lt;/p&gt;

&lt;p&gt;How to detect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deny[msg] {
    input[i].Cmd == "from"
    val := split(input[i].Value[0], ":")
    contains(lower(val[1]), "latest")
    msg = sprintf("Line %d: do not use 'latest' tag for base images", [i])
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Avoid curl bashing
&lt;/h2&gt;

&lt;p&gt;Pulling stuff from the internet and piping it into a shell is as bad as it gets. Unfortunately it’s a widespread way to streamline software installations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://cloudberry.engineering/absolutely-trustworthy.sh | sh

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The risk is the same one framed for supply chain attacks, and it &lt;strong&gt;boils down to trust&lt;/strong&gt;. If you really have to curl bash, do it right:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use a trusted source&lt;/li&gt;
&lt;li&gt;use a secure connection&lt;/li&gt;
&lt;li&gt;verify the authenticity and integrity of what you download&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How to detect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deny[msg] {
    input[i].Cmd == "run"
    val := concat(" ", input[i].Value)
    matches := regex.find_n("(curl|wget)[^|^&amp;gt;]*[|&amp;gt;]", lower(val), -1)
    count(matches) &amp;gt; 0
    msg = sprintf("Line %d: Avoid curl bashing", [i])
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Do not upgrade your system packages
&lt;/h2&gt;

&lt;p&gt;This might be a bit of a stretch, but the reasoning is the following: you want to pin the version of your software dependencies, and if you do &lt;code&gt;apt-get upgrade&lt;/code&gt; you will effectively upgrade them all to the latest version.&lt;/p&gt;

&lt;p&gt;If you do upgrade &lt;strong&gt;and&lt;/strong&gt; you are using the &lt;code&gt;latest&lt;/code&gt; tag for the base image, you amplify the unpredictability of your dependencies tree.&lt;/p&gt;

&lt;p&gt;What you want to do is to pin the base image version and just &lt;code&gt;apt/apk update&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;How to detect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upgrade_commands = [
    "apk upgrade",
    "apt-get upgrade",
    "dist-upgrade",
]

deny[msg] {
    input[i].Cmd == "run"
    val := concat(" ", input[i].Value)
    contains(val, upgrade_commands[_])
    msg = sprintf("Line %d: Do not upgrade your system packages", [i])
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Do not use ADD if possible
&lt;/h2&gt;

&lt;p&gt;One little feature of the &lt;code&gt;ADD&lt;/code&gt; command is that you can point it to a remote URL and it will fetch the content at build time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ADD https://cloudberry.engineering/absolutely-trust-me.tar.gz

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Ironically, the official docs suggest using curl bashing instead.&lt;/p&gt;

&lt;p&gt;From a security perspective the same advice applies: don’t. Get whatever content you need beforehand, verify it, and then &lt;code&gt;COPY&lt;/code&gt; it. But if you really have to, &lt;strong&gt;use trusted sources over secure connections&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Note: if you have a fancy build system that dynamically generates Dockerfiles, then &lt;code&gt;ADD&lt;/code&gt; is effectively a sink asking to be exploited.&lt;/p&gt;

&lt;p&gt;How to detect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deny[msg] {
    input[i].Cmd == "add"
    msg = sprintf("Line %d: Use COPY instead of ADD", [i])
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Do not root
&lt;/h2&gt;

&lt;p&gt;Root in a container is the same root as on the host machine, restricted only by the docker daemon configuration. No matter the limitations, if an actor breaks out of the container they will still be able to find a way to get full access to the host.&lt;/p&gt;

&lt;p&gt;Of course this is not ideal and your threat model can’t ignore the risk posed by running as root.&lt;/p&gt;

&lt;p&gt;As such it is best to always specify a user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USER hopefullynotroot

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that explicitly setting a user in the Dockerfile is just one layer of defence and won’t solve the whole &lt;a href="https://www.redhat.com/en/blog/understanding-root-inside-and-outside-container"&gt;running as root problem&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instead one can — and &lt;em&gt;should&lt;/em&gt; — adopt a defence in depth approach and mitigate further across the whole stack: strictly configure the docker daemon or use a rootless container solution, restrict the runtime configuration (prohibit &lt;code&gt;--privileged&lt;/code&gt; if possible, etc), and so on.&lt;/p&gt;

&lt;p&gt;How to detect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;any_user {
    input[i].Cmd == "user"
}

deny[msg] {
    not any_user
    msg = "Do not run as root, use USER instead"
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Do not sudo
&lt;/h2&gt;

&lt;p&gt;As a corollary to &lt;code&gt;do not root&lt;/code&gt;, you shall not sudo either.&lt;/p&gt;

&lt;p&gt;Even if you run as a user make sure the user is not in the &lt;code&gt;sudoers&lt;/code&gt; club.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deny[msg] {
    input[i].Cmd == "run"
    val := concat(" ", input[i].Value)
    contains(lower(val), "sudo")
    msg = sprintf("Line %d: Do not use 'sudo' command", [i])
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Acknowledgements
&lt;/h2&gt;

&lt;p&gt;This work has been inspired and is an iteration on &lt;a href="https://blog.madhuakula.com/dockerfile-security-checks-using-opa-rego-policies-with-conftest-32ab2316172f"&gt;prior art&lt;/a&gt; from &lt;a href="https://blog.madhuakula.com/@madhuakula"&gt;Madhu Akula&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Did you find this post interesting? I’d love to hear your thoughts: &lt;code&gt;hello AT cloudberry.engineering&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I write about &lt;strong&gt;cloud security&lt;/strong&gt; on my &lt;a href="https://cloudberry.engineering"&gt;blog&lt;/a&gt;, you can subscribe to the &lt;a href="https://cloudberry.engineering/index.xml"&gt;RSS feed&lt;/a&gt; or to the &lt;a href="http://eepurl.com/hecKnf"&gt;newsletter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>Stricter Access Control to Google Cloud Registry</title>
      <dc:creator>Gianluca Brindisi</dc:creator>
      <pubDate>Sat, 26 Sep 2020 14:31:29 +0000</pubDate>
      <link>https://forem.com/gbrindisi/stricter-access-control-to-google-cloud-registry-2k81</link>
      <guid>https://forem.com/gbrindisi/stricter-access-control-to-google-cloud-registry-2k81</guid>
      <description>&lt;p&gt;Google Cloud Registry (GCR) is the Docker container registry offered by Google Cloud Platform (GCP). Under the hood it's an interface on top of Google Cloud Storage (GCS), and it’s so thin that access control is entirely delegated to the storage layer. &lt;/p&gt;

&lt;h2&gt;
  
  
  There are no dedicated roles
&lt;/h2&gt;

&lt;p&gt;In fact, there are no dedicated Identity Access Management (IAM) Roles to govern publishing and retrieval of container images: to push and pull we must use role bindings that will grant write (and read) to the underlying bucket that GCR is using.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://cloud.google.com/container-registry/docs/access-control#grant-bucket"&gt;the docs&lt;/a&gt; this bucket is &lt;code&gt;artifacts.&amp;lt;PROJECT-ID&amp;gt;.appspot.com&lt;/code&gt; and the roles to use are &lt;code&gt;roles/storage.admin&lt;/code&gt; and &lt;code&gt;roles/storage.objectViewer&lt;/code&gt; (but any powerful primitive role such as  &lt;code&gt;Owner&lt;/code&gt;, &lt;code&gt;Editor&lt;/code&gt; and &lt;code&gt;Viewer&lt;/code&gt; will do).&lt;/p&gt;

&lt;p&gt;The role binding can be applied to the IAM policy of the Project, Folder, Organization or the bucket itself. &lt;/p&gt;

&lt;h2&gt;
  
  
  What's the risk
&lt;/h2&gt;

&lt;p&gt;While binding the role on the bucket's IAM could be fine, binding on an IAM policy higher in the hierarchy will result in a wider authorization grant affecting all buckets in scope.&lt;/p&gt;

&lt;p&gt;Such a grant could have an impact on your compliance posture. &lt;br&gt;
A common example is if one of the buckets contains Personally Identifiable Information (PII) and your organization is subject to GDPR.&lt;/p&gt;

&lt;p&gt;IAM is tricky and things get messier when the number of Projects we need to administer increases and there is a business case to give programmatic access to all GCR instances. For example if we have a centralized build system that needs to push container images, or if we need to integrate a third party container scanner.&lt;/p&gt;

&lt;p&gt;In such cases, especially when a third party is involved, binding a Service Account with read/write permissions to the GCS layer is unacceptable as it will increase a potential attack's &lt;a href="https://cloudberry.engineering/article/lateral-movement-cloud"&gt;blast radius&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  How to mitigate
&lt;/h2&gt;

&lt;p&gt;While we wait for Google to implement a set of dedicated Roles (&lt;a href="https://cloud.google.com/artifact-registry/docs/access-control#permissions"&gt;see Artifact Registry&lt;/a&gt;), there are a couple of solutions we can adopt to minimize the authorization grant.&lt;/p&gt;

&lt;p&gt;The first is organizational: minimize the number of GCR instances. &lt;br&gt;
Ideally, if you can use a single instance you can bind the Role on the associated bucket’s IAM policy. &lt;br&gt;
A small number of instances could be managed that way, but I’ll let you decide what &lt;em&gt;small&lt;/em&gt; means in your context.&lt;/p&gt;

&lt;p&gt;The second solution is technical: leverage &lt;a href="https://cloud.google.com/iam/docs/conditions-overview"&gt;IAM Conditions&lt;/a&gt; to reduce the scope of the role binding to only the buckets that are used by GCR.&lt;br&gt;
IAM Conditions is a feature of Cloud IAM that allows operators to scope down role bindings.&lt;/p&gt;

&lt;p&gt;Luckily these buckets follow a similar naming pattern, so we can set up a role binding that is applied only when the bucket’s name matches, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "expression": "resource.name.startsWith(\"projects/_/buckets/artifacts\")",
    "title": "GCR buckets only",
    "description": "Reduce the binding scope to affect only buckets used by GCR"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This solution is pragmatic and scales well with the number of Projects / Folders affected, as long as there are no other buckets named &lt;code&gt;artifacts*&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Keep in mind that you need to use the full bucket identifier in the condition, and if GCR is configured to use explicit storage regions, the bucket name will be &lt;code&gt;(eu|us|asia).artifacts.&amp;lt;PROJECT-ID&amp;gt;.appspot.com&lt;/code&gt;.&lt;/p&gt;
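
&lt;p&gt;For reference, attaching such a conditional binding might look like this (the member and project are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Grant read access to GCR buckets only, via an IAM Condition
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:scanner@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer" \
    --condition='expression=resource.name.startsWith("projects/_/buckets/artifacts"),title=GCR buckets only'

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;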




&lt;p&gt;Did you find this post interesting? I’d love to hear your thoughts: &lt;code&gt;hello AT cloudberry.engineering&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I write about &lt;strong&gt;cloud security&lt;/strong&gt; on my &lt;a href="https://cloudberry.engineering"&gt;blog&lt;/a&gt;, you can subscribe to the &lt;a href="https://cloudberry.engineering/index.xml"&gt;RSS feed&lt;/a&gt; or to the &lt;a href="http://eepurl.com/hecKnf"&gt;newsletter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>gcp</category>
      <category>containers</category>
    </item>
    <item>
      <title>Lateral Movement in the Cloud</title>
      <dc:creator>Gianluca Brindisi</dc:creator>
      <pubDate>Wed, 16 Sep 2020 22:53:55 +0000</pubDate>
      <link>https://forem.com/gbrindisi/lateral-movement-in-the-cloud-4ph0</link>
      <guid>https://forem.com/gbrindisi/lateral-movement-in-the-cloud-4ph0</guid>
      <description>&lt;p&gt;In the context of incident response lateral movement is how attackers are able to penetrate deeper inside a system. Understanding this concept is critical to contain an ongoing breach.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Attack Lifecycle
&lt;/h2&gt;

&lt;p&gt;There are many &lt;a href="https://en.wikipedia.org/wiki/Kill_chain#The_cyber_kill_chain"&gt;models&lt;/a&gt; that help to understand an attack lifecycle in depth, but from a practical perspective assume all attackers will do the following endlessly: &lt;strong&gt;compromise a resource&lt;/strong&gt;, &lt;strong&gt;gain persistence&lt;/strong&gt;, &lt;strong&gt;move laterally&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bhNj0V9B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudberry.engineering/attacklifecycle-lateralmovement.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bhNj0V9B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudberry.engineering/attacklifecycle-lateralmovement.jpg" alt="The attack lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a defender you will need to walk forward and backward along this cycle to stop the bleeding, and the tricky part is identifying the &lt;strong&gt;entry point&lt;/strong&gt;: the resource that was compromised first.&lt;/p&gt;

&lt;p&gt;The entry point is the front door you left open. It is usually a resource that is publicly exposed like a virtual machine running a public website.&lt;/p&gt;

&lt;p&gt;If you identify a compromised resource that is not publicly exposed, most probably the attacker reached it by &lt;strong&gt;moving laterally&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modelling the movements
&lt;/h2&gt;

&lt;p&gt;Lateral movement happens when the attacker compromises other resources from one that has already been breached.&lt;/p&gt;

&lt;p&gt;It is enabled by two factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What other resources can be reached from a compromised one&lt;/li&gt;
&lt;li&gt;What credentials can be found on compromised resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These two factors define the &lt;strong&gt;blast radius&lt;/strong&gt;: the area that an attacker can &lt;em&gt;traverse&lt;/em&gt; inside your infrastructure and, as a consequence, the impact of the attack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BQoH9Hhm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudberry.engineering/blastradius.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BQoH9Hhm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudberry.engineering/blastradius.jpg" alt="The blast radius"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The textbook example of lateral movement is breaching a website, gaining access to the virtual machine running it, finding the credentials to the backend database and compromising it.&lt;/p&gt;

&lt;p&gt;In the cloud you can model attacker movements on three layers, which are loosely related to the way the cloud stack is segmented (IaaS, PaaS, SaaS):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The network layer.&lt;/li&gt;
&lt;li&gt;The Identity &amp;amp; Access Management (IAM) layer.&lt;/li&gt;
&lt;li&gt;The application layer.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Network Layer
&lt;/h2&gt;

&lt;p&gt;The network layer is mostly tied to infrastructure services (the “I” in IaaS).&lt;/p&gt;

&lt;p&gt;It boils down to a classic network security problem: which cloud services are connected to the same Virtual Private Cloud (VPC), which firewall rules are in place, etc.&lt;/p&gt;

&lt;p&gt;In the worst case a compromised virtual machine’s blast radius is everything attached to the same VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity &amp;amp; Access Management Layer
&lt;/h2&gt;

&lt;p&gt;IAM is how access control is governed in the cloud. It’s about authorising identities to perform certain actions on specific cloud services.&lt;/p&gt;

&lt;p&gt;The concept of identity is fluid between cloud providers but it can be described as the authenticated entity to which authorisation grants are assigned. A user, a group, a service account, a workload.&lt;/p&gt;

&lt;p&gt;Because identities can be very different things, and authorisation grants can be very granular, understanding the blast radius becomes complicated quickly.&lt;/p&gt;

&lt;p&gt;From the attacker’s perspective, compromising a user or compromising a random cloud service can have the very same impact if they carry the same authorisation grants.&lt;/p&gt;

&lt;p&gt;From a defender’s point of view, instead, you will hardly be able to apply the same security controls, the same auditing and the same monitoring practices.&lt;/p&gt;

&lt;p&gt;In short, IAM is the layer where the economics of an attack skew in favour of attackers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Layer
&lt;/h2&gt;

&lt;p&gt;All the secrets and credentials that are bundled in the application layer of a service will also contribute to the blast radius, since they can be used to access and compromise further resources.&lt;/p&gt;

&lt;p&gt;Think of passwords to databases, API tokens, credentials to third party services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Modelling the lateral movements and the blast radius of an attack is an obligatory step towards successfully containing a breach.&lt;/p&gt;

&lt;p&gt;Incident resolution will only be as effective as how quickly we can answer the question: &lt;em&gt;“what services am I running in my infrastructure, and how can I reach them?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So be prepared: invest in an inventory of running cloud services you can query quickly and map their interconnections on the three different layers.&lt;/p&gt;




&lt;p&gt;Did you find this post interesting? I’d love to hear your thoughts: &lt;code&gt;hello AT cloudberry.engineering&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I write about &lt;strong&gt;cloud security&lt;/strong&gt; on my &lt;a href="https://cloudberry.engineering"&gt;blog&lt;/a&gt;, you can subscribe to the &lt;a href="https://cloudberry.engineering/index.xml"&gt;RSS feed&lt;/a&gt; or to the &lt;a href="http://eepurl.com/hecKnf"&gt;newsletter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>threatmodeling</category>
      <category>incidentresponse</category>
    </item>
  </channel>
</rss>
