<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Eddiesegal</title>
    <description>The latest articles on Forem by Eddiesegal (@eddiesegal).</description>
    <link>https://forem.com/eddiesegal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F176803%2Fcd2434a6-6e0e-4a27-9513-3305d221a1ab.jpg</url>
      <title>Forem: Eddiesegal</title>
      <link>https://forem.com/eddiesegal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/eddiesegal"/>
    <language>en</language>
    <item>
      <title>A Gentle Introduction to Incident Response</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Sun, 14 Jun 2020 17:30:13 +0000</pubDate>
      <link>https://forem.com/eddiesegal/a-gentle-introduction-to-incident-response-3e18</link>
      <guid>https://forem.com/eddiesegal/a-gentle-introduction-to-incident-response-3e18</guid>
      <description>&lt;p&gt;An incident response plan can identify vulnerabilities, and detect and respond to security incidents. The goal of an incident response plan is to facilitate and standardize effective response to incidents and reduce potential damage. In this article, you will learn what is incident response, incident response steps and what components are critical to include in the plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is an Incident Response Plan?
&lt;/h2&gt;

&lt;p&gt;Incident response refers to the actions you take when a cyber attack occurs, and during events of data loss or service outage. Without a solid incident response plan, you are likely to suffer the full effects of a &lt;a href="https://www.exabeam.com/dlp/data-loss-prevention-policies-best-practices-and-evaluating-dlp-software/"&gt;data loss incident&lt;/a&gt;. These incidents can lead to loss of customer data, intellectual property, and trade secrets, and to the resulting compliance fines. An incident response plan usually consists of a set of guidelines and instructions for responding to security incidents.&lt;/p&gt;

&lt;p&gt;An incident response plan enables you to detect issues as fast as possible and minimize damages. A well-developed incident response strategy makes it much harder for cyber criminals to attack your system and steal or manipulate your assets.&lt;/p&gt;

&lt;h2&gt;
  
  
  6 Incident Response Steps
&lt;/h2&gt;

&lt;p&gt;An efficient incident response plan should include the following steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Preparation&lt;/strong&gt;&lt;br&gt;
The preparation stage includes the review and assessment of the underlying security policy of your systems. Assess your potential risks and prioritize security issues. Identify the most sensitive assets, and define the most important incidents your team should focus on. &lt;/p&gt;

&lt;p&gt;Prepare documentation that clearly states the roles, responsibilities, and processes, and create a brief communication plan. However, note that planning is not enough. You also have to recruit CSIRT (Computer Security Incident Response Team) members and train them. Make sure they have access to all relevant tools and systems they need to identify and respond to incidents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Identification&lt;/strong&gt; &lt;br&gt;
The incident response team has to effectively identify anomalies in the normal behavior of organizational systems. The team also has to find out if those anomalies represent real security threats. &lt;/p&gt;

&lt;p&gt;The team should immediately gather additional evidence, decide on severity and type, and document every action when they discover a potential incident. Proper documentation enables companies to prosecute attackers in court by providing answers to questions like Who, What, Where, Why, and How.&lt;/p&gt;
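&lt;p&gt;As a rough illustration of the documentation step, the sketch below (plain Python, with made-up field names and severity labels) shows one way to capture the Who/What/Where answers and timestamp every response action:&lt;/p&gt;

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Minimal evidence record answering Who, What, and Where."""
    who: str       # affected user or reporting analyst
    what: str      # observed anomaly
    where: str     # affected host or network segment
    severity: str  # per your organization's own severity matrix
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    actions: list = field(default_factory=list)

    def log_action(self, description: str) -> None:
        # Timestamp every response action for later review or prosecution.
        self.actions.append(
            (datetime.now(timezone.utc).isoformat(), description))

record = IncidentRecord(
    who="jdoe", what="impossible-travel login", where="vpn-gw-01",
    severity="high")
record.log_action("account disabled pending review")
```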

&lt;p&gt;&lt;strong&gt;3. Containment&lt;/strong&gt; &lt;br&gt;
The immediate goal after a security incident identification is to prevent additional damage from occurring. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short-term containment&lt;/strong&gt;—simple actions like isolating a network segment that is under attack, or shutting down hacked servers and moving the traffic to backup servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term containment&lt;/strong&gt;—creating new clean systems and preparing to bring them online in the recovery stage, while implementing temporary fixes on affected systems in production.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Eradication&lt;/strong&gt;&lt;br&gt;
The team must find the main reason for the attack and prevent similar attacks in the future. For instance, if the authentication system was the root cause of the attack, you have to replace it with a stronger authentication mechanism. Any exploited vulnerability should be immediately patched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Recovery&lt;/strong&gt;&lt;br&gt;
Incident response teams have to carefully bring affected production systems back online to prevent additional incidents. Teams need to decide from which date and time to restore operations, how to verify and test that the affected systems are back to normal, and how long to monitor the systems to ensure activity is back to normal. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Learning&lt;/strong&gt;&lt;br&gt;
The objective of this step is to document things that you could not document during the incident response process. You also need to identify the full scope of the incident with further investigation of how it was eradicated and contained, how the system was recovered, and incident response actions that require improvement. Teams should perform this phase no later than two weeks from the end of the incident, to ensure information is fresh. &lt;/p&gt;

&lt;h2&gt;
  
  
  Considerations for Incident Response Planning
&lt;/h2&gt;

&lt;p&gt;An effective incident response plan should include the following elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistent testing&lt;/strong&gt;—security teams must test the incident response plan before actually using it. Teams should conduct planned or unplanned security drills, running through the plan and identifying weak spots to ensure that the team is ready for a real incident.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Senior management support&lt;/strong&gt;—support from management enables you to recruit the most qualified members for your incident response team. The right kind of support enables you to create processes and information flows for effective incident management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Balance between detail and flexibility&lt;/strong&gt;—the plan has to include specific, actionable incident response steps. However, overly rigid procedures become complex and prevent flexibility in unexpected scenarios. Create a detailed plan, but allow a certain degree of flexibility to support different incidents. Frequent updates also help: review the plan approximately every six months to account for new security issues and attacks that can affect your organization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define your stakeholders&lt;/strong&gt;—the plan should define who should care and be involved in a security incident. This can change depending on the incident type and the targeted organizational resources. Stakeholders could include senior management, department managers, customers, and legal partners.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear communication&lt;/strong&gt;—the plan should clearly define the communication channels of the incident response team. The team has to know what channels to use to transfer information. This part is often overlooked in incident response plans. For instance, the plan should describe what level of detail you can communicate to senior management, IT management, to affected customers, to affected departments, and to the press.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A simple plan&lt;/strong&gt;—incident teams are not likely to follow a complicated plan in real time, even if the plan is very well thought out. Keep details, procedures and steps to a minimum. A simple plan ensures that the team can process and apply the steps as they enter the &lt;a href="https://www.linkedin.com/pulse/fog-war-fow-security-bob-du-charme/"&gt;“fog of war”&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Cyber criminals use advanced technology and social engineering to hack systems, networks, and devices. They deploy bots, use Artificial Intelligence (AI) to imitate human behavior, and trick users into revealing information. Differentiating between regular user behavior and malicious activity is getting harder because hackers constantly improve their techniques.&lt;/p&gt;

&lt;p&gt;Organizations must always update their incident response plans to ensure the safety of their systems and networks. Additional technologies like threat intelligence and UEBA can help keep organizations protected even during zero-day attacks.&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>incidentresponse</category>
    </item>
    <item>
      <title>JAMstack for Marketing: Building Fast and Secure Websites</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Tue, 26 May 2020 05:39:26 +0000</pubDate>
      <link>https://forem.com/eddiesegal/jamstack-for-marketing-building-fast-and-secure-websites-1i9a</link>
      <guid>https://forem.com/eddiesegal/jamstack-for-marketing-building-fast-and-secure-websites-1i9a</guid>
      <description>&lt;p&gt;Today’s web users expect fast and secure websites. They cannot tolerate long page load times, and they value the privacy and security of their sensitive and financial information. To ensure positive user experience and brand authority, you can leverage modern web development approaches like JAMstack.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is JAMstack?
&lt;/h2&gt;

&lt;p&gt;The term JAMstack was introduced by Mathias Biilmann to describe a modern web development approach based on reusable Application Programming Interfaces (APIs), client-side JavaScript, and &lt;a href="https://jamstack.org/resources/"&gt;prebuilt markup&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;JAMstack enables front-end developers to build apps without running a back-end server. A traditional back end consists of three things: an application, a server, and a database. Instead of back-end technologies, developers use APIs to enable connections from front-end frameworks. This approach saves a lot of time and effort.&lt;/p&gt;

&lt;p&gt;The “JAM” in JAMstack stands for JavaScript, APIs, and Markup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript&lt;/strong&gt;—runs entirely on the client side. As a result, the client can handle any dynamic programming during the request/response cycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;APIs&lt;/strong&gt;—all database actions or server-side functions are transformed into reusable APIs. You can access those APIs with third-party services or custom-built tools over HTTPS with JavaScript.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markup&lt;/strong&gt;—JAMstack uses prebuilt markup templates at build time.&lt;/li&gt;
&lt;/ul&gt;
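&lt;p&gt;To make the “prebuilt markup” idea concrete, here is a minimal build-time rendering sketch in Python (the page data, template, and output directory are illustrative):&lt;/p&gt;

```python
from string import Template
from pathlib import Path

# Build-time rendering: content is baked into plain HTML files up front,
# so no server-side work happens per request.
PAGE = Template(
    "<html><head><title>$title</title></head>"
    "<body><h1>$title</h1><p>$body</p></body></html>")

pages = [
    {"slug": "index", "title": "Home", "body": "Welcome to our site."},
    {"slug": "about", "title": "About", "body": "We build fast sites."},
]

out = Path("dist")
out.mkdir(exist_ok=True)
for page in pages:
    html = PAGE.substitute(title=page["title"], body=page["body"])
    # Each page becomes a standalone static file, ready for a CDN.
    (out / f"{page['slug']}.html").write_text(html)
```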

&lt;p&gt;Marketers can leverage JAMstack to build static websites instead of dynamic ones. A static website consists of prebuilt HTML files, each representing an actual page. Static websites do not use any server technologies, and most of the dynamic functionality takes place in the user's browser.&lt;/p&gt;

&lt;p&gt;The static nature of a JAMstack site makes scaling easy and causes little to no operational overhead. In addition, JAMstack improves security, since static websites take databases out of the equation. As a result, hackers cannot use database attacks like SQL injection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Moving Away from Traditional CMS and Plugins
&lt;/h2&gt;

&lt;p&gt;For a long time, marketers have used Content Management Systems (CMS) like WordPress or Drupal to manage website content. A traditional &lt;a href="https://medium.com/@OPTASY.com/headless-cms-vs-traditional-cms-which-one-is-the-best-fit-for-your-needs-d0fa999fe0be"&gt;CMS&lt;/a&gt; gives marketers the autonomy to manage sites with plugins, add-ons, and other user-friendly features.&lt;/p&gt;

&lt;p&gt;WordPress loads its plugins every time a user browses the website, which affects load speed and performance. As a result, plugins and page builders can seriously degrade the user experience and overall performance of dynamic websites managed by a traditional CMS.&lt;/p&gt;

&lt;p&gt;Moving away from the concept of plugins and adopting JAMstack does not mean you have to know how to code like a front-end developer or give up functionality. You can still achieve the same results with dedicated services and tools like headless CMS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Leveraging the Power of JAMstack with Headless CMS
&lt;/h2&gt;

&lt;p&gt;The front end and back end in a traditional CMS like WordPress are tightly coupled. As a result, non-technical users can create, manage, and publish content in a single interface. However, developers have to spend more time delivering sophisticated content to a wide variety of devices.&lt;/p&gt;

&lt;p&gt;A headless CMS separates front-end tasks, like presentation, from back-end content tasks, like storage and management. Developers can build front-end applications without any back-end restrictions by using APIs. A headless CMS lets marketers deliver content to multiple front-ends, making it a perfect match for JAMstack websites.&lt;/p&gt;
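&lt;p&gt;As a conceptual sketch, assume a headless CMS returns content entries as JSON over HTTPS (the payload shape below is invented, not any specific CMS’s schema). The front end then decides how each entry is presented:&lt;/p&gt;

```python
import json

# A made-up example of a headless CMS API response: content only,
# no presentation.
payload = json.loads("""
{"entries": [
  {"type": "heading", "text": "Spring Sale"},
  {"type": "paragraph", "text": "Everything 20% off this week."}
]}
""")

def render(entry: dict) -> str:
    # The front end owns presentation; the CMS only supplies content.
    if entry["type"] == "heading":
        return f"<h2>{entry['text']}</h2>"
    return f"<p>{entry['text']}</p>"

html = "\n".join(render(e) for e in payload["entries"])
# html now holds markup ready to be baked into a static page.
```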

&lt;h2&gt;
  
  
  Benefits of Headless CMS
&lt;/h2&gt;

&lt;p&gt;Marketers use a CMS to manage website content. When managing a JAMstack website with a headless CMS, marketers gain the following benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create content only once&lt;/strong&gt;&lt;br&gt;
Marketers can create content once while enabling developers to display it on any device. This means more time for building engaging user experience and less time spent on administration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Enhanced user experience&lt;/strong&gt;&lt;br&gt;
The client-side just renders content, it does not need to communicate with the back-end system. As a result, the website design feels more &lt;a href="https://cloudinary.com/documentation/responsive_images"&gt;responsive&lt;/a&gt;, fast, and consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. No back-end restrictions&lt;/strong&gt;&lt;br&gt;
Developers can build user experience functionalities by using tools they know without any restrictions and then deliver content using APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disadvantages of a Headless CMS
&lt;/h2&gt;

&lt;p&gt;A headless CMS is not a magic bullet that fixes all your content challenges. It can come with some significant issues that you need to consider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Marketers cannot use What You See Is What You Get (WYSIWYG)&lt;/strong&gt;&lt;br&gt;
A WYSIWYG editor enables you to see how content will look while it is being created. Marketers do not have this option in a headless CMS, since it provides no front-end functionality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No real-time communication&lt;/strong&gt;&lt;br&gt;
A headless CMS does not provide the option to send customer data from the front end to the back end in real time. As a result, you cannot run content analytics or personalize user experiences. Personalization, using data analytics to meet individual user needs, is a fundamental requirement of modern websites.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros &amp;amp; Cons of JAMstack for Marketers
&lt;/h2&gt;

&lt;p&gt;There are more JAMstack benefits for marketing professionals beyond replacing a traditional CMS.&lt;/p&gt;

&lt;p&gt;JAMstack benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improved technical skills&lt;/strong&gt;—when launching a JAMstack website, marketers often find themselves working with code every day. As a result, you can manage and optimize the content more easily after the launch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A better understanding of the site structure&lt;/strong&gt;—when working with JAMstack, marketers are exposed to the building process of the site structure. This leads to a better understanding and communication between dev, design and marketing teams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simultaneous development and content workflow&lt;/strong&gt;—marketers can see the iterative content improvements being made instantly by developers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;JAMstack downsides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Longer delivery times&lt;/strong&gt;—delivery of new marketing material can take longer than with a traditional CMS. It depends on your stack and your team; sometimes marketers need to wait for the development team to address their needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New technology&lt;/strong&gt;—often comes with exciting bugs or issues that can take a while to resolve.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical challenge&lt;/strong&gt;—there may still be parts of the site that are based entirely on code, so you need to edit content and metadata in multiple places.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tips for a Successful JAMstack Website Launch
&lt;/h2&gt;

&lt;p&gt;Launching a JAMstack website requires close collaboration with developers. Developers need to know exactly what functionality you want to build into the website, and you need to be clear about what content and data you expect to push into your static site with a headless CMS.&lt;/p&gt;

&lt;p&gt;Most likely you will need to integrate your website with analytics services and tools. Each JAMstack website may have a different way of integration. Therefore, you need to have a concrete plan from the start, so nothing gets lost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The marketing stack becomes more complex as factors like website speed and performance increasingly influence user experience. JAMstack and headless CMS are effective methods for quick website deployment.&lt;/p&gt;

&lt;p&gt;Keep in mind that JAMstack requires technical skills and seamless collaboration between marketers and development teams. You will also need to integrate with analytics tools and give up real-time analysis.&lt;/p&gt;

&lt;p&gt;If you’re considering adopting JAMstack, assess your situation critically and plan in advance. JAMstack can be ideal for marketers managing content websites that require simple analytics, but it might not work for eShops that need real-time personalization. &lt;/p&gt;

&lt;p&gt;Hopefully, this article has helped you gain a better understanding of JAMstack, and whether it’ll be a good fit for you or any of your projects.&lt;/p&gt;

</description>
      <category>jamstack</category>
      <category>marketing</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Top 5 Open-Source Incident Response Tools</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Thu, 16 Apr 2020 11:08:25 +0000</pubDate>
      <link>https://forem.com/eddiesegal/top-5-open-source-incident-response-tools-4k4j</link>
      <guid>https://forem.com/eddiesegal/top-5-open-source-incident-response-tools-4k4j</guid>
      <description>&lt;p&gt;In the overall field of cybersecurity, incident response is the strategy that covers how teams, organizations, and tools respond to security events. Typically, you use an incident response plan (IRP) to outline the practices and resources used during cyber security events. While there is much to be said about the composition of an IRP, this article focuses on incident response tooling, including an overview of different IR tools and a review of top open source solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Incident Response?
&lt;/h2&gt;

&lt;p&gt;Incident response (IR) is a strategy you can use to respond to and recover from security incidents. It includes procedures and processes for detecting, identifying, halting, and recovering from an incident. It also typically includes steps to protect your systems from future attacks by applying knowledge gained during response. &lt;/p&gt;

&lt;p&gt;Incident response is performed via an incident response team using an incident response plan. The primary goals of both team and plan include reducing the:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Number of affected systems and users in an incident&lt;/li&gt;
&lt;li&gt;Length and depth of attack &lt;/li&gt;
&lt;li&gt;Amount of damage inflicted by an attack&lt;/li&gt;
&lt;li&gt;Length of recovery time&lt;/li&gt;
&lt;li&gt;Cost of remediation and recovery &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Types of Incident Response Tools
&lt;/h2&gt;

&lt;p&gt;When implementing your &lt;a href="https://www.exabeam.com/incident-response/steps/" rel="noopener noreferrer"&gt;incident response strategy&lt;/a&gt;, there are a variety of tools you can incorporate. These tools can help your teams respond faster and more effectively to most incident types. Many tools can also help you automate monitoring and responses, allowing you to better optimize your resources. &lt;/p&gt;

&lt;p&gt;Below is a breakdown of the most commonly used tools:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;IR tool type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Tool description&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Examples of tools&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;System information and event management (SIEM)&lt;/td&gt;
&lt;td&gt;SIEM solutions are used to collect and aggregate data from logs created by applications, host systems, and network and security tools. These solutions analyze and correlate data to provide insight into system and network events. Solutions can help teams identify, investigate, and track possible incidents.&lt;/td&gt;
&lt;td&gt;Exabeam, AlienVault OSSIM, QRadar, USM, ESM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Intrusion detection system (IDS)&lt;/td&gt;
&lt;td&gt;IDS monitors your network and systems for suspicious activity or known threats. Often, these tools use a combination of behavior baselines and attack signatures to identify events. These tools can then feed attack information to SIEMs for centralization.&lt;/td&gt;
&lt;td&gt;Snort, Suricata, BroIDS, OSSEC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Netflow analyzer&lt;/td&gt;
&lt;td&gt;Netflow analyzers evaluate network traffic internally and across your perimeter. These tools enable you to track activity as it travels across your network, including protocols used and assets accessed.&lt;/td&gt;
&lt;td&gt;Ntop, NfSen, Nfdump&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vulnerability scanner&lt;/td&gt;
&lt;td&gt;Vulnerability scanners enable you to assess your systems for known issues and vulnerabilities. For example, out-of-date software or misconfigurations. These tools can provide you with an inventory of your risks and recommendations for remediating issues.&lt;/td&gt;
&lt;td&gt;OpenVAS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Availability monitoring&lt;/td&gt;
&lt;td&gt;Availability monitoring tools help you monitor your networks to identify the status of applications or devices. These tools can help you identify drops in performance or device failures early on to limit the impact on your systems and services.&lt;/td&gt;
&lt;td&gt;Nagios&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web proxy&lt;/td&gt;
&lt;td&gt;Web proxies enable you to control what websites are accessed through your network and to log what connections are made. These tools can help you track threats that stem from HTTP connections.&lt;/td&gt;
&lt;td&gt;Squid Proxy, IPFire&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
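&lt;p&gt;To illustrate the SIEM idea of aggregating and correlating events, here is a toy Python sketch (the log lines, field format, and alert threshold are all fabricated for the example):&lt;/p&gt;

```python
from collections import Counter

# Events aggregated from several sources into one place, SIEM-style.
logs = [
    "auth sshd failed-login ip=203.0.113.9",
    "auth sshd failed-login ip=203.0.113.9",
    "web nginx 404 ip=198.51.100.4",
    "auth sshd failed-login ip=203.0.113.9",
]

# Correlate: count failed logins per source IP.
failures = Counter(
    line.rsplit("ip=", 1)[1]
    for line in logs if "failed-login" in line)

THRESHOLD = 3  # arbitrary demo threshold
alerts = [ip for ip, n in failures.items() if n >= THRESHOLD]
```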

&lt;h2&gt;
  
  
  Top Open-Source Incident Response Tools
&lt;/h2&gt;

&lt;p&gt;Depending on the level of protection you need and the amount of in-house expertise you have, there are numerous open-source tools you can use. Below are five of the top tools you should consider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GRR Rapid Response&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fi2OhffQ.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fi2OhffQ.jpg"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://grr-doc.readthedocs.io/en/v3.3.0/what-is-grr.html" rel="noopener noreferrer"&gt;GRR Rapid Response&lt;/a&gt; is an incident response framework, developed by Google, that you can use to investigate incidents and collect forensic evidence. It is composed of a client, deployed on the systems you want to investigate, and a server that provides a web-based GUI and API endpoint. GRR Rapid Response is designed to enable you to perform forensic analyses at scale and can operate on hundreds of thousands of machines. &lt;/p&gt;

&lt;p&gt;Features of GRR Rapid Response include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client libraries in Python, Go, and PowerShell&lt;/li&gt;
&lt;li&gt;Data export in a variety of formats&lt;/li&gt;
&lt;li&gt;Automated scheduling capabilities&lt;/li&gt;
&lt;li&gt;Asynchronous messaging for scalability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AlienVault OSSIM&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FRSrfIIm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FRSrfIIm.jpg"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cybersecurity.att.com/products/ossim" rel="noopener noreferrer"&gt;AlienVault&lt;/a&gt; OSSIM is a SIEM that was recently incorporated into AT&amp;amp;T’s cybersecurity offerings. It incorporates information from the AlienVault® Open Threat Exchange® (OTX™) to ensure that your system is protected with the latest threat information. &lt;/p&gt;

&lt;p&gt;You can use AlienVault OSSIM on-premises or in virtual environments, such as the cloud. However, the open-source version can only be deployed on a single server. If you want to federate servers, you can upgrade to the paid version.&lt;/p&gt;

&lt;p&gt;Features of AlienVault OSSIM include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Asset discovery&lt;/li&gt;
&lt;li&gt;Vulnerability assessments&lt;/li&gt;
&lt;li&gt;Intrusion detection&lt;/li&gt;
&lt;li&gt;Behavior monitoring&lt;/li&gt;
&lt;li&gt;Event correlation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Malware Information Sharing Platform (MISP)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FN4225i0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FN4225i0.jpg"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.misp-project.org/" rel="noopener noreferrer"&gt;MISP&lt;/a&gt; is a threat intelligence sharing platform you can use to gather, store, correlate, and share threat intelligence. This includes indicators of compromise (IoCs), vulnerability information, financial fraud data, and counter-terrorism information. The purpose behind MISP is to enable organizations to help each other more accurately identify threats and develop methods for detecting threats sooner. &lt;/p&gt;

&lt;p&gt;Features of MISP include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database of IoCs&lt;/li&gt;
&lt;li&gt;Automated correlation engines&lt;/li&gt;
&lt;li&gt;Flexible data model &lt;/li&gt;
&lt;li&gt;Intuitive user interface&lt;/li&gt;
&lt;li&gt;Ability to export and import data in a variety of formats&lt;/li&gt;
&lt;/ul&gt;
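&lt;p&gt;The correlation idea behind a platform like MISP can be sketched in a few lines of plain Python. The indicators below are fabricated, and MISP itself exposes this capability through its own API rather than code like this:&lt;/p&gt;

```python
# A shared IoC database, as other organizations might publish it.
shared_iocs = {
    "203.0.113.9": {"type": "ip-src", "threat": "bruteforce-campaign"},
    "evil.example.com": {"type": "domain", "threat": "phishing-kit"},
}

# Indicators observed locally on your own network.
observed = ["198.51.100.4", "evil.example.com"]

# Correlate local observations against shared intelligence.
matches = {ioc: shared_iocs[ioc] for ioc in observed if ioc in shared_iocs}
```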

&lt;p&gt;&lt;strong&gt;TheHive&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FAR4MPrL.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FAR4MPrL.jpg"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://thehive-project.org/" rel="noopener noreferrer"&gt;TheHive&lt;/a&gt; is a four in one incident response platform that integrates with MISP. It is designed to enable security teams to collaborate in real-time, monitor systems from a central dashboard, and automate responses. TheHive incorporates another tool, &lt;a href="https://github.com/TheHive-Project/Cortex" rel="noopener noreferrer"&gt;Cortex&lt;/a&gt;, that enables you to analyze and automate the collection of network observables. For example, IP addresses, URLs, or hashes.&lt;/p&gt;

&lt;p&gt;Features of TheHive include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated analysis engine&lt;/li&gt;
&lt;li&gt;Integrated threat intelligence platform&lt;/li&gt;
&lt;li&gt;Authentication support&lt;/li&gt;
&lt;li&gt;Case and alert management capabilities&lt;/li&gt;
&lt;li&gt;Custom dashboards and reporting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cyphon&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FA9seeRk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FA9seeRk.jpg"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.cyphon.io/" rel="noopener noreferrer"&gt;Cyphon&lt;/a&gt; is an incident management and response platform that you can use to collect, process, and triage event data. It can collect data from a wide range of sources, including endpoint agents, IDS solutions, packets, vulnerability scanners, and cloud APIs. To use Cyphon, you need to be able to host a Docker container as the platform is designed as a set of microservices. &lt;/p&gt;

&lt;p&gt;Features of Cyphon include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data aggregation&lt;/li&gt;
&lt;li&gt;Centralized dashboard&lt;/li&gt;
&lt;li&gt;Custom alerting with push notifications&lt;/li&gt;
&lt;li&gt;Event prioritization&lt;/li&gt;
&lt;li&gt;Response tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As digital transformation continues to sweep over the globe, more data makes its way into the digital sphere. This is cause for celebration for hackers, who enjoy an increase in the resources they can steal, ransom, sell, and mine for cryptocurrency. However, not every attack needs to result in a breach.&lt;/p&gt;

&lt;p&gt;With the help of incident response tooling, you can ensure that your devices, systems, and networks are monitored continuously. You can set up your IR tools to alert you during an incident, so you can respond swiftly, counter the attack, and prevent a breach of your systems. For efficient response, be sure to centralize controls and configure alerts with as few false positives as possible.&lt;/p&gt;

</description>
      <category>incident</category>
      <category>opensource</category>
      <category>response</category>
    </item>
    <item>
      <title>Top 7 AWS EFS Performance Tuning Tips</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Sun, 05 Apr 2020 05:20:34 +0000</pubDate>
      <link>https://forem.com/eddiesegal/top-7-aws-efs-performance-tuning-tips-4plj</link>
      <guid>https://forem.com/eddiesegal/top-7-aws-efs-performance-tuning-tips-4plj</guid>
      <description>&lt;p&gt;AWS Elastic File System (EFS) is a scalable, cloud-based file system that you can use in AWS. It is designed for Linux-based applications. You can use it as a stand-alone file system, in combination with on-premises resources, or with other AWS services, such as EC2. In this article, you will learn seven techniques for optimizing EFS performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  How EFS Works
&lt;/h2&gt;

&lt;p&gt;EFS is based on the NFSv4 protocol. This mirrors most on-premises file system structures and enables you to migrate files smoothly and simply. When using EFS, you can choose between Standard Access and Infrequent Access. &lt;/p&gt;

&lt;p&gt;Standard Access is designed for frequently accessed items and provides low-latency access. Infrequent Access is designed for long-term storage and data that isn’t often needed. It provides lower-cost storage in exchange for higher latency. &lt;/p&gt;

&lt;p&gt;EFS does not require storage provisioning, allowing you to scale as needed. It is a pay-for-use service. &lt;/p&gt;

&lt;p&gt;Key features of &lt;a href="https://cloud.netapp.com/blog/ebs-efs-amazons3-best-cloud-storage-system"&gt;EFS&lt;/a&gt; include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared storage&lt;/strong&gt;—you can simultaneously access files from up to 1000 EC2 instances across multiple regions and availability zones (AZs). You can also access files from on-premises environments via AWS Direct Connect or a virtual private network (VPN).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable performance&lt;/strong&gt;—IOPS and throughput scale with usage and number of attached instances, up to 500k IOPS and 10 GB/s throughput. Scaling is automatic, ensuring that performance matches size and that you aren’t paying for unnecessary resources. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and compliance&lt;/strong&gt;—you can protect EFS with existing security infrastructures, including Identity and Access Management (IAM) and virtual private cloud (VPC) security groups. You can also define individual file permissions using POSIX. The service includes built-in compliance for common regulatory standards, including SOC, PCI DSS, and HIPAA.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Top 7 AWS EFS Performance Tuning Tips
&lt;/h2&gt;

&lt;p&gt;EFS is designed to optimize performance capabilities for you. However, there are some additional steps you can take to ensure that you are really getting the best possible performance. Below are a few tips that can help.&lt;/p&gt;

&lt;h4&gt;
  
  
  Choose Your Performance Mode
&lt;/h4&gt;

&lt;p&gt;EFS does not use instances. Instead, I/O limits are determined by the performance mode you choose. Your options are General Purpose and Max I/O. General Purpose provides low latency and high scalability for most workloads, while Max I/O scales to higher aggregate throughput in exchange for higher latency.&lt;/p&gt;

&lt;p&gt;Regardless of which option you choose, storage volumes start at 0.5 MB/s baseline throughput and include 7.2 minutes of 100 MB/s burst credits. The only way to permanently increase your throughput is to increase your file system size. &lt;/p&gt;

&lt;p&gt;While you can wait until the size increases naturally, you can also force this growth by writing dummy data to your system. This data can be used to force your limit to whatever you need and can be overwritten as you add live data. If you use this method, be sure not to include dummy data in backups to avoid paying for wasted resources. &lt;/p&gt;
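
&lt;p&gt;A minimal sketch of writing dummy data with &lt;code&gt;dd&lt;/code&gt; is shown below. The mount path and sizes are examples only; in practice you would write tens or hundreds of GiB to meaningfully raise the baseline, and you would exclude these files from backups.&lt;/p&gt;

```shell
# Write zero-filled dummy files to grow the file system and raise its
# throughput baseline. EFS_MOUNT defaults to a temp dir here so the sketch
# is safe to run locally; point it at your real EFS mount (e.g. /mnt/efs).
EFS_MOUNT="${EFS_MOUNT:-$(mktemp -d)}"
# 64 MiB per file for illustration; real dummy data would be far larger
dd if=/dev/zero of="$EFS_MOUNT/dummy-0.bin" bs=1M count=64
ls -lh "$EFS_MOUNT/dummy-0.bin"
```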

&lt;h4&gt;
  
  
  Enable Asynchronous Write
&lt;/h4&gt;

&lt;p&gt;Unless you have a pressing need for synchronously writing to your file system, you should enable the asynchronous write feature. This feature enables you to buffer pending write operations onto EC2 instances. This buffering helps you eliminate the need for a round trip between your client and EFS for each write operation. Shorter trips equal lower latency and faster operations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Monitor With Metrics
&lt;/h4&gt;

&lt;p&gt;You can and should monitor your EFS resources using &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html"&gt;AWS CloudWatch metrics&lt;/a&gt;. CloudWatch is a service that automatically collects metrics data for you. You can access this data from the AWS console, CLI, or via API. &lt;/p&gt;

&lt;p&gt;Use metrics data to stay up-to-date on the performance of your system and to alert yourself to any drops in performance or bottlenecks. If you do see performance issues, you can use the metrics data that CloudWatch provides to identify the problem and correct it. &lt;/p&gt;

&lt;p&gt;In particular, be sure to watch your burst credits metric. Burst credits are used to temporarily boost performance during times of high traffic. If you run out, you will see a sudden, steep drop in performance. &lt;/p&gt;
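
&lt;p&gt;For example, you can pull the &lt;code&gt;BurstCreditBalance&lt;/code&gt; metric from the AWS CLI. The file system ID below is a placeholder, and the GNU &lt;code&gt;date&lt;/code&gt; syntax may need adjusting on other platforms:&lt;/p&gt;

```shell
# Fetch the minimum burst credit balance over the last day, in one-hour buckets.
# fs-12345678 is a placeholder file system ID.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --start-time "$(date -u -d '1 day ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 3600 \
  --statistics Minimum
```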

&lt;h4&gt;
  
  
  Separate Operations by Latency
&lt;/h4&gt;

&lt;p&gt;Depending on the workloads you are running, you may benefit from separating latency-sensitive operations from those that are not. Doing so provides you with separate throughput caps and burst credits for each volume created. &lt;/p&gt;

&lt;p&gt;Separating your operations also enables you to set different performance modes if needed. While this often cannot provide the same performance as you would see from locally mounted files, it can provide more consistent performance.&lt;/p&gt;

&lt;h4&gt;
  
  
  Avoid App Code
&lt;/h4&gt;

&lt;p&gt;EFS is not designed for managing codebases or deploying applications. It cannot provide the read volume or speed needed for these operations. Rather, it is designed for massively shared storage, such as exported data files, asynchronous logs, and media assets. If you need to run applications, particularly in production, you are better off sticking to containers or local file systems. &lt;/p&gt;

&lt;h4&gt;
  
  
  Select Proper Mount Options
&lt;/h4&gt;

&lt;p&gt;Most users do not need to adjust the default mount options set by AWS. However, if you have benchmarks and tests showing that you can get better performance from changes, you can adjust as needed. If you do take this option, make sure that you mount your file system using the DNS name. This ensures that your data is mounted in the same AZ as your EC2 instances.&lt;/p&gt;

&lt;p&gt;With EFS, you can use NFS version 4.0 or 4.1. NFSv4.1 typically provides higher performance so use this version whenever possible. Additionally, you can increase the size of your NFS client’s read/write buffer to further increase speeds. &lt;/p&gt;
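
&lt;p&gt;As a concrete sketch, the command below uses AWS's commonly recommended NFSv4.1 settings, including a 1 MiB read/write buffer. The file system DNS name and mount point are placeholders:&lt;/p&gt;

```shell
# Mount EFS over NFSv4.1 with a 1 MiB read/write buffer and hard retries.
# Replace the DNS name and mount point with your own.
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```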

&lt;h4&gt;
  
  
  Use Lifecycle Management Policies
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://developer.ibm.com/recipes/tutorials/a-simple-guide-for-creating-an-aws-snapshot-lifecycle-policy/"&gt;Lifecycle management policies&lt;/a&gt; are designed to help you tier your storage to reduce costs. These policies enable you to automatically move files between Standard and Infrequent Access tiers based on file access. &lt;/p&gt;

&lt;p&gt;To create lifecycle policies, you can use the EFS console. During policy creation, you can specify parameters for movement, including the number of access requests and time since last access. When policies are triggered, files are moved transparently to users and remain easily accessible from any location. &lt;/p&gt;
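
&lt;p&gt;Lifecycle policies can also be managed from the AWS CLI. The sketch below, with a placeholder file system ID, moves files to the Infrequent Access tier after 30 days without access:&lt;/p&gt;

```shell
# Transition files to Infrequent Access 30 days after their last access.
# fs-12345678 is a placeholder ID.
aws efs put-lifecycle-configuration \
  --file-system-id fs-12345678 \
  --lifecycle-policies TransitionToIA=AFTER_30_DAYS
```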

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;EFS is a useful file system, but like all AWS services, it needs configuration and fine-tuning for optimal results and cost-savings. The most obvious, but often missed, optimization opportunity is the performance mode configuration. You can actually choose your own mode, and ensure that you’re using the mode that serves your project best. &lt;/p&gt;

&lt;p&gt;Hopefully, the tips in this article help you refine EFS until you get the results you want. Be sure to keep experimenting with optimization techniques: maintenance is a marathon that requires continuous monitoring and dynamic optimization.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes as a Service: What Does it Actually Mean?</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Sun, 15 Mar 2020 15:25:32 +0000</pubDate>
      <link>https://forem.com/eddiesegal/kubernetes-as-a-service-what-does-it-actually-mean-557f</link>
      <guid>https://forem.com/eddiesegal/kubernetes-as-a-service-what-does-it-actually-mean-557f</guid>
      <description>&lt;p&gt;Kubernetes (K8S) is an open-source platform that orchestrates container deployments. K8S was developed by Google engineers to orchestrate containers on an enterprise scale. Kubernetes is currently the most popular container orchestration platform. &lt;/p&gt;

&lt;p&gt;Kubernetes features include service discovery, automated rollbacks, secrets and configuration management, load balancing and more.  This article reviews the importance of Kubernetes as a Service (KaaS), and popular companies that provide KaaS. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Kubernetes As a Service (KaaS)?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://platform9.com/managed-kubernetes/"&gt;KaaS&lt;/a&gt; is a service offered by third-party providers that take over responsibility for some or all of the work involved in a successful set-up and operation of Kubernetes. KaaS can refer to anything from hosting and operations to dedicated support.&lt;/p&gt;

&lt;p&gt;Kubernetes provides many benefits like scalability, self-recovery, detached credential configuration, progressive application deployment, workload management, and batch execution. However, these features usually require manual configuration. KaaS solutions can handle configuration for you, or at least guide you through the decision-making process. &lt;/p&gt;

&lt;p&gt;KaaS solutions provide the necessary tools to automate tasks like scaling, updates, monitoring and load-balancing. KaaS services that include Kubernetes hosting can also handle the configuration and maintenance of your infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes As a Service: Features and Benefits
&lt;/h2&gt;

&lt;p&gt;A small error during the implementation of in-house Kubernetes can remain undetected until production time. The result is often delayed deliveries due to troubleshooting. Delayed deliveries undermine the main reason for Kubernetes adoption—fast deliveries. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous Integration and Continuous Delivery (CI/CD)&lt;/strong&gt;&lt;br&gt;
Organizations have to set up CI/CD pipelines in addition to a working Kubernetes platform. As a result, IT teams need to set up, implement, and manage many different services at the same time. KaaS services can take charge of your Kubernetes management and maintenance. Managed services constantly monitor K8S clusters and display metrics on a dashboard to ensure the health of clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;&lt;br&gt;
In-house Kubernetes requires the entire team to constantly ensure the security and availability of the platform. The natural solution for this situation is monitoring. However, setting up a &lt;a href="https://github.com/coreos/kube-prometheus"&gt;monitoring service&lt;/a&gt; can be more daunting than the initial configuration of the platform. This is a huge time drain, even before the team can resolve their issues. KaaS services can efficiently monitor all your clusters and provide a real-time view of the clusters’ health. They can also try to resolve issues automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;&lt;br&gt;
Developers need the Kubernetes platform to be highly available. Any error can delay the delivery of a product. KaaS solutions help deliver higher quality services with real-time alerts for issues requiring developers' attention. Managed services should also update Kubernetes versions across all environments and try to resolve issues automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes control plane&lt;/strong&gt;&lt;br&gt;
The control plane is a central component of the cluster that is responsible for communication with other worker nodes. The control plane consists of a master node that controls the cluster, along with data about the cluster’s state and configuration. &lt;/p&gt;

&lt;p&gt;KaaS solutions should be responsible for the operation and management tasks of the control plane. Managed Kubernetes services should deploy a Kubernetes control plane quickly and enable developers to easily plug in their different environments. &lt;/p&gt;

&lt;h2&gt;
  
  
  Top Kubernetes as a Service Platforms
&lt;/h2&gt;

&lt;p&gt;There is an increasing number of platforms available depending on your needs. The below list reviews the most popular KaaS vendors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Kubernetes Engine (GKE)&lt;/strong&gt;&lt;br&gt;
GKE is one of the most popular managed platforms since Kubernetes was created by Google engineers for in-house container orchestration. GKE is designed for use on Google Cloud, as well as on hybrid environments. GKE runs on the &lt;a href="https://cloud.google.com/container-optimized-os/"&gt;Container-Optimized OS&lt;/a&gt; operating system.&lt;/p&gt;

&lt;p&gt;GKE features include automatic repair of stopped applications, master nodes management, IP range reservation, the ability to configure private container registries, and integrated logging and monitoring via Stackdriver. GKE also offers high availability, auto-scaling, and automatic updates. &lt;/p&gt;

&lt;p&gt;The GKE platform enables you to create private image repositories via an integrated image builder, transfer microservices with minimal configuration changes, and manage access rights and authentication through an integrated console.&lt;/p&gt;
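
&lt;p&gt;For example, a basic autoscaling GKE cluster can be provisioned with a single &lt;code&gt;gcloud&lt;/code&gt; command. The cluster name and zone below are placeholders:&lt;/p&gt;

```shell
# Create a GKE cluster with node autoscaling and automatic repair/upgrades.
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 \
  --enable-autorepair --enable-autoupgrade
```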

&lt;p&gt;&lt;strong&gt;AWS Elastic Container Service for Kubernetes (Amazon EKS)&lt;/strong&gt;&lt;br&gt;
Amazon EKS is a Kubernetes-specific extension of Elastic Container Service (ECS). EKS is built to run containers on EC2 instances across multiple AWS availability zones.&lt;/p&gt;

&lt;p&gt;EKS includes automatic updating, built-in security and encryption, and integration with CloudTrail for auditing, CloudWatch for logging, and AWS Identity and Access Management (IAM) for access permissions. The service is highly available; you just need to provision worker nodes and connect them to EKS endpoints. The drawback of EKS is that it currently cannot support hybrid cloud configurations.&lt;/p&gt;
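
&lt;p&gt;In practice, many teams provision EKS clusters with the &lt;code&gt;eksctl&lt;/code&gt; CLI, which creates the control plane and worker nodes in one step. The cluster name and region below are placeholders:&lt;/p&gt;

```shell
# Create an EKS cluster with three worker nodes, then verify connectivity.
eksctl create cluster --name demo-cluster --region us-west-2 --nodes 3
kubectl get nodes
```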

&lt;p&gt;&lt;strong&gt;Platform9&lt;/strong&gt;&lt;br&gt;
Platform9 is an enterprise-grade Kubernetes as a service provider. Platform9 can work on any public cloud platform, on VMware, and on-premises. The solution enables you to focus on developing applications, instead of wasting time on infrastructure upgrades, monitoring, and management.&lt;/p&gt;

&lt;p&gt;Platform9 offers high-availability across multiple availability zones so you can operate without any downtime. Platform9 enables you to manage multiple clusters and their services with an easy to use dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Kubernetes Service (AKS)&lt;/strong&gt;&lt;br&gt;
Azure Kubernetes Service (AKS) is a fully managed solution for running containers on the Azure cloud platform. You can provision a cluster on Azure using a command line, web console, Terraform or Azure resource manager. You can also leverage the Azure traffic manager to route application requests to the nearest data centers for a quick response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenShift&lt;/strong&gt;&lt;br&gt;
OpenShift is an open-source platform that can run on any public cloud. OpenShift offers both lightweight and enterprise versions. You can run OpenShift entirely in the cloud with pre-designed container templates. You can host OpenShift on a public cloud as a managed private cluster, or as a private Platform as a Service (PaaS) in private clouds or data centers.&lt;/p&gt;

&lt;p&gt;OpenShift includes a software-defined network with an image library of prepackaged applications, domain routing, built-in security, and built-in monitoring with Grafana and Prometheus. You can manage OpenShift through a CLI tool or a unified console and also connect it to different &lt;a href="https://www.zdnet.com/article/red-hat-introduces-first-kubernetes-native-ide/"&gt;Integrated Development Environments&lt;/a&gt; (IDEs).&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes provides many benefits, including batch processing, workload management, self-healing, scalability, and progressive application deployment. However, small companies or companies with insufficient technical expertise can have some difficulties when adopting Kubernetes. KaaS solutions can help you leverage Kubernetes benefits regardless of your expertise level or company size. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amplify AWS: Serverless Backends Using Angular, React, or Vue</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Fri, 06 Dec 2019 06:58:31 +0000</pubDate>
      <link>https://forem.com/eddiesegal/amplify-aws-serverless-backends-using-angular-react-or-vue-11dg</link>
      <guid>https://forem.com/eddiesegal/amplify-aws-serverless-backends-using-angular-react-or-vue-11dg</guid>
      <description>&lt;p&gt;&lt;a href="https://hackernoon.com/what-is-serverless-architecture-what-are-its-pros-and-cons-cc4b804022e9"&gt;Serverless functions and architectures&lt;/a&gt; are becoming increasingly popular. Some of the largest organizations around, including Netflix and Reuters, have already implemented serverless. Every major cloud provider has functionality for them, from AWS to Oracle.&lt;/p&gt;

&lt;p&gt;As serverless grows in popularity, tools for helping you create apps using these functions are increasingly available. AWS Amplify is one such tool. This article provides an overview of what Amplify is, as well as a brief tutorial on how to deploy a serverless function using React and Amplify.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Amplify?
&lt;/h2&gt;

&lt;p&gt;Amplify is a free framework for building scalable mobile and web apps using a serverless backend. A serverless backend is an infrastructure used to host serverless applications. Serverless applications are apps based on event-driven functions that use client-side logic with external services. These applications use API calls to invoke functions from client applications, other functions, or cloud services.&lt;/p&gt;

&lt;p&gt;Amplify includes a library of functions and utilities, a toolchain, ready-to-use UI components, and a Command Line Interface (CLI). It is designed to allow developers to focus on front-end rather than back-end development. This framework is most useful if you want to create simple applications with few dependencies. It is provided as an alternative to the Serverless Framework and the AWS Serverless Application Model (SAM). &lt;/p&gt;

&lt;p&gt;The Amplify framework includes features for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time data retrieval and storage&lt;/strong&gt; — via AWS AppSync and a REST or GraphQL API. APIs query databases stored in &lt;a href="https://cloud.netapp.com/blog/ebs-volumes-5-lesser-known-functions"&gt;EBS volumes&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt; — via Amazon Cognito. Enables users to sign up/in to app with name, email, and phone number.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time analytics&lt;/strong&gt; — includes session data, in-app metrics, and authentication data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chatbots&lt;/strong&gt; — via Amazon Lex chatbot. Only available with Vue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AR and VR&lt;/strong&gt; — incorporates Amazon Sumerian scenes for 3D user experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amplify supports use of React/React Native, Angular, Ionic, and Vue. &lt;/p&gt;

&lt;h2&gt;
  
  
  Create and Deploy a Serverless Function in React Using Amplify
&lt;/h2&gt;

&lt;p&gt;This tutorial will show you how to create a serverless function that calls another API. You can invoke a function like this from an HTTP endpoint, from the AWS SDK, or from a cloud service event. In this tutorial, the function is invoked by HTTP. This tutorial is abbreviated from a more in-depth tutorial by &lt;a href="https://read.acloud.guru/serverless-functions-in-depth-507439b4be88?gi=c9bf2e525140"&gt;Nader Dabit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Client Application&lt;/strong&gt;&lt;br&gt;
You’ll use this application to make requests to your serverless application. You can use Create React App to quickly create a single-page application for this purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install and Configure the Amplify CLI&lt;/strong&gt;&lt;br&gt;
To get started, you need to install and configure the Amplify CLI.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;npm install -g @aws-amplify/cli&lt;br&gt;
amplify configure&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initialize a New Project&lt;/strong&gt;&lt;br&gt;
Next, you need to connect your app to an AWS backend. This step needs to be performed at the beginning of each project.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;amplify init&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create your Lambda Function&lt;/strong&gt;&lt;br&gt;
You need to create your function and provision resources for its use. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;amplify add api&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;? Please select from one of the below mentioned services&lt;/em&gt; &lt;strong&gt;REST&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Provide a friendly name for your resource to be used as a label for this category in the project&lt;/em&gt; &lt;strong&gt;shibaapi&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Provide a path (e.g., /items)&lt;/em&gt; &lt;strong&gt;/pictures&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Choose a Lambda source&lt;/em&gt; &lt;strong&gt;❯ Create a new Lambda function&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Provide a friendly name for your resource to be used as a label for this category in the project:&lt;/em&gt; &lt;strong&gt;shibafunction&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Provide the AWS Lambda function name:&lt;/em&gt; &lt;strong&gt;shibafunction&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Choose the function template that you want to use:&lt;/em&gt; &lt;strong&gt;Serverless express function&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Do you want to edit the local lambda function now?&lt;/em&gt; &lt;strong&gt;n&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Restrict API access&lt;/em&gt; &lt;strong&gt;n&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;? Do you want to add another path?&lt;/em&gt; &lt;strong&gt;n&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create your Resources&lt;/strong&gt; &lt;br&gt;
This step deploys your function but won’t yet call the Shiba API. You first need to modify your function to call the API. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;amplify push&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify Requests are Forwarded Correctly&lt;/strong&gt;&lt;br&gt;
You can do this by either directly working with the function or with the &lt;a href="https://expressjs.com/"&gt;serverless express framework&lt;/a&gt;. You can see the base code for your function by accessing: “amplify/backend/function/shibafunction/src/index.js”. &lt;/p&gt;

&lt;p&gt;To use serverless express, use the following code:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;// amplify/backend/function/shibafunction/src/index.js&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;const awsServerlessExpress = require('aws-serverless-express');&lt;br&gt;
const app = require('./app');&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;const server = awsServerlessExpress.createServer(app);&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;exports.handler = (event, context) =&amp;gt; {&lt;br&gt;
  console.log(&lt;code&gt;EVENT: ${JSON.stringify(event)}&lt;/code&gt;);&lt;br&gt;
  awsServerlessExpress.proxy(server, event, context);&lt;/em&gt;&lt;br&gt;
&lt;em&gt;};&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customize the Function&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;First, you need to install &lt;a href="https://github.com/axios/axios"&gt;axios&lt;/a&gt; so you can make HTTP requests. Axios is a promise-based HTTP client, meaning it allows you to make and handle asynchronous requests.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;//amplify/backend/function/shibafunction/src&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;yarn add axios&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;# or&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;npm install axios&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, you need to update the function to call the API.&lt;/p&gt;
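
&lt;p&gt;A minimal sketch of the updated Express app is shown below. The route path matches the &lt;code&gt;/pictures&lt;/code&gt; path created earlier; the upstream URL is an example stand-in for whatever API your function actually calls, not part of the original tutorial.&lt;/p&gt;

```javascript
// amplify/backend/function/shibafunction/src/app.js (sketch -- the upstream
// URL is an example; substitute the API your function actually calls)
const express = require('express')
const axios = require('axios')

const app = express()

app.get('/pictures', async (req, res) => {
  try {
    // Forward the request to the external API and relay its response
    const response = await axios.get('https://shibe.online/api/shibes?count=10')
    res.json({ error: null, pictures: response.data })
  } catch (err) {
    res.status(500).json({ error: err.message, pictures: null })
  }
})

module.exports = app
```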

&lt;p&gt;Finally, you need to update the backend with &lt;em&gt;amplify push&lt;/em&gt;. You need to perform this step after making any changes to the function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invoke the Endpoint in Your Client App&lt;/strong&gt;&lt;br&gt;
You need to call the function in your client app once the function is ready. Once you’re done, be sure to test your code. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Serverless functions might not replace traditional applications. However, these functions are incredibly useful for creating simple applications. The relative ease and speed with which serverless applications can be created and deployed is a huge draw for many developers and organizations. &lt;/p&gt;

&lt;p&gt;Hopefully, after reading this article you feel better prepared to create serverless applications of your own. If you’re not already an AWS cloud subscriber, you can still try this tutorial out using their free tiers.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>angular</category>
      <category>aws</category>
    </item>
    <item>
      <title>7 Open-Source Tools for Securing Your Code</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Fri, 15 Nov 2019 08:20:15 +0000</pubDate>
      <link>https://forem.com/eddiesegal/8-open-source-tools-for-securing-your-code-4gm5</link>
      <guid>https://forem.com/eddiesegal/8-open-source-tools-for-securing-your-code-4gm5</guid>
      <description>&lt;p&gt;In the last few years, some of the largest data breaches have been due to vulnerabilities in source code. From the &lt;a href="https://www.cnet.com/news/equifaxs-hack-one-year-later-a-look-back-at-how-it-happened-and-whats-changed/"&gt;Equifax breach&lt;/a&gt; to the notorious &lt;a href="https://www.techrepublic.com/article/facebook-data-privacy-scandal-a-cheat-sheet/"&gt;Facebook’s breach&lt;/a&gt; that exposed the private data of almost 87 million users. These breaches may have been prevented or at least minimized had the code in their applications been secured from the start.&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn what secure coding is. You’ll also be introduced to some tools that can help you secure your own code as well as that of your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Secure Coding?
&lt;/h2&gt;

&lt;p&gt;Secure coding is the process of developing code in a way that ensures security and eliminates vulnerabilities. It requires language-specific knowledge of exploitable issues. It also requires an understanding of vulnerabilities associated with host environments and integrations. &lt;/p&gt;

&lt;p&gt;Secure code is the first line of defense against attacks. While you might not be able to control all of the variables that can lead to vulnerabilities in your environments or integrations, you do have control over your source code. The fewer vulnerabilities you include, the more secure you and your users are.&lt;/p&gt;

&lt;p&gt;As teams adopt DevSecOps methodologies, the use of secure coding practices is becoming a requirement for many developers. Eliminating vulnerabilities in code during development is cheaper and often easier than patching issues in production. &lt;/p&gt;

&lt;h2&gt;
  
  
  7 Open-Source Tools for Secure Coding
&lt;/h2&gt;

&lt;p&gt;There are a wide variety of open-source tools available to help you develop and ensure &lt;a href="https://resources.whitesourcesoftware.com/blog-whitesource/secure-coding"&gt;secure coding practices&lt;/a&gt;. The tools below can be used in a variety of environments and languages. However, there are language-specific tools you can use that might be able to give you more specific recommendations for your applications.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://juice-shop.herokuapp.com/#/"&gt;Juice Shop&lt;/a&gt;&lt;br&gt;
Juice Shop is a training tool created by the Open Web Application Security Project (OWASP). It is an intentionally vulnerable web application that includes examples of common vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/CoolerVoid/codewarrior"&gt;Code Warrior&lt;/a&gt;&lt;br&gt;
Code Warrior is a tool you can use to perform manual code review and static analysis. You can use it with Linux, BSD, and MacOS. Code Warrior works through your web browser on your localhost using HTTP with TLS.&lt;br&gt;
Code Warrior supports multiple languages, including C, C#, Java, PHP, Ruby, and JavaScript. It includes built-in rules to cover known secure coding standards. You can also create custom rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.arachni-scanner.com/"&gt;Arachni&lt;/a&gt;&lt;br&gt;
Arachni is a fully automated Dynamic Application Security Testing (DAST) tool that you can use to scan websites and applications. It works using asynchronous HTTP requests and you can use it on all major operating systems. &lt;br&gt;
Arachni is commercially supported but free for most use cases. Arachni includes features for detecting cross-site scripting, code injections, file inclusions, and data scraping. It also includes an integrated browser environment and a REST API. You can extend its functionality through a variety of plug-ins and modules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="http://wapiti.sourceforge.net/"&gt;Wapiti&lt;/a&gt;&lt;br&gt;
Wapiti is a DAST tool you can use to scan your web applications. You use it through a command-line interface. It works by attempting to inject payloads into forms and scripts. It supports GET and POST methods of attack. &lt;br&gt;
Wapiti includes features for fuzzing, performing brute force attacks, detecting file disclosures, and using a variety of authentication methods. Fuzzing is when you provide various types of invalid, unexpected, or random inputs to check how an application responds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dependencytrack.org/"&gt;Dependency Track&lt;/a&gt;&lt;br&gt;
Dependency Track is a tool that enables you to keep track of third-party components in your applications. It works for applications you’ve developed as well as those you’re using. You can use it on-premises or as a web application. It is integrated with vulnerability databases, such as the National Vulnerability Database (NVD), Sonatype OSS Index, and VulnDB. Dependency Track includes features for centralized tracking, integration with Active Directory and LDAP, and notifications via webhooks. It can also provide impact analyses of vulnerabilities and out-of-date components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.metasploit.com/"&gt;Metasploit Framework&lt;/a&gt;&lt;br&gt;
Metasploit is a penetration testing framework that enables you to automate attack testing. Using Metasploit, you can attempt specific exploitation of issues with built-in or custom payloads. You use it via a command-line interface. It works on both Windows and Linux.&lt;br&gt;&lt;br&gt;
Metasploit includes modules that function as encoders, shellcode, post-exploitation code, and listeners. It comes already integrated with Kali, a popular pentesting Linux distribution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.sonarqube.org/"&gt;SonarQube&lt;/a&gt;&lt;br&gt;
SonarQube is a tool you can use to expose vulnerabilities in code and measure your source code quality. It ranks vulnerabilities according to severity. You use it via an interactive GUI that is beginner-friendly. It is written in Java but can be used with over 20 common languages. &lt;br&gt;
SonarQube includes features for analyzing pull requests, code branch tracking, and project timeline visualization. You can integrate it with continuous integration tools like Jenkins.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
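&lt;p&gt;To make the fuzzing idea behind tools like Wapiti more concrete, here is a minimal, hypothetical sketch (not Wapiti’s actual code) of how a fuzzer might generate the invalid, unexpected, or random inputs it submits to forms and parameters:&lt;/p&gt;

```python
import random
import string

def fuzz_inputs(count=5, seed=0):
    """Build a batch of fuzz payloads: a few canned "known bad" strings
    plus `count` random printable strings of varying length."""
    rng = random.Random(seed)
    canned = ["' OR 1=1 --", "../../etc/passwd", "%00", "A" * 4096]
    random_junk = [
        "".join(rng.choice(string.printable) for _ in range(rng.randint(1, 64)))
        for _ in range(count)
    ]
    return canned + random_junk
```

A real DAST tool would submit each payload to every form field and flag responses that suggest an error, a crash, or injected behavior.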

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the past, developers were not necessarily responsible for ensuring that their code was secure from the start; security was the responsibility of security teams. This is often no longer the case, and security is now a shared responsibility. &lt;/p&gt;

&lt;p&gt;If you’re not used to standards of secure coding, it can seem overwhelming at first. Luckily, there is an abundance of tools and resources available to help you learn and practice secure coding standards. With a little patience and dedication, secure coding should become second nature. Hopefully, the tools covered here can help get you started.&lt;/p&gt;

</description>
      <category>security</category>
      <category>codequality</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Deception Technology for Endpoint Security</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Fri, 13 Sep 2019 14:43:47 +0000</pubDate>
      <link>https://forem.com/eddiesegal/deception-technology-for-endpoint-security-25l1</link>
      <guid>https://forem.com/eddiesegal/deception-technology-for-endpoint-security-25l1</guid>
      <description>&lt;p&gt;Deception Technology (DT) deflects the attention of threat actors from real assets to fake assets, thus protecting network, systems, and files. Read on to learn what deception technology is and how you can apply it for endpoint security. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Deception Technology?
&lt;/h2&gt;

&lt;p&gt;Deception technology comprises a set of security tools and techniques designed to prevent threat actors from breaching the security perimeter. It works by using decoys to deflect attackers’ attention and delay or prevent them from reaching their target. &lt;/p&gt;

&lt;p&gt;The decoys look like genuine digital assets and can be deployed in real or emulated systems. The decoys serve as bait, attracting and tricking attackers into thinking they have breached a real asset. &lt;/p&gt;

&lt;p&gt;Deception technology complements cybersecurity solutions, such as security information and event management (SIEM) systems. Deception technology can integrate log data from the organization’s SIEM system, and provide you with threat alerts. Some advanced deception systems can communicate with the attacker’s command and control (C&amp;amp;C) to gather more information about the attacker’s methods and the tools they are using. &lt;/p&gt;

&lt;p&gt;Deception technology can help you protect your assets from the following attacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Credential theft&lt;/strong&gt;—when an attacker tries to lift usernames and passwords from Online Analytical Processing (OLAP) directories. &lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Lateral movement&lt;/strong&gt;—when an attacker tries to access other parts of the network that were off-limits until then. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attacks on directory systems&lt;/strong&gt;—these can target user directories or file directories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Man-in-the-middle&lt;/strong&gt;—an attacker intercepts and modifies communications between two parties without their knowledge. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Deception Technology Works
&lt;/h2&gt;

&lt;p&gt;During the early days of deception, deploying decoys required a lot of manual work. It wasn’t practical for large and distributed environments. Nowadays, &lt;a href="https://www.cynet.com/platform/threat-protection/deception/"&gt;deception technology&lt;/a&gt; is an integral part of endpoint protection solutions, threat detection, and incident response platforms. &lt;/p&gt;

&lt;p&gt;Deception technology can help security teams detect and identify attackers as soon as a breach has occurred. This significantly reduces the time an attacker can lurk undetected in your network. &lt;/p&gt;

&lt;p&gt;Deception technology offers a different way to deal with stealth attacks, diverting them from their target. The attacker has fewer opportunities to perform lateral movement, map the entire infrastructure, and cause further damage. &lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges of Endpoint Security
&lt;/h2&gt;

&lt;p&gt;Cybersecurity teams can easily overlook attacks as they become more sophisticated. For example, if an attacker enters a network by stealing user credentials, they can remain undetected. The rise in IoT devices means that practically any gadget can be connected. Here are two specific types of IoT devices that are especially vulnerable to attacks: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Industry devices&lt;/strong&gt; - such as sensors or real-time location devices for shipment tracking, including those used in manufacturing. Most manufacturing companies get technical support for their Supervisory Control and Data Acquisition (SCADA) infrastructure through a third party, increasing the risk of attacks. An effective solution for manufacturing networks must be easy to install and maintain without disrupting operations.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;General IoT devices&lt;/strong&gt; - such as those used in healthcare or smart air conditioning and security systems. Attackers can steal Personal Identifiable Information (PII) data, or deploy ransomware in medical systems, thus risking the lives of thousands of people. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Application of Deception Technology for Endpoint Security
&lt;/h2&gt;

&lt;p&gt;Attackers usually enter a network through the endpoints. While deception technology cannot stop an attacker from entering, it helps minimize the damage they cause. This makes deception technology a good supporting technology for an endpoint security platform. &lt;/p&gt;

&lt;p&gt;One of the main risks an organization faces is that of an infiltrator navigating and conducting reconnaissance inside its network unnoticed. According to a report by the &lt;a href="https://www.hstoday.us/home-posts/report-finds-cybersecurity-dwell-time-is-191-days-and-state-cio-says-it-should-be-zero/"&gt;Ponemon Institute&lt;/a&gt;, attackers dwell in a network for an average of 191 days before being detected. &lt;/p&gt;

&lt;p&gt;Using deception technology gives you a way to get attackers to disclose their location. The security team deploys fake assets that mimic real ones. The decoys lure the attacker into thinking they are attacking the real thing. These fake assets trigger an alert when attacked, while at the same time giving up the attacker’s location.&lt;/p&gt;
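&lt;p&gt;The alert-on-touch idea can be sketched in a few lines of Python. This is a minimal, hypothetical decoy (a bare TCP listener standing in for a real deception platform; &lt;code&gt;run_decoy&lt;/code&gt; is not a product API): any connection to it is by definition suspicious, so it raises an alert carrying the source address.&lt;/p&gt;

```python
import socket
import threading

def run_decoy(host="127.0.0.1", port=0, alert=print):
    """Minimal decoy service: listen on a TCP port that no legitimate
    client should ever touch, and alert with the source address of
    anyone who connects. port=0 picks a free ephemeral port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)

    def loop():
        while True:
            try:
                conn, addr = srv.accept()
            except OSError:  # socket closed, stop serving
                return
            alert("ALERT: decoy touched from %s:%d" % addr)
            conn.close()

    threading.Thread(target=loop, daemon=True).start()
    return srv
```

A real deception platform would add convincing banners and content to each decoy and forward the alert to a SIEM rather than printing it.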

&lt;h2&gt;
  
  
  Deception Technology Features for Endpoint Security
&lt;/h2&gt;

&lt;p&gt;Deception technology helps distract attackers away from valuable assets and can reduce an attacker’s dwell time, which includes the time needed for detection and remediation. It can also improve incident response by generating accurate, prioritized alerts, reducing alert fatigue. &lt;/p&gt;

&lt;p&gt;Some deception technology solutions provide deep forensic and adversary intelligence. You can use this type of intelligence to learn about the tactics, techniques, and procedures (TTPs) of attackers. You can also supplement security by deploying decoys around critical assets in the event of an attack. &lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap Up
&lt;/h2&gt;

&lt;p&gt;Deception technology provides an alternative way to address the problem of an attacker dwelling in the network. While it does not prevent attacks from taking place, it buys time for security teams to respond. At the end of the day, deception technology effectively turns the tables on attackers, shifting power back to the security team.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Features of Azure Backup You Should Know About</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Thu, 29 Aug 2019 11:21:59 +0000</pubDate>
      <link>https://forem.com/eddiesegal/features-of-azure-backup-you-should-know-about-7</link>
      <guid>https://forem.com/eddiesegal/features-of-azure-backup-you-should-know-about-7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MtLtVOR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.pixabay.com/photo/2014/05/27/23/32/matrix-356024_960_720.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MtLtVOR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.pixabay.com/photo/2014/05/27/23/32/matrix-356024_960_720.jpg" alt=""&gt;&lt;/a&gt;&lt;br&gt;Image by Comfreak from &lt;a href="https://pixabay.com/illustrations/matrix-code-computer-pc-data-356024/"&gt;Pixabay&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;If you live in the 21st century, you know that you need to back up your work. Still, many companies do not have an adequate disaster recovery strategy in place. The effects of a severe data loss on a business are enormous, and &lt;a href="https://consoltech.com/blog/10-common-causes-of-data-loss/"&gt;most&lt;/a&gt; small and medium businesses that suffer a significant data loss will end up closing down.&lt;/p&gt;

&lt;p&gt;It all starts with poor backup practices. The most effective backup approach involves using the cloud, which means backing up your data with a third party. While this reduces some of your control over your data, the benefits far outweigh the shortfalls. &lt;/p&gt;

&lt;p&gt;Backing up your data to a cloud is cost-effective, secure, and scalable. &lt;a href="https://cloud.netapp.com/blog/5-considerations-before-you-backup-on-azure"&gt;Azure Backup&lt;/a&gt; is one of the most popular backup solutions on the market. In this article, we will explore five features of Azure Backup that you should know about so you can take full advantage of the service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features of Azure Backup You Should Explore
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Complementary solutions
&lt;/h3&gt;

&lt;p&gt;When it comes to disaster recovery, you want to make sure all your bases are covered. Azure provides you with two &lt;a href="https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-overview"&gt;complementary backup solutions&lt;/a&gt;: Azure Backup and Azure Site Recovery. Their backup scopes are slightly different, providing a combination of complementary backup systems. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Backup&lt;/strong&gt; backs up your data from your VMs as well as from your on-premises servers. This backup system is best for backing up detailed data, rather than your entire machine. For example, if you want to back up transaction data for compliance reasons, then Azure Backup would be the right tool. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Site Recovery&lt;/strong&gt;, on the other hand, is also used to back up VMs and on-premises servers, but it essentially copies the whole machine to another location. Should a disaster occur, you can fail over to the secondary machine, then fail back to the original once it becomes operational again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Restore your entire machine or individual files
&lt;/h3&gt;

&lt;p&gt;Azure Backup does not force you to restore the entire machine; you can choose to restore just an individual file if you prefer. This provides you with substantial flexibility and can seriously reduce recovery time, as it is not always necessary to restore the entire machine. &lt;/p&gt;

&lt;h3&gt;
  
  
  Redundancy options
&lt;/h3&gt;

&lt;p&gt;Azure Backup provides you with &lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction"&gt;four different ways&lt;/a&gt; of backing up your data, depending on your needs and budget. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locally-redundant storage (LRS)&lt;/strong&gt; is the cheapest way to back up your data. In this storage system, your data is replicated three times but stored in the primary region. This means that should a natural disaster (like flood or fire) strike the data center, you have a higher risk of losing all the replicas of your data. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zone-redundant storage (ZRS)&lt;/strong&gt; is a good option for backing up data that requires high availability. This solution includes three replicas of your data being stored in the primary region, but in different storage clusters. Each cluster is in a different availability zone and is physically separate from the other clusters. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Geo-redundant storage (GRS)&lt;/strong&gt; is more expensive than the other options, but it also provides the most redundancy. With GRS your data is replicated locally in the primary region, as with LRS. A second copy is also made in a secondary region, which is physically distant from the first data center. This is the safest option as it protects against regional natural disasters. It is also the default setting for Azure Backup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;Your data is one of your most valuable assets, and in today’s world, security is one of the top concerns. Azure Backup provides security at different levels. For the transmission and storage of data, Azure Backup uses encryption. Only you have the encryption passphrase (Azure does not have access to it), making sure that even in the extremely unlikely event that Azure is breached, your data would still be secure.  &lt;/p&gt;

&lt;p&gt;Azure requires authentication to make high-risk changes, preventing users without the required permissions from making changes related to backup. Azure will also notify you any time an unusual change is made that could affect backups. Moreover, should the previous two safeguards be insufficient, any data that is deleted is stored by Azure for 14 days.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multiple ways to backup
&lt;/h3&gt;

&lt;p&gt;Azure Backup offers multiple ways of backing up your workloads so that you can do it in the most convenient way. The first is through the Azure portal, which you can easily access from your browser and which gives you a consolidated view of all your Azure services. &lt;/p&gt;

&lt;p&gt;If you are technically savvy, you can also access Azure through a shell, such as PowerShell with the Az module. If you manage Backup through a shell, you can use API calls and scripts to manage and schedule your backups, giving you greater control and freedom.&lt;/p&gt;
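&lt;p&gt;As an illustration, scripting VM backups through the Azure CLI might look like the following sketch. The resource group, vault, and VM names are placeholders, and exact flags may vary by CLI version:&lt;/p&gt;

```shell
# Create a Recovery Services vault (all names below are placeholders)
az backup vault create --resource-group myRG --name myVault --location westus2

# Enable backup for a VM using the default policy
az backup protection enable-for-vm \
  --resource-group myRG --vault-name myVault \
  --vm myVM --policy-name DefaultPolicy

# List backup jobs to confirm the schedule is running
az backup job list --resource-group myRG --vault-name myVault --output table
```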

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In today’s fast-paced, digitized environment, most of our work involves some type of data. Having a solid disaster recovery plan for this data can make the difference between businesses that fail and those that thrive. One data-loss disaster, such as a crashed server, can bring a company to its knees.&lt;/p&gt;

&lt;p&gt;Backing up your work to a public cloud is one of the best ways to protect yourself from possible data loss, without having to break the bank or compromise on security. Azure Backup offers a secure and user-friendly backup solution. To take full advantage of Azure Backup, be sure to explore the features listed above.&lt;/p&gt;

</description>
      <category>azure</category>
    </item>
    <item>
      <title>Amazon Infrastructure Services: Build Your Budget for the Cloud</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Wed, 31 Jul 2019 20:43:15 +0000</pubDate>
      <link>https://forem.com/eddiesegal/amazon-infrastructure-services-build-your-budget-for-the-cloud-1852</link>
      <guid>https://forem.com/eddiesegal/amazon-infrastructure-services-build-your-budget-for-the-cloud-1852</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fc1j7bytxbt5vw3ejdq0w.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fc1j7bytxbt5vw3ejdq0w.jpeg"&gt;&lt;/a&gt;&lt;br&gt;Image source: &lt;a href="https://www.ctrl.blog/media/hero/particle-network.jpeg" rel="noopener noreferrer"&gt;ctrl.blog&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Amazon Web Services is one of the most established cloud providers around. AWS offers a variety of services that leverage its extensive network of virtualized servers. This brief guide covers the pricing options for the various AWS architectures available.&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudFront Pricing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For regional data transfer out to Internet&lt;/strong&gt;—starting at $0.085 per GB for the first 10 TB/month, down to $0.02 per GB for over 5 PB / month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For regional data transfer out to origin&lt;/strong&gt;—$0.02 per GB flat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For HTTP requests&lt;/strong&gt;—$0.0075 per 10,000 requests&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CloudWatch Pricing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For Amazon CloudWatch Dashboards&lt;/strong&gt;, per dashboard per month—$3.00&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For detailed monitoring of Amazon EC2 instances&lt;/strong&gt;, per instance per month—starting at $2.10, down to $0.14 at 1-minute frequency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For custom metrics&lt;/strong&gt;, per metric per month—$0.30 for first 10,000 metrics, down to $0.02 for over 1,000,000 metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For alarms&lt;/strong&gt;, per alarm per month—$0.10, $0.20 extra for high-resolution alarms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For API Requests&lt;/strong&gt;—$0.01 per 1,000 metrics requested or 1,000 API requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For Amazon CloudWatch Logs&lt;/strong&gt;—$0.50 per GB ingested, $0.03 per GB archived per month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For Vended Logs&lt;/strong&gt;, per GB—starting from $0.50 for the first 10TB of vended log data ingested, down to $0.05 for over 50TB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For events&lt;/strong&gt;—$1.00 per million custom events generated&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon Elastic Load Balancing (ELB) Pricing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For app load balancer&lt;/strong&gt;—$0.0225 per Application Load Balancer-hour (or partial hour), $0.008 per LCU-hour (or partial hour)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For network load balancer&lt;/strong&gt;—$0.0225 per Network Load Balancer-hour (or partial hour), $0.006 per LCU-hour (or partial hour)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For classic load balancer&lt;/strong&gt;—$0.0225 per Classic Load Balancer-hour (or partial hour), $0.008 per GB of data processed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon Elastic Container Service (ECS) Pricing
&lt;/h3&gt;

&lt;p&gt;Amazon’s Elastic Container Service can be run via Amazon EC2 or AWS Fargate. For EC2 tasks, you can use either bind mount host volumes or Docker-managed volumes, which integrate well with external &lt;a href="https://cloud.netapp.com/blog/ebs-volumes-5-lesser-known-functions" rel="noopener noreferrer"&gt;AWS storage systems like EBS&lt;/a&gt;. I’ll use Fargate deployment as an example of ECS pricing.&lt;/p&gt;

&lt;p&gt;AWS Fargate pricing is calculated based on the vCPU and memory resources, from the time you start to download your container image (docker pull) until your collection of containers (called a Task) terminates, with a minimum charge of 1 minute.&lt;/p&gt;

&lt;p&gt;Pricing is based on requested vCPU and memory resources for the Task, both calculated independently and billed by the second:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For vCPU&lt;/strong&gt;, per hour—$0.0506&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For GB of memory&lt;/strong&gt;, per hour—$0.0127&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon Lambda Pricing
&lt;/h3&gt;

&lt;p&gt;Amazon’s serverless computing service counts a request each time a function starts executing in response to an event notification or invoke call. You are charged for the total number of requests across all your &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction-function.html" rel="noopener noreferrer"&gt;Lambda functions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Duration is calculated from the time your code begins executing until it returns or otherwise terminates. The price depends on the amount of memory you allocate to  your function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For Requests&lt;/strong&gt;—first 1 million requests free, thereafter $0.20 per 1 million requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration&lt;/strong&gt;—400,000 GB-seconds per month free, thereafter $0.00001667 per GB-second&lt;/li&gt;
&lt;/ul&gt;
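&lt;p&gt;The Fargate and Lambda schemes above are both simple linear rate calculations, so you can sanity-check a budget with a few lines of Python. The rates are this article’s 2019 figures, used here as assumptions; real prices vary by region and change over time:&lt;/p&gt;

```python
# 2019 list prices quoted in this article; real prices vary by region.
FARGATE_VCPU_PER_HOUR = 0.0506
FARGATE_GB_PER_HOUR = 0.0127
LAMBDA_FREE_REQUESTS = 1_000_000
LAMBDA_FREE_GB_SECONDS = 400_000
LAMBDA_PRICE_PER_MILLION_REQ = 0.20
LAMBDA_PRICE_PER_GB_SECOND = 0.00001667

def fargate_task_cost(vcpu, memory_gb, seconds):
    """One Fargate task's cost: vCPU and memory billed independently,
    per second, with a one-minute minimum charge."""
    hours = max(seconds, 60) / 3600
    return (vcpu * FARGATE_VCPU_PER_HOUR + memory_gb * FARGATE_GB_PER_HOUR) * hours

def lambda_monthly_cost(requests, gb_seconds):
    """Monthly Lambda bill after subtracting the free tier."""
    request_cost = max(requests - LAMBDA_FREE_REQUESTS, 0) / 1_000_000 * LAMBDA_PRICE_PER_MILLION_REQ
    duration_cost = max(gb_seconds - LAMBDA_FREE_GB_SECONDS, 0) * LAMBDA_PRICE_PER_GB_SECOND
    return request_cost + duration_cost
```

For instance, a 0.25 vCPU / 0.5 GB Fargate task running for one hour works out to about $0.019 at these rates.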

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post I covered the pricing scheme of Amazon’s infrastructure services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon CloudFront&lt;/li&gt;
&lt;li&gt;Amazon CloudWatch&lt;/li&gt;
&lt;li&gt;Amazon ELB&lt;/li&gt;
&lt;li&gt;Amazon ECS&lt;/li&gt;
&lt;li&gt;Amazon Lambda&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This should help you choose the appropriate architecture for your needs and budget. However, the figures presented here are an estimate, and may differ based on your location, or change with time. To fully understand the long-term costs of using AWS infrastructure, you’ll have to trial it yourself.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Video Transcoding: Creating Content For the Future</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Tue, 16 Jul 2019 20:42:12 +0000</pubDate>
      <link>https://forem.com/eddiesegal/video-transcoding-creating-content-for-the-future-2c7g</link>
      <guid>https://forem.com/eddiesegal/video-transcoding-creating-content-for-the-future-2c7g</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M6rYfi8V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8vcsjgjmhpcy9qdkoj59.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M6rYfi8V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8vcsjgjmhpcy9qdkoj59.jpg" alt=""&gt;&lt;/a&gt;&lt;br&gt;&lt;a href="https://pixabay.com/illustrations/youtube-video-streaming-3327676/"&gt;Image Source: Pixabay&lt;/a&gt; 
&lt;/p&gt;

&lt;p&gt;As our world becomes increasingly connected, we are consuming more and more media online. From YouTube to Netflix, streaming is all the rage. As the industry has boomed, quality has become the most important factor for consumers. &lt;/p&gt;

&lt;p&gt;According to a &lt;a href="https://www.streamingmedia.com/Articles/News/Online-Video-News/Streaming-Video-Services-Are-Getting-Worse-63-See-Buffering-122408.aspx"&gt;study conducted by IBM Cloud Video&lt;/a&gt;, when it comes to online streaming, buffering is the primary concern. While there is a large quantity of streamable content, the quality does not always meet consumer expectations. Video transcoding is the best way to deal with this issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Video Transcoding?
&lt;/h2&gt;

&lt;p&gt;Video transcoding is the conversion of video files from one format to another format, making it accessible to many different devices. In a perfect world, after making your video, you would upload the original file to the internet for the world to see. In reality, if you were to upload your original video, most viewers would not see much beyond the buffering reel. &lt;/p&gt;

&lt;p&gt;Between visual data and audio data, video files contain a ton of information and watching them takes up substantial memory and bandwidth. For people to be able to see your video, a compressed version needs to be created.  &lt;/p&gt;

&lt;p&gt;Even compressing your video may not make it accessible to every device and platform. Someone watching your video from an office computer connected to the internet via cable can handle heavier media than a friend watching the same video on a smartphone over public wifi. &lt;/p&gt;

&lt;p&gt;Transcoding is essential because it not only compresses the video, but it creates different versions so that it can be viewed on any screen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is Transcoding So Important?
&lt;/h2&gt;

&lt;p&gt;If you want your video to be widely accessible to users, video transcoding is key. The main reasons to transcode your video are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;File size&lt;/strong&gt;—video transcoding is used to reduce the file size when a video is too heavy for the target device.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Formatting compatibility&lt;/strong&gt;—when the target device does not support the format of the video, transcoding is used to convert it to a compatible format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Old file types&lt;/strong&gt;—if the video file type is very old or obsolete, video transcoding is used to convert it to a modern format. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are consuming more and more online content, much of it in the form of video streaming. There is, therefore, a clear need to deliver this content in a way that is flexible across devices and supports adaptive bitrates. Moreover, many video streaming platforms only accept videos in certain formats, making transcoding a necessity for anyone involved in media streaming. &lt;/p&gt;

&lt;h2&gt;
  
  
  How Video Transcoding Works
&lt;/h2&gt;

&lt;p&gt;Video transcoding has two steps. First, the video is decoded to an uncompressed format. Second, the video is re-compressed to a new format, compatible with the target device.&lt;/p&gt;

&lt;p&gt;When we talk about video transcoding we are talking about file codecs, containers, and file types. To encode the video files we use codecs. These are hardware devices or software programs that code and decode video files. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.computerhope.com/jargon/c/codec.htm"&gt;codec&lt;/a&gt; creates an encoded version of the video, this is then decoded by the player on which the video is watched. Some of the most common codecs are MPEG, H.264, and VP9. Desired size, quality and delivery method all factor into the choice of codec. &lt;/p&gt;

&lt;p&gt;A video file is composed of visual images, audio, and metadata. Once the different components of the video are recompressed by the codec, they are held inside the container. &lt;br&gt;
The container specifies the video and audio that will be played and provides rules for the playback device to decode the video. &lt;/p&gt;

&lt;p&gt;Some common containers are MPEG4, Quicktime File Format, and Audio Video Interleave.  The container (also referred to as file format) can be identified by the file type, for example, the MPEG4 container is represented by .mp4. The file type defines on which platform the video file can be streamed. &lt;/p&gt;
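&lt;p&gt;As a concrete example of the decode/re-encode process, a single ffmpeg command can transcode a source file into H.264 video and AAC audio inside an MP4 container. This assumes ffmpeg is installed, and the file names are placeholders:&lt;/p&gt;

```shell
# Decode input.mov, re-encode video as H.264 (libx264) and audio as AAC,
# and mux the result into an MP4 container. -crf trades quality for size.
ffmpeg -i input.mov -c:v libx264 -crf 23 -preset medium -c:a aac output.mp4
```

Running the same command with different resolutions and bitrates produces the set of renditions that adaptive streaming needs.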

&lt;h2&gt;
  
  
  When Should You Use Video Transcoding?
&lt;/h2&gt;

&lt;p&gt;Anytime you want your video to reach a wide online audience, you need to &lt;a href="https://cloudinary.com/features/video_transcoding"&gt;use video transcoding&lt;/a&gt;. Transcoding creates multiple compressed versions of the original file. Each version is optimized for a different viewing platform and internet speed, making it universally accessible to viewers. &lt;/p&gt;

&lt;p&gt;Moreover, many &lt;a href="https://www.freemake.com/blog/top-7-free-video-sharing-sites/"&gt;video sharing platforms&lt;/a&gt; require your video to be transcoded. For example, if you want to upload a video on YouTube, your video needs to be transcoded into one of the formats accepted by YouTube. &lt;/p&gt;

&lt;p&gt;Generally, most of these platforms support more than one format. However, you will still need to convert your original video from your camera, for example, to one of the supported formats using transcoding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make Your Video Accessible to Everyone
&lt;/h2&gt;

&lt;p&gt;If you use online streaming, you need to be aware of video transcoding. In a world that is demanding an increasingly customized streaming experience, consumers will not compromise on loading speed and video quality. &lt;/p&gt;

&lt;p&gt;It is therefore crucial to create videos that can adapt to the viewing needs of the consumer. This means delivering content that can be played on any device, such as a computer, tablet or smartphone. In addition, it is imperative to be able to distribute content based on the device’s bitrate capacity. &lt;/p&gt;

&lt;p&gt;Video transcoding is a crucial component of the video production process that will optimize your video file to fit the needs of your consumer.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>5 Facts about Amazon EBS That Will Save You Time and Money</title>
      <dc:creator>Eddiesegal</dc:creator>
      <pubDate>Thu, 04 Jul 2019 21:07:06 +0000</pubDate>
      <link>https://forem.com/eddiesegal/5-facts-about-amazon-ebs-that-will-save-you-time-and-money-29f4</link>
      <guid>https://forem.com/eddiesegal/5-facts-about-amazon-ebs-that-will-save-you-time-and-money-29f4</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JmS-xhBk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gabescsketwxnl1j68da.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JmS-xhBk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gabescsketwxnl1j68da.jpg" alt=""&gt;&lt;/a&gt;&lt;br&gt;&lt;a href="https://pxhere.com/en/photo/1436335"&gt;PxHere&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Are you thinking of using Amazon EBS? The easy scalability and the option to pay per usage make it an attractive service, but there are some things to consider if you want to avoid any surprises in your AWS bill. In this article, we offer five tips to help you effectively cut your Amazon EBS usage costs.  &lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Amazon EBS?
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Block Store (EBS) is a service offered to Amazon Web Services (AWS) users. It provides block storage volumes that can be used with Amazon Elastic Compute Cloud (EC2) instances in the AWS cloud. &lt;/p&gt;

&lt;p&gt;Amazon EC2 provides resizable cloud-based compute capacity, which can facilitate web-scale cloud computing. EBS provides storage for EC2 instances through block-level storage volumes. &lt;br&gt;
EBS volumes are versatile and can be used as a primary storage device for a database or attached as a root partition to an EC2 instance. Using a system of snapshots, a volume can serve as a backup, remaining after the EC2 instance is deleted. Its key features are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s a plug-and-play system&lt;/li&gt;
&lt;li&gt;It allows for automatic replication of each volume&lt;/li&gt;
&lt;li&gt;Low latency performance&lt;/li&gt;
&lt;li&gt;Easy scalability&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Much It Costs
&lt;/h2&gt;

&lt;p&gt;Amazon EBS storage works as a pay-per-use service. Companies pay according to how many gigabytes per month of storage they provision. This is different from &lt;a href="https://aws.amazon.com/ec2/instance-types/"&gt;EC2 instances&lt;/a&gt;, which only generate charges while running: EBS volumes generate charges for as long as they exist, even when the attached instance has stopped, and stop charging only once they are deleted. &lt;/p&gt;

&lt;p&gt;Therefore, EBS volumes risk generating charges in the background, unnoticed because the instances they are attached to are not running. This can amount to hefty sums accrued in the AWS bill. &lt;/p&gt;
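&lt;p&gt;To see how this adds up, here is a minimal sketch of the billing arithmetic. The per-GB-month rate and the volume list are illustrative assumptions, not actual AWS prices or API output: the point is that a volume attached to a stopped instance is billed exactly like one attached to a running instance. &lt;/p&gt;

```python
# Illustrative gp2-style rate in USD per GB-month (hypothetical, not a quote).
RATE_PER_GB_MONTH = 0.10

# Mock volume inventory; "instance_state" mimics the state of the attached EC2 instance.
volumes = [
    {"id": "vol-1", "size_gb": 100, "instance_state": "running"},
    {"id": "vol-2", "size_gb": 500, "instance_state": "stopped"},  # still billed!
]

def monthly_cost(vols, rate=RATE_PER_GB_MONTH):
    # Every provisioned volume is billed, regardless of instance state.
    return sum(v["size_gb"] * rate for v in vols)

print(round(monthly_cost(volumes), 2))  # 60.0
```

&lt;p&gt;Here the stopped instance's 500 GB volume accounts for most of the bill, which is exactly the kind of background cost that goes unnoticed. &lt;/p&gt;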

&lt;p&gt;How can you reduce your costs while maximizing the EBS features? Below we explain five tricks to reduce EBS costs. &lt;/p&gt;

&lt;h2&gt;
  
  
  5 Tips to Reduce EBS Costs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Adjust the Volume to Your Performance Requirements
&lt;/h3&gt;

&lt;p&gt;If you provision more storage than you need, you end up paying for unused capacity, so it is crucial to select the right size for your EBS volumes. Volumes should be sized considering factors such as capacity, traffic (IOPS, input/output operations per second), and application throughput. You should periodically monitor the read/write activity of your provisioned volumes to detect underused ones. If the volume of requests is consistently low, you can downsize the EBS volumes and reduce costs. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ebs/"&gt;EBS volumes&lt;/a&gt; come in three main types: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. A good tip is to attach EC2 instances to General Purpose SSD volumes by default, and reserve Provisioned IOPS for mission-critical applications that need high throughput per volume.  &lt;/p&gt;

&lt;h3&gt;
  
  
  2. Delete Unattached Volumes
&lt;/h3&gt;

&lt;p&gt;An EBS volume persists even after its EC2 instance has stopped, and it keeps generating charges until it is deleted. This persistence is useful for retaining data, but it means you can end up paying for storage you no longer use. &lt;/p&gt;

&lt;p&gt;A simple way to cut costs is to delete unattached EBS volumes. Volumes marked as “available” are not attached to any instance, so they serve no traffic. You should delete an orphaned volume only after checking that you don’t need the data in it. A good option is to take a snapshot of the EBS volume and then delete it. &lt;/p&gt;
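&lt;p&gt;Finding these orphans is a straightforward filter on the volume state. The dictionaries below are simplified mock records, not the exact AWS API schema: &lt;/p&gt;

```python
# Mock volume listing; "available" means the volume is not attached to any instance.
volumes = [
    {"id": "vol-a", "state": "in-use"},
    {"id": "vol-b", "state": "available"},   # orphaned
    {"id": "vol-c", "state": "available"},   # orphaned
]

def find_orphans(vols):
    # Collect volumes that are provisioned (and billing) but serve no traffic.
    return [v["id"] for v in vols if v["state"] == "available"]

print(find_orphans(volumes))  # ['vol-b', 'vol-c']
```

&lt;p&gt;Each ID returned is a candidate for snapshot-then-delete, after confirming the data is no longer needed. &lt;/p&gt;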

&lt;p&gt;You can use &lt;a href="https://n2ws.com/blog/aws-cloud/how-to-create-a-disaster-recovery-plan-for-aws"&gt;AWS snapshots for disaster recovery&lt;/a&gt;. They are a great tool to compress the data, and they are cheaper because they are stored in Amazon S3, which has lower rates. This lets you keep cold data that you don’t need to access frequently, while retaining the ability to restore the EBS volume if needed. &lt;/p&gt;

&lt;h3&gt;
  
  
  3. Identify Idle Volumes
&lt;/h3&gt;

&lt;p&gt;Having eliminated unattached volumes, you need to look out for volumes that are still attached but aren’t doing anything, and which generate unnecessary charges. To discover these idle volumes, a good tip is to look at volume throughput and IOPS. If a volume has not had any traffic or disk operations in a while, it is not in use and can be deleted.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Tag Everything
&lt;/h3&gt;

&lt;p&gt;Tags are useful tools that allow you to locate the high-cost areas of your deployment. &lt;br&gt;
By applying tags to EBS volumes, you can search, manage, and filter resources using metadata. &lt;/p&gt;

&lt;p&gt;Moreover, you can use tags to organize and edit resources, forming groups within the AWS console. &lt;/p&gt;
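&lt;p&gt;For example, a cost-allocation tag lets you total storage per group. The tag key “team” and the volumes below are illustrative assumptions, not AWS defaults: &lt;/p&gt;

```python
from collections import defaultdict

# Mock volumes carrying a hypothetical "team" cost-allocation tag.
volumes = [
    {"id": "vol-1", "size_gb": 100, "tags": {"team": "web"}},
    {"id": "vol-2", "size_gb": 300, "tags": {"team": "data"}},
    {"id": "vol-3", "size_gb": 200, "tags": {"team": "web"}},
]

def storage_by_tag(vols, key):
    # Sum provisioned gigabytes per tag value; untagged volumes get their own bucket.
    totals = defaultdict(int)
    for v in vols:
        totals[v["tags"].get(key, "untagged")] += v["size_gb"]
    return dict(totals)

print(storage_by_tag(volumes, "team"))  # {'web': 300, 'data': 300}
```

&lt;p&gt;An “untagged” bucket that keeps growing is itself a useful signal that resources are escaping your cost tracking. &lt;/p&gt;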

&lt;h3&gt;
  
  
  5. Manage Snapshots
&lt;/h3&gt;

&lt;p&gt;AWS snapshots are copies of the data present in an EBS volume. Since snapshots are cheaper than active EBS volumes, they are an easy way to back up unattached volumes before deleting them.&lt;/p&gt;

&lt;p&gt;However, snapshots become outdated over time. A good rule of thumb is to define a period after which snapshots are no longer relevant. You can then configure the system to periodically delete older snapshots, deciding how many to retain per volume. &lt;/p&gt;
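&lt;p&gt;Such a retention policy can be sketched as “keep the newest N per volume, delete the rest.” The snapshot records and dates below are illustrative: &lt;/p&gt;

```python
from datetime import date

# Mock snapshot inventory for a single volume.
snapshots = [
    {"id": "snap-1", "volume": "vol-a", "created": date(2020, 1, 5)},
    {"id": "snap-2", "volume": "vol-a", "created": date(2020, 4, 1)},
    {"id": "snap-3", "volume": "vol-a", "created": date(2020, 5, 20)},
]

def snapshots_to_delete(snaps, keep_per_volume=2):
    # Group snapshots by volume, then mark everything beyond the newest N as stale.
    by_volume = {}
    for s in snaps:
        by_volume.setdefault(s["volume"], []).append(s)
    stale = []
    for vol_snaps in by_volume.values():
        vol_snaps.sort(key=lambda s: s["created"], reverse=True)
        stale.extend(s["id"] for s in vol_snaps[keep_per_volume:])
    return stale

print(snapshots_to_delete(snapshots))  # ['snap-1']
```

&lt;p&gt;Running a routine like this on a schedule keeps snapshot storage bounded instead of growing indefinitely. &lt;/p&gt;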

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;While managing the costs of your AWS platform can seem tricky at first, it all boils down to monitoring your usage periodically. The key is to identify unused space and reorganize it. Equipped with these tips, you can minimize costs and optimize your platform performance. &lt;/p&gt;

</description>
      <category>aws</category>
    </item>
  </channel>
</rss>
