<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Arunasri Maganti</title>
    <description>The latest articles on Forem by Arunasri Maganti (@arunasri).</description>
    <link>https://forem.com/arunasri</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1546955%2Ffd70ee3b-d6b4-4cf1-bb3f-4eef8aa2db4e.png</url>
      <title>Forem: Arunasri Maganti</title>
      <link>https://forem.com/arunasri</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/arunasri"/>
    <language>en</language>
    <item>
      <title>Why Prioritizing Cloud Security Best Practices is Critical in 2024</title>
      <dc:creator>Arunasri Maganti</dc:creator>
      <pubDate>Tue, 15 Oct 2024 11:40:36 +0000</pubDate>
      <link>https://forem.com/techpartner/why-prioritizing-cloud-security-best-practices-is-critical-in-2024-276g</link>
      <guid>https://forem.com/techpartner/why-prioritizing-cloud-security-best-practices-is-critical-in-2024-276g</guid>
      <description>&lt;p&gt;In today’s hyper digitalized age, securing cloud infrastructure is no longer just an option. It has become a necessity as more and more organizations migrate workloads to the cloud. Back in 2019, &lt;a href="https://www.gartner.com/smarterwithgartner/is-the-cloud-secure" rel="noopener noreferrer"&gt;Gartner wrote&lt;/a&gt; that, “Through 2025, 99% of cloud security failures will be the customer’s fault.” As 2025 approaches in 3 months, it is now more important than ever to ensure that sensitive data is protected, regulatory compliance is maintained, and that the evolving and dynamic cyber threat landscape is mitigated. Amazon Web Services (AWS) includes a detailed &lt;a href="https://aws.amazon.com/security/" rel="noopener noreferrer"&gt;cloud security framework&lt;/a&gt; to ensure the safety of cloud-based access and associated systems. Cloud security best practices and cloud security tools are mandatory to leverage the strength of AWS infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security in AWS Cloud is a Shared Responsibility Model
&lt;/h2&gt;

&lt;p&gt;AWS’s &lt;a href="https://aws.amazon.com/compliance/shared-responsibility-model/" rel="noopener noreferrer"&gt;shared responsibility model&lt;/a&gt; divides ownership of different security aspects between AWS and the customer. While AWS secures the infrastructure, such as the physical servers and networking hardware, customers are responsible for securing the information and applications that reside on those servers and for maintaining access control.&lt;br&gt;
AWS has built-in security guardrails, which are a good first line of protection. &lt;a href="https://aws.amazon.com/iam/" rel="noopener noreferrer"&gt;AWS Identity and Access Management (IAM)&lt;/a&gt; manages identities and permissions, &lt;a href="https://aws.amazon.com/kms/" rel="noopener noreferrer"&gt;AWS Key Management Service (KMS)&lt;/a&gt; encrypts data, and &lt;a href="https://aws.amazon.com/cloudtrail/" rel="noopener noreferrer"&gt;AWS CloudTrail&lt;/a&gt; monitors what’s going on, but configuring them to match best practices is up to you. Combining these cloud security tools with the right cloud security policy can make your cloud far more resilient to threats.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Security in the Cloud Matters
&lt;/h2&gt;

&lt;p&gt;Cloud security is arguably the most important aspect of your AWS infrastructure. In 2023, the average cost of a data breach worldwide was $4.45 million, according to IBM’s &lt;a href="https://www.ibm.com/reports/data-breach" rel="noopener noreferrer"&gt;Cost of a Data Breach Report&lt;/a&gt;. Cloud security failures can cost you a data breach or a regulatory fine and tarnish your company’s reputation. By following &lt;a href="https://aws.amazon.com/architecture/security-identity-compliance/" rel="noopener noreferrer"&gt;AWS security best practices&lt;/a&gt;, you protect yourself from these risks, and you also help your organization meet industry security standards, such as HIPAA for healthcare and PCI DSS for finance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Reasons to Adopt AWS Security Best Practices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;- Data Protection:&lt;/strong&gt; AWS provides multiple security layers, but you must especially focus on encrypting data at rest and in transit. Using Amazon S3 server-side encryption (configured per region) together with encrypted connections between your EC2 instances and S3, you can prevent serious data exploitation: even if traffic is intercepted, only the authorized EC2 instance can read the data.&lt;/p&gt;
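As a concrete sketch of encryption at rest, default encryption can be enabled on a bucket with a server-side encryption configuration like the one below. The bucket name is a hypothetical placeholder, and the aws call is shown commented out because it needs real credentials; the snippet only writes and locally validates the configuration document:

```shell
# Standard server-side-encryption configuration document for the
# put-bucket-encryption API. The bucket name is a hypothetical placeholder.
cat > encryption.json <<'EOF'
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms"
      }
    }
  ]
}
EOF

# Requires AWS credentials, so it is left commented out here:
# aws s3api put-bucket-encryption \
#   --bucket my-example-bucket \
#   --server-side-encryption-configuration file://encryption.json

python3 -m json.tool encryption.json > /dev/null && echo "encryption.json parses"
```

With this in place, new objects in the bucket are encrypted with KMS keys by default, without any change to the applications writing to it.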

&lt;p&gt;&lt;strong&gt;- Compliance:&lt;/strong&gt; AWS infrastructure, and the applications running on it, can comply with regulations like GDPR and SOC 2, but proper configuration is key. &lt;br&gt;
&lt;a href="https://aws.amazon.com/security-hub/" rel="noopener noreferrer"&gt;AWS Security Hub&lt;/a&gt; simplifies this by giving you a clear view of your security across all AWS accounts. It automatically checks your environment against standards like CIS, PCI DSS, and ISO 27001, flagging issues so you can address them quickly. It also integrates with other AWS services like GuardDuty, Inspector, and Macie, along with third-party tools, offering a centralized view of all security concerns. With Security Hub, you get continuous monitoring and easy-to-follow reports that make staying compliant and secure much simpler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Access Management:&lt;/strong&gt; You can enforce fine-grained access control with AWS IAM. Least privilege is the rule: define user and group policies that grant people access only to the resources they truly need, reducing your attack surface.&lt;/p&gt;
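To make least privilege concrete, here is a minimal policy sketch granting read-only access to a single hypothetical bucket; the bucket ARNs are placeholders you would replace with your own resources:

```shell
# Minimal least-privilege IAM policy: read-only access to one bucket.
# The bucket ARNs are hypothetical placeholders.
cat > least-privilege-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*"
      ]
    }
  ]
}
EOF
python3 -m json.tool least-privilege-policy.json > /dev/null && echo "policy parses"
```

A policy like this would typically be attached to a group or role rather than to individual users, so permissions stay auditable as the team grows.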

&lt;h2&gt;
  
  
  Strengthening Your AWS Security in Cloud
&lt;/h2&gt;

&lt;p&gt;Here’s how you can bolster your AWS cloud security:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- AWS Native Tools:&lt;/strong&gt; AWS offers a collection of security capabilities, such as Amazon GuardDuty for threat detection and AWS Shield for DDoS protection, both built to integrate natively and intelligently with your cloud infrastructure.&lt;br&gt;
&lt;strong&gt;- Principle of Least Privilege:&lt;/strong&gt; Grant users only the level of privilege they need, and use IAM roles instead of static credentials to reduce accidents that might expose sensitive data.&lt;br&gt;
&lt;strong&gt;- Implement Multi-Factor Authentication (MFA):&lt;/strong&gt; MFA adds an extra layer of protection. &lt;a href="https://www.verizon.com/business/resources/reports/2023-data-breach-investigations-report-dbir.pdf" rel="noopener noreferrer"&gt;Verizon’s 2023 Data Breach Investigations Report&lt;/a&gt; states that 61% of data breaches involve credential compromise. MFA can block unauthorized access even when credentials have been compromised.&lt;br&gt;
&lt;strong&gt;- Encrypt Everything at Rest &amp;amp; in Motion:&lt;/strong&gt; Encrypt all data, incoming and outgoing, using AWS encryption tools such as AWS KMS and SSL/TLS certificates, so that even if data is intercepted, it cannot be read or used without the proper decryption keys.&lt;/p&gt;
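One widely used way to make MFA non-optional is an IAM policy that denies everything when no MFA is present, using the documented aws:MultiFactorAuthPresent condition key. The sketch below only writes and locally validates the policy document; attaching it to users or groups requires real AWS credentials:

```shell
# Deny all actions unless the caller authenticated with MFA.
cat > require-mfa.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllWithoutMFA",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
EOF
python3 -m json.tool require-mfa.json > /dev/null && echo "policy parses"
```

Because IAM evaluates an explicit Deny before any Allow, this guardrail overrides whatever other permissions a user has until they sign in with MFA.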

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As the cloud continues its rapid evolution, so do the vulnerabilities and threats. By following AWS best practices, businesses not only secure their data but also build a foundation of credibility with their customers that will help them succeed in the long run. With Cybersecurity Ventures estimating that &lt;a href="https://www.esentire.com/cybersecurity-fundamentals-defined/glossary/cybersecurity-ventures-report-on-cybercrime" rel="noopener noreferrer"&gt;cybercrime’s global costs will reach $10.5 trillion a year by 2025&lt;/a&gt;, no one can afford not to take steps to secure their cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Your Free &lt;a href="https://www.techpartneralliance.com/the-ultimate-aws-security-guide/" rel="noopener noreferrer"&gt;Ultimate AWS Security Guide&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Figuring out how to secure your AWS Cloud may seem daunting, but we’ve made it easier. Download your free copy of our &lt;a href="https://www.techpartneralliance.com/the-ultimate-aws-security-guide/" rel="noopener noreferrer"&gt;Ultimate AWS Security Guide&lt;/a&gt;, and you’ll gain practical insight into creating IAM policies, using encryption, writing an incident response plan and more. We’re here to help you secure your cloud infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Techpartner
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.techpartneralliance.com/" rel="noopener noreferrer"&gt;Techpartner Alliance&lt;/a&gt; is an &lt;a href="https://partners.amazonaws.com/partners/001E000000t1TtfIAE/Techpartner%20Alliance%20Pvt%20Ltd." rel="noopener noreferrer"&gt;AWS Advanced Partner&lt;/a&gt; with 10 years of experience delivering AWS solutions. It was founded in 2014 by &lt;a href="https://www.linkedin.com/in/ravindrakatti/" rel="noopener noreferrer"&gt;Ravindra Katti&lt;/a&gt; (previously Director and Head of IT, Gupshup) and &lt;a href="https://www.linkedin.com/in/prasadwani/" rel="noopener noreferrer"&gt;Prasad Wani&lt;/a&gt;. As a TechOps organization, we are the go-to partner for businesses for all things technology. We offer more than just individual benefits by blending our specialized cloud security services with AWS’s reliable infrastructure. Our seamless integration future-proofs network infrastructures, enabling businesses to become more efficient, scalable, and innovative. We provide exhaustive cloud security solutions that truly meet all your needs.&lt;/p&gt;

&lt;p&gt;AWS recommends conducting Well-Architected Framework Reviews (WAFR) regularly to ensure continued alignment of cloud architectures with best practices and business objectives. Here’s where we come in: &lt;a href="https://www.techpartneralliance.com/" rel="noopener noreferrer"&gt;Techpartner Alliance&lt;/a&gt; is an AWS Advanced Partner and a certified &lt;a href="https://www.techpartneralliance.com/well-architected-review/" rel="noopener noreferrer"&gt;AWS Well-Architected Review Partner&lt;/a&gt;. In other words, we are fully equipped to conduct the Well-Architected Framework Review, with a particular focus on its security pillar.&lt;/p&gt;

&lt;p&gt;Follow our &lt;a href="https://www.linkedin.com/company/techpartner-alliance/" rel="noopener noreferrer"&gt;LinkedIn Page&lt;/a&gt; and check out our other &lt;a href="https://www.techpartneralliance.com/blogs/" rel="noopener noreferrer"&gt;Blogs&lt;/a&gt; to stay updated on the latest tech trends and AWS Cloud.&lt;/p&gt;

&lt;p&gt;Set up a &lt;a href="https://forms.gle/McZixym8kjjihDu59" rel="noopener noreferrer"&gt;complimentary security assessment&lt;/a&gt; for your IT infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
    </item>
    <item>
      <title>IPv6 Migration Simplified: Techpartner's Blueprint for Future-Proofing Your Network</title>
      <dc:creator>Arunasri Maganti</dc:creator>
      <pubDate>Mon, 16 Sep 2024 15:12:09 +0000</pubDate>
      <link>https://forem.com/techpartner/ipv6-migration-simplified-techpartners-blueprint-for-future-proofing-your-network-5hgp</link>
      <guid>https://forem.com/techpartner/ipv6-migration-simplified-techpartners-blueprint-for-future-proofing-your-network-5hgp</guid>
      <description>&lt;p&gt;In the enormous landscape of networking and the internet, migration to IPv6 is a critical evolution. Currently, &lt;a href="https://www.google.com/intl/en/ipv6/statistics.html#tab=ipv6-adoption" rel="noopener noreferrer"&gt;global IPv6 adoption&lt;/a&gt; is at 46% as of Sept, 2024 with &lt;a href="https://www.google.com/intl/en/ipv6/statistics.html#tab=per-country-ipv6-adoption" rel="noopener noreferrer"&gt;India leading the charge&lt;/a&gt; at 70% IPv6 adoption.  But what, in essence, is the migration to IPv6 and why is it important? If you haven’t yet migrated to IPv6 or you’re facing challenges post migration - dive into this comprehensive blog for more information. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is IPv6 migration?
&lt;/h2&gt;

&lt;p&gt;IPv6 migration is the transition from the current Internet Protocol version, generally referred to as IPv4, to the newer and more advanced Internet Protocol version 6. IPv4 has a 32-bit address space, which supports about 4.3 billion addresses on the internet, and the pool of new, unique IPv4 addresses ran out in 2019. The IPv6 address space, by contrast, is 128 bits, making the number of unique addresses practically inexhaustible. This transition is essential for internet growth and scalability.&lt;/p&gt;
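The gap between the two address spaces is easy to verify yourself; the quick calculation below compares 2^32 with 2^128 and only needs a local python3:

```shell
# Compare the IPv4 (32-bit) and IPv6 (128-bit) address spaces.
python3 -c 'print("IPv4 addresses:", 2**32)'
python3 -c 'print("IPv6 addresses:", 2**128)'
```

2^32 is 4,294,967,296, the roughly 4.3 billion figure above, while 2^128 is about 3.4 × 10^38, which is why IPv6 exhaustion is not a practical concern.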

&lt;h2&gt;
  
  
  Why is IPv6 Adoption Critical?
&lt;/h2&gt;

&lt;p&gt;IPv6 adoption is important for many reasons:&lt;br&gt;
&lt;strong&gt;·       Address Exhaustion:&lt;/strong&gt; As IPv4 addresses have run out, IPv6 is essential for the vast address space it provides to accommodate the rising number of internet-connected devices.&lt;br&gt;
&lt;strong&gt;·       Better Security:&lt;/strong&gt; IPv6 was designed with security in mind and offers security-focused features such as built-in IPsec support for end-to-end encryption.&lt;br&gt;
&lt;strong&gt;·       Better Performance:&lt;/strong&gt; IPv6 minimizes the size of routing tables, making routing more efficient and enhancing overall network performance, a necessity in the world of high-speed internet.&lt;br&gt;
&lt;strong&gt;·       Simplified Network Configuration:&lt;/strong&gt; IPv6 provides Stateless Address Autoconfiguration (SLAAC), with which a device can configure its IP address automatically, without manual configuration.&lt;/p&gt;
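As an illustration of SLAAC’s classic EUI-64 scheme, the snippet below derives the 64-bit interface identifier from a hypothetical MAC address: flip the universal/local bit of the first byte and insert ff:fe between the two halves. (Real deployments often use privacy addresses instead of EUI-64.)

```shell
# Derive an EUI-64 interface identifier from a (hypothetical) MAC address.
python3 - <<'PY'
mac = "00:1a:2b:3c:4d:5e"                 # placeholder MAC address
b = [int(x, 16) for x in mac.split(":")]
b[0] ^= 0x02                              # flip the universal/local bit
e = b[:3] + [0xff, 0xfe] + b[3:]          # insert ff:fe between the halves
print(":".join("%02x%02x" % (e[i], e[i + 1]) for i in range(0, 8, 2)))
PY
```

This prints 021a:2bff:fe3c:4d5e, the host portion that the device appends to the /64 prefix advertised by its router.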

&lt;h2&gt;
  
  
  What are the Three Types of IPv6 Migration Techniques?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;·       Dual Stack:&lt;/strong&gt; This method runs IPv4 and IPv6 concurrently on the same infrastructure, so systems can communicate over whichever protocol the other side supports; this preserves compatibility with minimal disruption.&lt;br&gt;
&lt;strong&gt;·       IPv6 Tunneling:&lt;/strong&gt; In a process known as tunneling, IPv6 packets are encapsulated within IPv4 packets so they can move through IPv4 infrastructure. This is a practical transitional measure that allows IPv6 connectivity even while parts of the network still use IPv4.&lt;br&gt;
&lt;strong&gt;·       Translation:&lt;/strong&gt; This technique translates IPv6 packets to IPv4 and vice versa so that hosts on both protocols can communicate with each other. It is particularly useful for legacy systems that must communicate in an increasingly IPv6-dominated world.&lt;/p&gt;
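For the translation case, NAT64 commonly embeds an IPv4 address in the well-known prefix 64:ff9b::/96 (RFC 6052). The one-liner below shows the mapping for a documentation IPv4 address, using Python’s standard ipaddress module:

```shell
# Embed IPv4 192.0.2.33 into the NAT64 well-known prefix 64:ff9b::/96.
python3 -c 'import ipaddress; print(ipaddress.ip_address("64:ff9b::192.0.2.33"))'
```

The last 32 bits of the result (c000:221) are simply the hexadecimal form of 192.0.2.33; a NAT64 gateway reverses this mapping when forwarding replies back to the IPv4 side.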

&lt;h2&gt;
  
  
  What are the challenges in IPv6 Migration?
&lt;/h2&gt;

&lt;p&gt;Although the introduction of IPv6 offers various advantages, it is accompanied by a few challenges:&lt;br&gt;
&lt;strong&gt;·       Compatibility:&lt;/strong&gt; Making all devices and software compatible with IPv6 is complex and time-consuming.&lt;br&gt;
&lt;strong&gt;·       Cost:&lt;/strong&gt; Ensuring IPv6 readiness through infrastructure development is expensive, particularly for large organizations with extensive networks.&lt;br&gt;
&lt;strong&gt;·       Training and Knowledge:&lt;/strong&gt; Technical staff must be trained for both management and troubleshooting of IPv6 networks.&lt;br&gt;
&lt;strong&gt;·       Transition Complexity:&lt;/strong&gt; The coexistence of IPv4 and IPv6 must be handled with great care and well-executed plans during the transition period.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can IPv6 Migration Challenges be Addressed?
&lt;/h2&gt;

&lt;p&gt;The following solutions can assist:&lt;br&gt;
&lt;strong&gt;·       Gradual Transition:&lt;/strong&gt; A dual-stack approach enables a smooth transition in which disruption is minimized and compatibility is assured.&lt;br&gt;
&lt;strong&gt;·       Training Programs:&lt;/strong&gt; Invest in training for IT personnel so they are well-equipped with the knowledge and skills needed to manage IPv6 networks.&lt;br&gt;
&lt;strong&gt;·       Cost Management:&lt;/strong&gt; Planning and budgeting help control the cost of migration; making maximum use of existing infrastructure reduces it further.&lt;br&gt;
&lt;strong&gt;·       Automated Tools:&lt;/strong&gt; Automated tools can assist with network configuration and with implementing the transition, reducing the load on the technical workforce.&lt;/p&gt;

&lt;h2&gt;
  
  
  Techpartner Alliance Case Study: IPv6 Migration for a Digital Lending NBFC
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Client Profile&lt;/strong&gt;&lt;br&gt;
This client is a leading NBFC lending to Micro, Small, and Medium Enterprises (MSMEs). As a Systemically Important, Non-Deposit-taking NBFC, it has partnered with over 25,000 enterprises and disbursed loans exceeding $1 billion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges&lt;/strong&gt;&lt;br&gt;
The challenges that the IPv6 transition posed to the client included a lack of adequate knowledge about IPv6, regulatory compliance requirements, and managing migration costs. They also faced difficulties in configuring dual-stack networks and ensuring compatibility between their head office and branch offices. These issues put their operational efficiency and scalability at risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
Techpartner audited the client's existing IPv4 infrastructure and developed a detailed migration plan with a focus on compliance and cost. We configured dual-stack support for both IPv4 and IPv6, ensured secure deployment of communication protocols, and provided thorough training with continuous support to make sure there were no bottlenecks in the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits&lt;/strong&gt;&lt;br&gt;
Through Techpartner’s IPv6 migration, the client achieved full compliance with the new regulations. This simultaneously brought down operational costs and improved network compatibility. They also increased their security and future-proofed their infrastructure to provide seamless operations across all office locations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of the IPv6 Migration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Dual-Stack Configuration:&lt;/strong&gt; Integration of dual-stack IPv4 and IPv6 support within AWS assured the client of seamless operations across networks. This increased network performance, compliance, and cost efficiency and put them on easier footing to grow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- IPSec Encrypted Tunnels:&lt;/strong&gt; The security of the AWS platform was leveraged to build encrypted IPSec tunnels over IPv6, providing a secure method of communication from the head office to the branch offices. This raised network security, simplified compliance, and made data exchange across their infrastructure reliable.&lt;br&gt;
Together, these changes made the client markedly more scalable, secure, and efficient in operations, positioning the firm for long-term success.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The migration toward IPv6 is a must for the internet to continue to develop and grow. While it poses a few challenges, it is an indispensable step toward improved security, better performance, and a virtually limitless address space. For a business, transitioning to IPv6 requires careful planning and execution, but doing so future-proofs its network. IPv6 may be daunting at first sight, but it is a leap toward a stronger and more scalable internet infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Techpartner
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.techpartneralliance.com/" rel="noopener noreferrer"&gt;Techpartner Alliance&lt;/a&gt; is an &lt;a href="https://partners.amazonaws.com/partners/001E000000t1TtfIAE/Techpartner%20Alliance%20Pvt%20Ltd." rel="noopener noreferrer"&gt;AWS Advanced Partner&lt;/a&gt; with 10 years of experience delivering AWS solutions. It was founded in 2014 by &lt;a href="https://www.linkedin.com/in/ravindrakatti/" rel="noopener noreferrer"&gt;Ravindra Katti&lt;/a&gt; (previously Director and Head of IT, Gupshup) and &lt;a href="https://www.linkedin.com/in/prasadwani/" rel="noopener noreferrer"&gt;Prasad Wani&lt;/a&gt;. As a TechOps organization, we are the go-to partner for businesses for all things technology. We offer more than just individual benefits by blending our specialized services with AWS's reliable infrastructure. This collaboration enhances performance, reliability, and security, ensuring our customers meet regulatory requirements and reduce costs. Our seamless integration future-proofs network infrastructures, enabling businesses to become more efficient, scalable, and innovative. Together we provide a comprehensive solution that truly meets all your needs.&lt;/p&gt;

&lt;p&gt;Follow our &lt;a href="https://www.linkedin.com/company/techpartner-alliance/" rel="noopener noreferrer"&gt;LinkedIn Page&lt;/a&gt; and check out our other &lt;a href="https://www.techpartneralliance.com/blogs/" rel="noopener noreferrer"&gt;Blogs&lt;/a&gt; to stay updated on the latest tech trends and AWS Cloud.&lt;br&gt;
Set up a &lt;a href="https://forms.gle/eFqs53pgAMroQ3Vu9" rel="noopener noreferrer"&gt;complimentary migration readiness assessment&lt;/a&gt; for your IT infrastructure.&lt;/p&gt;

</description>
      <category>ipv6</category>
      <category>networking</category>
      <category>ipv4</category>
      <category>migration</category>
    </item>
    <item>
      <title>Eclipse Che on AWS with EFS</title>
      <dc:creator>Arunasri Maganti</dc:creator>
      <pubDate>Tue, 27 Aug 2024 10:00:50 +0000</pubDate>
      <link>https://forem.com/techpartner/eclipse-che-on-aws-with-efs-38b8</link>
      <guid>https://forem.com/techpartner/eclipse-che-on-aws-with-efs-38b8</guid>
      <description>&lt;p&gt;This blog is for Eclipse Che 7 (Kubernetes-Native in-browser IDE) on AWS with EFS Integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcugt5vbzeifzc68b5tvo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcugt5vbzeifzc68b5tvo.jpg" alt="Image description" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Eclipse Che makes Kubernetes development accessible for developer teams, providing one-click developer workspaces and eliminating local environment configuration for your entire team. Che brings your Kubernetes application into your development environment and provides an in-browser IDE, allowing you to code, build, test, and run applications exactly as they run in production, from any machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Eclipse Che Works&lt;/strong&gt;&lt;br&gt;
One-click centrally hosted workspaces&lt;br&gt;
Kubernetes-native containerized development&lt;br&gt;
In-browser extensible IDE&lt;br&gt;
Here we will go through how to install Eclipse Che 7 on the AWS Cloud; Che focuses on simplifying writing, building, and collaborating on cloud-native applications for teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
A running instance of Kubernetes, version 1.9 or higher.&lt;br&gt;
The kubectl tool installed.&lt;br&gt;
The chectl tool installed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing Kubernetes on Amazon EC2&lt;/strong&gt;&lt;br&gt;
Launch a minimally sized Linux EC2 instance, such as a t3.nano or t3.micro.&lt;br&gt;
Set up the AWS Command Line Interface (AWS CLI). For detailed installation instructions, see Installing the AWS CLI.&lt;br&gt;
Install Kubernetes on EC2. There are several ways to get a running Kubernetes instance on EC2; here, the kops tool is used. For details, see Installing Kubernetes with kops. You will also need kubectl alongside kops, which can be found at Installing kubectl.&lt;br&gt;
Create a role with admin privileges and attach it to the EC2 instance where kops is installed. This role will be used to create the Kubernetes cluster (master and nodes with Auto Scaling groups), update Route 53, and create the load balancer for ingress. For detailed instructions, see Creating Role for EC2.&lt;/p&gt;
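As a sketch of that role-creation step, the trust policy below is what lets EC2 assume the role; the role name is a hypothetical placeholder, and the aws iam calls are commented out because they need real credentials:

```shell
# Trust policy allowing EC2 instances to assume the role.
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Requires AWS credentials; the role name is a hypothetical placeholder.
# aws iam create-role --role-name kops-admin-role \
#   --assume-role-policy-document file://ec2-trust-policy.json
# aws iam attach-role-policy --role-name kops-admin-role \
#   --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

python3 -m json.tool ec2-trust-policy.json > /dev/null && echo "trust policy parses"
```

After attaching the role (via an instance profile) to the EC2 instance, kops running on that instance inherits the permissions without any long-lived keys on disk.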

&lt;p&gt;To summarize: so far we have installed the AWS CLI, kubectl, and the kops tool, and attached an AWS admin role to the EC2 instance.&lt;/p&gt;

&lt;p&gt;Next, we need Route 53 records that kops can use for the Kubernetes API, etcd, and so on.&lt;/p&gt;

&lt;p&gt;Throughout the document, I will be using eclipse.mydomain.com as my cluster domain.&lt;/p&gt;

&lt;p&gt;Now, let’s create a public hosted zone for “eclipse.mydomain.com” in Route 53. Once done, make a note of the zone ID, which will be used later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsca74drm73ae4rtpapgc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsca74drm73ae4rtpapgc.jpg" alt="Image description" width="468" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the four DNS nameservers from the eclipse.mydomain.com hosted zone, create a new NS record on mydomain.com, and add the copied DNS entries to it. Note that when using a custom DNS provider, the record update can take a few hours to propagate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faq7n5qzjfycoyjkdv94p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faq7n5qzjfycoyjkdv94p.jpg" alt="Image description" width="415" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create the Amazon Simple Storage Service (S3) bucket that will store the kops configuration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws s3 mb s3://eclipse.mydomain.com
make_bucket: eclipse.mydomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Inform kops of this new bucket:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export KOPS_STATE_STORE=s3://eclipse.mydomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create the kops cluster by providing the cluster zone. For example, for the Mumbai region, the zone is ap-south-1a.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kops create cluster --zones=ap-south-1a apsouth-1a.eclipse.mydomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above kops command will create a new VPC with CIDR 172.20.0.0/16 and a new subnet for the master and nodes of the Kubernetes cluster, and will use a Debian OS image by default. In case you want to use your own existing VPC, subnet, and AMI, use the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$  kops create cluster --zones=ap-south-1a apsouth-1a.eclipse.mydomain.com --image=ami-0927ed83617754711 --vpc=vpc-01d8vcs04844dk46e --subnets=subnet-0307754jkjs4563k0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above kops command uses an Ubuntu 18.04 AMI for the master and worker nodes. You can use your own AMIs as well.&lt;/p&gt;

&lt;p&gt;You can review or update the configuration for the cluster, master, and nodes using the commands below.&lt;/p&gt;

&lt;p&gt;For cluster —&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kops edit cluster — name=ap-south-1a.eclipse.mydomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For master —&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kops edit ig — name=ap-south-1a.eclipse.mydomain.com master-ap-south-1a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For nodes —&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kops edit ig — name=ap-south-1a.eclipse.mydomain.com nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the cluster, master, and node configs are reviewed and updated, you can create the cluster using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kops update cluster --name ap-south-1a.eclipse.mydomain.com --yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the cluster is ready, validate it using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kops validate cluster

Using cluster from kubectl context: ap-south-1a.eclipse.mydomain.com

Validating cluster ap-south-1a.eclipse.mydomain.com
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a   Master  m3.medium    1    1    ap-south-1a
nodes               Node    t2.medium    2    2    ap-south-1a

NODE STATUS
NAME                                         ROLE    READY
ip-172-20-38-26.ap-south-1.compute.internal   node    True
ip-172-20-43-198.ap-south-1.compute.internal  node    True
ip-172-20-60-129.ap-south-1.compute.internal  master  True

Your cluster ap-south-1a.eclipse.mydomain.com is ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It may take approximately 10-12 minutes for the cluster to come up.&lt;br&gt;
Check the cluster using the kubectl command. The context is also configured automatically by the kops tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl config current-context
ap-south-1a.eclipse.mydomain.com
$ kubectl get pods --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All the pods in the running state are displayed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing Ingress-nginx&lt;/strong&gt;&lt;br&gt;
To install ingress-nginx:&lt;br&gt;
Install the ingress-nginx configuration from the GitHub location below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/mandatory.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the configuration for AWS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/service-l4.yaml

$ kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/patch-configmap-l4.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following output confirms that the Ingress controller is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --namespace ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-76c86d76c4-gswmg   1/1     Running   0          9m3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the pod is not ready yet, wait a couple of minutes and check again.&lt;/p&gt;

&lt;p&gt;Find the external endpoint (the ELB hostname) of ingress-nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get services --namespace ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[0].hostname}'
ade9c9f48b2cd11e9a28c0611bc28f24-1591254057.ap-south-1.elb.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Troubleshooting: If the output is empty, the cluster has configuration issues. Use the following command to find the cause:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe service -n ingress-nginx ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, in Route53, create a wildcard record in the zone eclipse.mydomain.com pointing at the load-balancer URL returned by the previous kubectl get services command. You can create either a CNAME record or an Alias A record.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0u1g4daz9bz6nvhtc1o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0u1g4daz9bz6nvhtc1o.jpg" alt="Image description" width="415" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following is an example of the resulting window after adding all the values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhoipurssca1fsxpiqlt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhoipurssca1fsxpiqlt.jpg" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;
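&lt;p&gt;The same wildcard record can also be created from the CLI. The following is a sketch only; the hosted zone ID and ELB hostname are placeholders to replace with your own values:&lt;/p&gt;

```shell
# Create (or update) the wildcard CNAME record pointing at the ingress ELB.
# ZONE_ID and LB_HOSTNAME below are placeholders - substitute your own values.
ZONE_ID=Z0000000000EXAMPLE
LB_HOSTNAME=ade9c9f48b2cd11e9a28c0611bc28f24-1591254057.ap-south-1.elb.amazonaws.com

aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
  --change-batch "{
    \"Changes\": [{
      \"Action\": \"UPSERT\",
      \"ResourceRecordSet\": {
        \"Name\": \"*.eclipse.mydomain.com\",
        \"Type\": \"CNAME\",
        \"TTL\": 300,
        \"ResourceRecords\": [{\"Value\": \"$LB_HOSTNAME\"}]
      }
    }]
  }"
```

UPSERT creates the record if it does not exist and updates it if it does, which makes the command safe to re-run after the load balancer changes.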

&lt;p&gt;It is now possible to install Eclipse Che on this existing Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enabling the TLS and DNS challenge&lt;/strong&gt;&lt;br&gt;
To use Cloud DNS and TLS, certain permissions must be granted so that cert-manager can manage the DNS challenge for the Let’s Encrypt service.&lt;/p&gt;

&lt;p&gt;In the EC2 Dashboard, identify the IAM role used by the master node and edit it. Add the inline policy below to that role, giving it a descriptive name such as &lt;em&gt;eclipse-che-route53&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetChange",
                "route53:ListHostedZonesByName"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": [
                "arn:aws:route53:::hostedzone/&amp;lt;INSERT_ZONE_ID&amp;gt;"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;em&gt;INSERT_ZONE_ID&lt;/em&gt; with the DNS zone ID you copied earlier when creating the zone.&lt;/p&gt;
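&lt;p&gt;The inline policy can also be attached from the CLI. A sketch only; the role name (which follows the kops naming convention for masters) and the policy file path are assumptions to verify against your own cluster:&lt;/p&gt;

```shell
# Save the inline policy JSON from the step above as route53-policy.json,
# then attach it to the master node's IAM role. The role name below follows
# the kops convention (masters.CLUSTER_NAME) - verify yours in the IAM console.
aws iam put-role-policy \
  --role-name masters.ap-south-1a.eclipse.mydomain.com \
  --policy-name eclipse-che-route53 \
  --policy-document file://route53-policy.json
```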

&lt;p&gt;&lt;strong&gt;Installing cert-manager&lt;/strong&gt;&lt;br&gt;
To install cert-manager, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace cert-manager
namespace/cert-manager created
$ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
namespace/cert-manager labeled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pass the --validate=false flag when applying the manifest; with validation enabled, this only works on the latest Kubernetes versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply \
  -f https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml \
  --validate=false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the Che namespace if it does not already exist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace che
namespace/che created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an IAM user named cert-manager with programmatic access and the policy below. Copy the generated Access Key and Secret Access Key for later use. This user is required to manage Route53 records for eclipse.mydomain.com DNS validation during certificate creation and renewal.&lt;/p&gt;

&lt;p&gt;Policy to attach to the cert-manager IAM user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "route53:GetChange",
            "Resource": "arn:aws:route53:::change/*"
        },
        {
            "Effect": "Allow",
            "Action": "route53:ChangeResourceRecordSets",
            "Resource": "arn:aws:route53:::hostedzone/*"
        },
        {
            "Effect": "Allow",
            "Action": "route53:ListHostedZonesByName",
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
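&lt;p&gt;The console steps above can also be scripted with the AWS CLI. A sketch only; the policy file name and policy name are hypothetical, and the credentials appear in the create-access-key output:&lt;/p&gt;

```shell
# Create the cert-manager IAM user, attach the policy above (saved as
# cert-manager-policy.json), and generate programmatic credentials.
aws iam create-user --user-name cert-manager
aws iam put-user-policy \
  --user-name cert-manager \
  --policy-name cert-manager-route53 \
  --policy-document file://cert-manager-policy.json
# Note the AccessKeyId and SecretAccessKey fields in this command's output.
aws iam create-access-key --user-name cert-manager
```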



&lt;p&gt;Create a Kubernetes secret from the SecretAccessKey content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create secret generic aws-cert-manager-access-key \
  --from-literal=CLIENT_SECRET=&amp;lt;REPLACE WITH SecretAccessKey content&amp;gt; -n cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To create the certificate issuer, change the email address and specify the Access Key ID.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: che-certificate-issuer
spec:
  acme:
    dns01:
      providers:
      - route53:
          region: ap-south-1
          accessKeyID: &amp;lt;USE ACCESS_KEY_ID_CREATED_BEFORE&amp;gt;
          secretAccessKeySecretRef:
            name: aws-cert-manager-access-key
            key: CLIENT_SECRET
        name: route53
    email: user@mydomain.com
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the certificate, editing the domain name value&lt;br&gt;
(eclipse.mydomain.com) in both the dnsNames and the domains entries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
 name: che-tls
 namespace: che
spec:
 secretName: che-tls
 issuerRef:
   name: che-certificate-issuer
   kind: ClusterIssuer
 dnsNames:
   - '*.eclipse.mydomain.com'
 acme:
   config:
     - dns01:
         provider: route53
       domains:
         - '*.eclipse.mydomain.com'
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A new DNS challenge is added to the DNS zone for &lt;em&gt;Let’s Encrypt&lt;/em&gt;. The cert-manager logs contain information about the DNS challenge.&lt;/p&gt;

&lt;p&gt;Obtain the names of the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6587688cb8-wj68p              1/1     Running   0          6h
cert-manager-cainjector-76d56f7f55-zsqjp   1/1     Running   0          6h
cert-manager-webhook-7485dd47b6-88m6l      1/1     Running   0          6h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure that the certificate is ready using the following command. It takes approximately 4-5 minutes for the certificate creation process to complete. Once the certificate is successfully created, you will see the output below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe certificate/che-tls -n che

Status:
  Conditions:
    Last Transition Time:  2019-07-30T14:48:07Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2019-10-28T13:48:05Z
Events:
  Type    Reason         Age    From          Message
  ----    ------         ----   ----          -------
  Normal  OrderCreated   5m29s  cert-manager  Created Order resource "che-tls-3365293372"
  Normal  OrderComplete  3m46s  cert-manager  Order "che-tls-3365293372" completed successfully
  Normal  CertIssued     3m45s  cert-manager  Certificate issued successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the Kubernetes cluster, the Ingress controller (AWS load balancer), and the TLS certificate are ready, we can install Eclipse Che.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing Che on Kubernetes using the chectl command&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Chectl is the Eclipse Che command-line management tool. It is used for operations on the Che server (start, stop, update, delete) and on workspaces (list, start, stop, inject) and to generate devfiles.&lt;/p&gt;

&lt;p&gt;Install the chectl CLI tool to manage the Eclipse Che cluster. For installation instructions, see Installing chectl.&lt;/p&gt;

&lt;p&gt;You will also need Helm and Tiller. To install Helm, follow the instructions at Installing Helm.&lt;/p&gt;

&lt;p&gt;Once chectl is installed, you can install and start the cluster using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chectl server:start --platform=k8s --installer=helm --domain=eclipse.mydomain.com --multiuser --tls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If running without authentication, you can omit the --multiuser flag and start the cluster as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chectl server:start --platform=k8s --installer=helm --domain=eclipse.mydomain.com --tls
✔ ✈️  Kubernetes preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify remote kubernetes status...done.
    ✔ Verify domain is set...set to eclipse.mydomain.com.
  ✔ 🏃‍  Running Helm to install Che
    ✔ Verify if helm is installed
    ✔ Check for TLS secret prerequisites...che-tls secret found.
    ✔ Create Tiller Role Binding...it already exist.
    ✔ Create Tiller Service Account...it already exist.
    ✔ Create Tiller RBAC
    ✔ Create Tiller Service...it already exist.
    ✔ Preparing Che Helm Chart...done.
    ✔ Updating Helm Chart dependencies...done.
    ✔ Deploying Che Helm Chart...done.
  ✔ ✅  Post installation checklist
    ✔ PostgreSQL pod bootstrap
      ✔ scheduling...done.
      ✔ downloading images...done.
      ✔ starting...done.
    ✔ Keycloak pod bootstrap
      ✔ scheduling...done.
      ✔ downloading images...done.
      ✔ starting...done.
    ✔ Che pod bootstrap
      ✔ scheduling...done.
      ✔ downloading images...done.
      ✔ starting...done.
    ✔ Retrieving Che Server URL...https://che-che.eclipse.mydomain.com
    ✔ Che status check
Command server:start has completed successfully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can open the Eclipse Che portal using the URL:&lt;br&gt;
&lt;a href="https://che-che.eclipse.mydomain.com/" rel="noopener noreferrer"&gt;https://che-che.eclipse.mydomain.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Eclipse Che has three components: Che, plugin-registry, and devfile-registry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These three components are versioned together. For an Eclipse Che cluster to function correctly, the images used for Che, the plugin registry, and the devfile registry must all be the same version. The current latest version is 7.13.1.&lt;/p&gt;

&lt;p&gt;However, chectl only has a command-line option to specify the Che image version. If you want to use a newer version of the Che cluster, you will need to upgrade chectl to the matching version. For example, chectl version 7.12.1 is needed to install Che, plugin-registry, and devfile-registry at version 7.12.1, and so on.&lt;/p&gt;
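&lt;p&gt;One way to confirm that the components match is to list the images their deployments are running. A sketch only; the deployment names shown assume a default Helm install of Eclipse Che:&lt;/p&gt;

```shell
# Print each Che component's deployment name and image (whose tag carries the
# version). Deployment names assume a default Helm install of Eclipse Che.
kubectl get deployment che plugin-registry devfile-registry -n che \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'
```

If the three image tags differ, upgrade chectl and redeploy so they align.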

&lt;p&gt;&lt;strong&gt;Advanced Eclipse Che Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, Eclipse Che uses the “common” PVC strategy, which means all workspaces in the same Kubernetes namespace reuse the same PVC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CHE_INFRA_KUBERNETES_PVC_STRATEGY: common
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The challenge this poses on a multi-worker-node cluster is that when workspace pods are launched across worker nodes, they fail: an EBS volume cannot be attached to a node while it is already mounted on another node.&lt;/p&gt;

&lt;p&gt;The other option is the ‘unique’ or ‘per-workspace’ strategy, which creates multiple EBS volumes to manage. The best solution here is a shared file system, which lets us keep the ‘common’ PVC strategy so that all workspaces are created under the same mount.&lt;/p&gt;

&lt;p&gt;We chose EFS for its capabilities. More on EFS here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrating EFS as shared storage for use as eclipse che workspaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Make EFS accessible from the node instances. This is done by adding the node instances’ security group (already created by the kops cluster) to the security group of the EFS file system. Then create the efs-provisioner ConfigMap with your EFS file system ID and region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create configmap efs-provisioner --from-literal=file.system.id=fs-abcdefgh --from-literal=aws.region=ap-south-1 --from-literal=provisioner.name=example.com/aws-efs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
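&lt;p&gt;The security-group change described above can also be made from the CLI. A sketch only; both group IDs are placeholders for your EFS mount-target security group and the kops nodes security group:&lt;/p&gt;

```shell
# Allow NFS (TCP 2049) from the kops nodes' security group into the EFS
# mount targets' security group. Both IDs below are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0efs00000000example \
  --protocol tcp --port 2049 \
  --source-group sg-0nodes000000example
```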



&lt;p&gt;Download the EFS deployment file from the location below using wget:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget https://raw.githubusercontent.com/binnyoza/eclipse-che/master/efs-master.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit efs-master.yaml to use your EFS ID (in three places). Also update the EFS storage size, for example to 50Gi, and apply it using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create --save-config -f efs-master.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the following configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/aws-efs-storage.yaml
kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/efs-pvc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv
kubectl get pvc -n che
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the Che ConfigMap using the command below and add the line shown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl edit configmap -n che
   CHE_INFRA_KUBERNETES_WORKSPACE_PVC_STORAGEClassName: aws-efs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and exit, then restart the Che pod using the command below. Whenever any change is made to the Che ConfigMap, the Che pod must be restarted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl patch deployment che -p   "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n che
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the pod status using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n che
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can start creating workspaces and your IDE environment from the URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://che-che.eclipse.mydomain.com" rel="noopener noreferrer"&gt;https://che-che.eclipse.mydomain.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Eclipse Che version 7.7.1 and below cannot support more than 30 workspaces&lt;/li&gt;
&lt;li&gt;Creating multiple Che clusters in the same VPC leads to TLS certificate creation failures. This appears to be due to rate limits imposed by Let’s Encrypt&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;About Techpartner Alliance&lt;/strong&gt;&lt;br&gt;
Techpartner Alliance is a team of seasoned developers and IT industry leaders. Techpartner specializes in AWS and was established in 2017 by Ravindra Katti, an AWS ex-seller, and Prasad Wani, an AWS cloud architect. Follow our LinkedIn page for regular updates on latest tech trends and AWS cloud!&lt;br&gt;
For more blogs visit: &lt;a href="https://www.techpartneralliance.com/blogs/" rel="noopener noreferrer"&gt;https://www.techpartneralliance.com/blogs/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>eclipse</category>
      <category>aws</category>
      <category>efs</category>
      <category>coding</category>
    </item>
    <item>
      <title>Eclipse Che on AWS with EFS</title>
      <dc:creator>Arunasri Maganti</dc:creator>
      <pubDate>Thu, 08 Aug 2024 10:17:21 +0000</pubDate>
      <link>https://forem.com/arunasri/eclipse-che-on-aws-with-efs-2jcg</link>
      <guid>https://forem.com/arunasri/eclipse-che-on-aws-with-efs-2jcg</guid>
      <description>&lt;p&gt;This blog is for Eclipse Che 7 (Kubernetes-Native in-browser IDE) on AWS with EFS Integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrrznovhv5dsdg2dd0dp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrrznovhv5dsdg2dd0dp.png" alt="Image description" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>eclipse</category>
      <category>aws</category>
      <category>efs</category>
      <category>coding</category>
    </item>
    <item>
      <title>AWS Well-Architected Framework Review: Empowering Healthcare Industry</title>
      <dc:creator>Arunasri Maganti</dc:creator>
      <pubDate>Tue, 02 Jul 2024 14:45:09 +0000</pubDate>
      <link>https://forem.com/techpartner/aws-well-architected-framework-review-empowering-healthcare-industry-3hg</link>
      <guid>https://forem.com/techpartner/aws-well-architected-framework-review-empowering-healthcare-industry-3hg</guid>
<description>&lt;p&gt;Technology is quintessential in the evolving field of healthcare and life sciences to elevate patient care, automate operations, and advance medical research. Handling and enhancing these technological systems can be daunting. This is where the AWS Well-Architected Framework with a Healthcare Lens becomes extremely beneficial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-lens-whitepapers.sort-order=desc&amp;amp;wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-guidance-whitepapers.sort-order=desc"&gt;The AWS Well-Architected Framework Review (WAFR)&lt;/a&gt; is a cloud infrastructure design and review methodology that helps you leverage the unique advantages of the cloud and secure, optimize, and maintain your cloud environments. The WAFR defines six pillars: &lt;strong&gt;Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization and Sustainability.&lt;/strong&gt; Each pillar consists of design principles that embody cloud infrastructure best practices. The six pillars are the criteria for evaluating cloud-based infrastructures and identifying areas that require enhancement.&lt;/p&gt;

&lt;p&gt;During the execution of the Well-Architected Framework Review for healthcare companies, the "Healthcare Lens" incorporates industry specific guidelines, design principles and best practices customized to address the distinctive requirements of the healthcare and life sciences sector. It focuses on – compliance with healthcare regulations, data security, optimizing efficacy in delivering patient care services and managing costs. It also nurtures innovation in medical research endeavors and treatment practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Well-Architected Framework: Healthcare Lens&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational Excellence:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate processes to reduce human error, ensure compliance, and maintain availability of critical healthcare services.&lt;/li&gt;
&lt;li&gt;Key Points to review: Continuous improvement, operational monitoring, quick issue resolution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Protect patient data with HIPAA, GDPR and other compliance frameworks, strong access controls, and encryption.&lt;/li&gt;
&lt;li&gt;Key Points to review: Multi-factor authentication, regular security assessments, updated security protocols.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reliability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure system resilience and quick recovery to minimize patient care disruption.&lt;/li&gt;
&lt;li&gt;Key Points to review: Redundancy, automated recovery, regular disaster recovery drills.&lt;/li&gt;
&lt;li&gt;RPO (Recovery Point Objective): The maximum acceptable amount of data loss measured in time.&lt;/li&gt;
&lt;li&gt;RTO (Recovery Time Objective): The maximum acceptable time to restore the system after a failure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Performance Efficiency:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimize application performance for variable workloads, especially during peak times.&lt;/li&gt;
&lt;li&gt;Key Points to review: Auto-scaling, right-sizing, performance metric reviews.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cost Optimization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manage cloud costs effectively to avoid resource wastage while maintaining quality patient care.&lt;/li&gt;
&lt;li&gt; Nearly a third of cloud spend is wasted, highlighting the need for effective cost management (&lt;a href="https://www.flexera.com/stateofthecloud"&gt;Flexera 2024 State of the Cloud Report&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Key Points to review: FinOps practices, cost allocation tags, regular resource review.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sustainability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support climate goals by reducing the carbon footprint of cloud operations.&lt;/li&gt;
&lt;li&gt;Key Points to review: Optimize energy consumption, use energy-efficient instances, leverage renewable energy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Impact of WAFR Healthcare Lens on various Healthcare Services&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Here are some key examples where applying Healthcare Lens can significantly enhance healthcare services:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Electronic Health Record (EHR) Systems:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Benefits: Enhances data integrity, availability, and security while ensuring compliance with healthcare regulations like HIPAA. Improves scalability and performance to handle large volumes of patient data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Telemedicine and Remote Patient Monitoring:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Benefits: Increases accessibility to healthcare services, particularly in remote areas, and enables continuous health monitoring. Supports timely medical interventions and better chronic disease management.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Health Information Exchanges (HIE):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Benefits: Facilitates secure, real-time data sharing across different healthcare providers, enhancing interoperability and coordination of patient care. Reduces duplication of tests and procedures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Clinical and Research Data Lakes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Benefits: Centralizes clinical and research data, supporting advanced analytics and machine learning. Ensures data privacy and compliance, accelerating medical research and improving data-driven decision-making.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Genomic Data Processing and Analysis:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Benefits: Provides scalable compute resources for high-throughput sequencing, ensuring secure storage and compliance. Accelerates genetic research and personalized medicine initiatives.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. AI/ML in Healthcare:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Benefits: Generative AI and machine learning are being applied to many healthcare workflows, such as predicting health outcomes, improving patient access to care, revenue cycle operations, and provider workflows. The Healthcare Lens provides best practices for adhering to regulatory oversight, design control obligations, and interpretability requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For detailed information on these and other scenarios, refer to the &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/healthcare-industry-lens/scenarios.html"&gt;AWS documentation.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Grave Consequences of Misconfigurations in Cloud Architectures&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Data Breaches:&lt;/strong&gt; Misconfigurations in cloud storage and database settings have led to breaches of millions of patient records, causing significant harm, particularly in healthcare. IBM Security’s 2023 &lt;a href="https://www.ibm.com/reports/data-breach"&gt;Cost of a Data Breach&lt;/a&gt; report found that the average cost of a data breach in healthcare has surged to $11 million, a 53% increase from 2020. This figure far surpasses the cross-industry average of $4.45 million.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Compliance Violations:&lt;/strong&gt; Cloud architectures misconfigured or deployed in the wrong regions can violate regulations like HIPAA and GDPR. These violations can incur very large fines and permanently damage an organization’s credibility and public image. The U.S. Department of Health and Human Services Office for Civil Rights (OCR) resolved several HIPAA violation cases with significant penalties in 2023. (&lt;a href="https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/data/enforcement-highlights/index.html"&gt;HHS.gov&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Disruptions in Services:&lt;/strong&gt; Downtime due to improperly set up cloud resources can affect patient care, a crucial factor for the healthcare sector. A survey conducted by &lt;a href="https://www.logicmonitor.com/resource/outage-impact-survey#:~:text=96%25%20of%20global%20IT%20decision,Downtime%20is%20expensive."&gt;LogicMonitor&lt;/a&gt; revealed that 96% of participants encountered at least one cloud outage in the last three years, with an average downtime of around 7 hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deliverables of the AWS Well Architected Framework Review&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Gap Analysis Report:&lt;/strong&gt; Detailed report on issues in cloud infrastructure and deviations from AWS best practices and compliance requirements.&lt;br&gt;
&lt;strong&gt;2. Recommendations:&lt;/strong&gt; Suggested actions based on their impact on the AWS six pillars, with prioritization in terms of level of risk.&lt;br&gt;
&lt;strong&gt;3. Roadmap to Fix Issues:&lt;/strong&gt; Outlines actions needed to fix the gaps, including timelines and resource requirements while highlighting possible dependencies.&lt;br&gt;
&lt;strong&gt;4. Visibility on Risks:&lt;/strong&gt; Provides clear visibility into risks associated with misconfigurations and non-compliance. This ensures one is fully aware of the consequences and is mindful of these risks.&lt;br&gt;
&lt;strong&gt;5. Continuous Improvement Plan:&lt;/strong&gt; Establishes processes for continuous monitoring, review, and betterment of cloud architecture.&lt;/p&gt;

&lt;p&gt;The Well Architected Framework Review helps healthcare and life sciences organizations identify gaps and offers a plan to address security, compliance and operational issues in their cloud setups. In an industry where the stakes are high, proactive measures to mitigate risks and optimize cloud architectures are essential for long-term success.&lt;/p&gt;

&lt;p&gt;AWS recommends conducting Well-Architected Framework Reviews (WAFRs) regularly to ensure continued alignment of cloud architectures with best practices and business objectives. Reviews should be conducted at least annually or after significant changes to the architecture. Here’s where we come in – &lt;a href="https://www.techpartneralliance.com/"&gt;Techpartner Alliance&lt;/a&gt; is an AWS advanced partner and a certified &lt;a href="https://www.techpartneralliance.com/well-architected-review/"&gt;AWS-Well Architected Review Partner&lt;/a&gt;. This is to say, we are fully equipped to conduct the Well Architected Framework Review, especially with the Healthcare lens for your technological infrastructure. We will partner with you on your journey to build cloud infrastructure in line with the design principles of the six pillars of the Well-Architected Framework. Follow our &lt;a href="https://www.linkedin.com/company/techpartner-alliance/"&gt;LinkedIn Page&lt;/a&gt; and check out our other &lt;a href="https://www.techpartneralliance.com/blogs/"&gt;Blogs&lt;/a&gt; to stay updated on the latest tech trends and AWS Cloud.&lt;/p&gt;

&lt;p&gt;Schedule your complimentary &lt;a href="https://forms.gle/kHJctfwhtxJiCSTH9"&gt;AWS Well-Architected Framework Assessment Now&lt;/a&gt;&lt;/p&gt;

</description>
      <category>healthcare</category>
      <category>aws</category>
      <category>wellarchitected</category>
    </item>
    <item>
      <title>AWS Graviton Migration - Embracing the Path to Modernization</title>
      <dc:creator>Arunasri Maganti</dc:creator>
      <pubDate>Mon, 10 Jun 2024 13:03:44 +0000</pubDate>
      <link>https://forem.com/techpartner/aws-graviton-migration-embracing-the-path-to-modernization-5594</link>
      <guid>https://forem.com/techpartner/aws-graviton-migration-embracing-the-path-to-modernization-5594</guid>
      <description>&lt;p&gt;Companies tend to associate application modernization with drastic transformations, such as migrating from large monolithic applications to microservices. But staying cognizant of obsolete technology and modernizing through advanced architectures such as the &lt;a href="https://aws.amazon.com/ec2/graviton/"&gt;AWS Graviton architecture (Arm processor)&lt;/a&gt; can reveal more nuanced problems, and it keeps your systems current and ready for the performance and cost optimizations that &lt;a href="https://aws.amazon.com/ec2/graviton/getting-started/"&gt;AWS Graviton migration&lt;/a&gt; provides.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trapped with Legacy x86: Between Comfort and Opportunity&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Inertia and Stability: x86 environments are widely perceived as steadfast and easy to understand, so many organizations are reluctant to move to a new operating environment. When everything is functioning smoothly, the urgency for change isn't felt.&lt;/li&gt;
&lt;li&gt;Backward Compatibility: Backward compatibility is one of x86’s key assets, but it is a double-edged sword. Older software keeps running on newer machines without changes, which is useful in the short term but creates a cycle in which organizations stay bound to outdated software, defer improving and enhancing those applications, and expose themselves to risk.
Here's an example of how you might inventory current versions and evaluate the need for upgrades:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy2ubhgytr8ua3fmbiyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy2ubhgytr8ua3fmbiyu.png" alt="Image description" width="789" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Resistance to Change: Moving from an Intel x86 processor to an Arm processor core means entering unfamiliar territory. Compatibility challenges, performance and operational concerns, and the fear of possible slowdowns can make the change daunting. Nonetheless, migration to the AWS Graviton processor is by now a well-established process that many businesses have already completed. With comprehensive support from AWS and specialist partners, the migration can be smooth, greatly reducing these risks.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Risks of Outdated Architectures Lurking in the Shadows&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Security Vulnerabilities: Continuing to run old software versions on x86 architecture exposes an organization to considerable security risk from cyber-attacks and data leakage. Known, unpatched flaws give attackers ready-made vulnerabilities to exploit.&lt;/li&gt;
&lt;li&gt;Performance Degradation: The longer legacy x86 architectures lag behind modern options, the larger the performance gap becomes. Outdated software is heavy, consuming disk space, time, and system resources, and causing slowdowns.&lt;/li&gt;
&lt;li&gt;Compatibility Challenges: As technology progresses, legacy x86 applications become less compatible with modern architectures and programming languages. The result is hard-to-break dependencies that stall technological innovation and advancement.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Journey to Modernization with AWS Graviton&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Architectural Paradigm Shift: Migrating to AWS Graviton means moving from x86 to Amazon’s Arm-based chips, usually with only minor application modifications for the new architecture. Incorporating the &lt;a href="https://www.arm.com/partners/aws"&gt;Arm processor architecture&lt;/a&gt; opens up extra performance and power-efficiency capabilities.&lt;/li&gt;
&lt;li&gt;Leveraging Arm's Power: The Arm architecture has established itself as energy-efficient and high-performing, offering a glimpse of the latest options in computing. AWS Graviton instances let companies unlock Arm's full potential and stay on the cutting edge.&lt;/li&gt;
&lt;li&gt;Security Fortification: Adopting Graviton on AWS improves not only processor performance but also security. Security measures built into the Arm architecture provide a robust defense against cyber threats.&lt;/li&gt;
&lt;/ol&gt;
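
&lt;p&gt;As a quick sanity check during such a migration, you can confirm which architecture a given instance is actually running on. A minimal sketch; the descriptive labels are our own, not AWS terminology:&lt;/p&gt;

```python
import platform

# Map the value of `uname -m` / platform.machine() to a friendly label.
# The label strings are illustrative, not official AWS terminology.
ARCH_LABELS = {
    "aarch64": "arm64 (Graviton-compatible)",
    "arm64": "arm64 (Graviton-compatible)",
    "x86_64": "x86_64 (legacy Intel/AMD)",
    "amd64": "x86_64 (legacy Intel/AMD)",
}

def describe_arch(machine):
    """Return a human-readable label for a machine architecture string."""
    return ARCH_LABELS.get(machine, f"unknown architecture: {machine}")

# On an EC2 instance this reports what you are actually running on.
print(describe_arch(platform.machine()))
```

&lt;p&gt;Running this on a Graviton instance should report arm64, confirming the migration took effect.&lt;/p&gt;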

&lt;p&gt;&lt;strong&gt;A Cost-Effective Modernization Solution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike other application modernization projects, which can be costly and time-consuming, migrating to AWS Graviton offers organizations a cost-effective strategy. A well-planned migration not only saves costs and minimizes non-essential expenditure but also frees up resources, and the migration effort typically pays for itself in the savings it delivers. AWS Graviton processors offer up to 40% better price performance than comparable x86 processors and help you reach your sustainability goals.&lt;/p&gt;
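
&lt;p&gt;A back-of-the-envelope comparison makes the price-performance argument concrete. The hourly rates below are illustrative placeholders, not current AWS prices; check the EC2 pricing page for your region before making decisions:&lt;/p&gt;

```python
# Back-of-the-envelope monthly fleet comparison. The hourly rates below are
# illustrative placeholders, NOT real AWS prices.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate, instance_count):
    """Monthly cost of running a fleet of identical instances 24/7."""
    return hourly_rate * instance_count * HOURS_PER_MONTH

x86_fleet = monthly_cost(0.0850, 10)       # hypothetical x86 instance rate
graviton_fleet = monthly_cost(0.0680, 10)  # hypothetical Graviton instance rate

savings_pct = 100 * (x86_fleet - graviton_fleet) / x86_fleet
print(f"x86: ${x86_fleet:.2f}/mo, Graviton: ${graviton_fleet:.2f}/mo, saving {savings_pct:.0f}%")
```

&lt;p&gt;With these placeholder rates the fleet saves 20% on raw cost alone, before any additional performance headroom is counted.&lt;/p&gt;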

&lt;p&gt;Conclusion: When it comes to application modernization, organizations face a pivotal choice: stay with familiar x86 designs or unlock the full potential of AWS Graviton. While x86 offers the comfort of the tried-and-true, AWS Graviton offers a path to greater performance, optimization, and security. By pursuing a Graviton migration, a company invests in a safer, more efficient technological future.&lt;/p&gt;

&lt;p&gt;As a premier technology partner, &lt;a href="https://www.techpartneralliance.com/"&gt;Techpartner Alliance&lt;/a&gt; is committed to providing optimum customer support throughout your modernization process. Techpartner Alliance is an &lt;a href="https://www.techpartneralliance.com/graviton-arm-processor/"&gt;AWS certified Graviton Service Delivery Partner&lt;/a&gt; and an &lt;a href="https://www.arm.com/partners/catalog/techpartneralliancellc?searchq=techpartner%20&amp;amp;sort=relevancy&amp;amp;numberOfResults=12"&gt;Arm partner&lt;/a&gt;. If you need more information, or have questions about whether AWS Graviton would work for your organization, &lt;strong&gt;we provide consultation services for free.&lt;/strong&gt; Start your journey towards a more modern, productive future now.&lt;/p&gt;

&lt;p&gt;Follow Techpartner’s &lt;a href="https://www.linkedin.com/company/techpartner-alliance/"&gt;LinkedIn Page&lt;/a&gt; for regular updates on the latest tech trends and the AWS cloud!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.techpartneralliance.com/contact-us/"&gt;Schedule Your Complimentary Assessment Now&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>graviton</category>
      <category>aws</category>
      <category>modernization</category>
      <category>arm</category>
    </item>
    <item>
      <title>AWS Cost Optimization: Top 5 Best Practices &amp; Tools</title>
      <dc:creator>Arunasri Maganti</dc:creator>
      <pubDate>Fri, 31 May 2024 13:03:33 +0000</pubDate>
      <link>https://forem.com/techpartner/aws-cost-optimization-top-5-best-practices-tools-59hc</link>
      <guid>https://forem.com/techpartner/aws-cost-optimization-top-5-best-practices-tools-59hc</guid>
      <description>&lt;p&gt;To get the most return on your cloud investment, AWS cost optimization is essential. As AWS continues to gain popularity for the flexible, scalable infrastructure it provides, managing and optimizing costs plays a significant role in sustaining and increasing profitability while improving operational performance. Review your cost optimization approaches from time to time to save money, stay flexible, and choose the right instances for your business. In this blog, we dive into the top 5 AWS cost reduction strategies and AWS cost optimization tools to help you get the most out of your investment in the AWS cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zd4ypddsdzv2vsuqkxj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zd4ypddsdzv2vsuqkxj.jpg" alt="Mind Map for AWS Cost Optimization Strategies (discussed in detail below" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top 5 AWS Cost Optimization Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Right-Sizing Your Instances&lt;/strong&gt;&lt;br&gt;
Right-sizing is the careful assessment of your resource usage so that provisioned capacity matches your actual requirements. Choosing the right instance types for services such as EC2, RDS, and Redshift avoids over-provisioning and saves money. Start by locating underutilized instances, then de-provision or downsize them.&lt;/p&gt;
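
&lt;p&gt;The first step, locating underutilized instances, can be sketched as follows. This is a minimal sketch with made-up instance IDs and CPU samples; in practice you would pull these metrics from Amazon CloudWatch:&lt;/p&gt;

```python
# Flag instances whose average CPU utilization stays below a threshold.
# Instance IDs and samples are made-up example data, not real metrics.

def underutilized(cpu_samples, threshold_pct=10.0):
    """Return instance IDs whose average CPU utilization is under the threshold."""
    return sorted(
        instance_id
        for instance_id, samples in cpu_samples.items()
        if threshold_pct > sum(samples) / len(samples)
    )

samples = {
    "i-0aaa": [3.1, 2.8, 4.0, 3.5],      # mostly idle: a downsize candidate
    "i-0bbb": [55.0, 61.2, 48.9, 70.3],  # healthy utilization
}
print(underutilized(samples))  # ['i-0aaa']
```

&lt;p&gt;Each flagged instance is a candidate for a smaller instance type or for termination.&lt;/p&gt;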

&lt;p&gt;&lt;strong&gt;2. Save money by using savings plans &amp;amp; reserved instances&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/savingsplans/"&gt;AWS Savings Plans&lt;/a&gt; provides up to 72% more cost savings than on-demand pricing on AWS EC2 instances, Fargate, and Lambda. This is because the more you commit to using it, either for 1 or 3 years consistently, you will qualify for more savings. &lt;a href="https://aws.amazon.com/ec2/pricing/reserved-instances/"&gt;AWS EC2 Reserved Instances&lt;/a&gt; are 1 or 3 year term commitments getting up to 75% off the on-demand price but for specific instances in specific regions and mostly useful for predictable loads. But you can’t decrease the instance during this period, and increasing the instance will be charged at on-demand pricing. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Leveraging Spot Instances&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/ec2/spot/"&gt;AWS EC2 Spot instances&lt;/a&gt; are unused AWS instances available at a bid level thus achieving discounts of 90% on an on-demand instance price. These are best used in batch processes, stateless website services, high-performance computing tasks, or big data applications and applications that can be interrupted. However, AWS can allow someone else to bid and take the instance back within two minutes if the someone has bid higher than you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Optimize Storage Costs&lt;/strong&gt;&lt;br&gt;
AWS S3 cost optimization keeps storage costs as low as possible while ensuring data remains readily accessible. Use the &lt;a href="https://aws.amazon.com/s3/storage-classes/intelligent-tiering/"&gt;Amazon S3 Intelligent-Tiering&lt;/a&gt; storage class to move data between tiers automatically based on access patterns. For long-term archiving of infrequently accessed data, use &lt;a href="https://aws.amazon.com/s3/storage-classes/glacier/"&gt;Amazon S3 Glacier&lt;/a&gt; or, even more cost-efficiently, &lt;a href="https://aws.amazon.com/s3/storage-classes/glacier/"&gt;Amazon S3 Glacier Deep Archive&lt;/a&gt;. For block storage, select the EBS volume type that matches the application’s requirements, and make sure the ‘Delete on termination’ checkbox is checked to prevent further charges after EC2 instances are terminated.&lt;/p&gt;
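
&lt;p&gt;Tiering rules like these are typically expressed as an S3 lifecycle configuration. A minimal sketch; the rule ID, prefix, and day thresholds are hypothetical, and the dict has the shape accepted by boto3's put_bucket_lifecycle_configuration:&lt;/p&gt;

```python
# Sketch of an S3 lifecycle configuration implementing tiering by object age.
# The rule ID, prefix, and day thresholds below are illustrative choices.

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-objects",    # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # hypothetical prefix to manage
            "Transitions": [
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                {"Days": 180, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

# With credentials configured, this would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
print([t["StorageClass"] for t in lifecycle_config["Rules"][0]["Transitions"]])
# ['INTELLIGENT_TIERING', 'GLACIER', 'DEEP_ARCHIVE']
```

&lt;p&gt;Once attached to a bucket, S3 applies the transitions automatically and no per-object housekeeping is needed.&lt;/p&gt;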

&lt;p&gt;&lt;strong&gt;5. AWS Auto Scaling for Cost Optimization&lt;/strong&gt;&lt;br&gt;
Use &lt;a href="https://aws.amazon.com/autoscaling/"&gt;AWS Auto Scaling Groups&lt;/a&gt; (ASGs) to adjust the number of EC2 instances up or down based on utilization and your defined scaling policies. Keep both performance and cost optimized by regularly reviewing and updating those policies.&lt;/p&gt;
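
&lt;p&gt;Target tracking, a common ASG scaling policy, keeps a metric near a target by resizing the group roughly in proportion to the deviation. A minimal sketch of that arithmetic with illustrative numbers:&lt;/p&gt;

```python
import math

# Target-tracking sketch: desired capacity scales the current size by the
# ratio of observed metric to target, clamped to the group's min/max bounds.
# All values below are illustrative.

def desired_capacity(current, observed_metric, target_metric, min_size, max_size):
    """Compute the new instance count a target-tracking policy would aim for."""
    raw = math.ceil(current * observed_metric / target_metric)
    return max(min_size, min(max_size, raw))

# CPU at 80% against a 50% target on 4 instances: scale out
print(desired_capacity(4, 80.0, 50.0, min_size=2, max_size=10))  # 7
# CPU at 20% against a 50% target: scale in, clamped at min_size
print(desired_capacity(4, 20.0, 50.0, min_size=2, max_size=10))  # 2
```

&lt;p&gt;The min/max clamp is why reviewing the policy bounds regularly matters: a stale max_size caps savings-friendly scale-in just as much as scale-out.&lt;/p&gt;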

&lt;p&gt;&lt;strong&gt;Top 5 AWS Cloud Cost Optimization Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. AWS Cost Explorer&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/"&gt;AWS Cost Explorer&lt;/a&gt; is a tool that enables users to analyze, review, control, and contain their expenditures and usage patterns on the platform over time. You can generate various reports, customize them and filter or group by various dimensions and costs. Cost Explorer forecasts future costs using historical data and alerts you to cost anomalies. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AWS Budgets&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/aws-cost-management/aws-budgets/"&gt;AWS Budgets&lt;/a&gt; lets you create cost and usage budgets, and informs you when a budgetary ceiling has been breached. Some of the features include having set maximum allowed cost, maximum allowed usage, maximum allowed utilization on Reserved Instance/Savings Plan, and a notification when the budgets have been surpassed. This tool can be integrated with AWS Cost Explorer, for richer visual cost analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. AWS Trusted Advisor&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/premiumsupport/technology/trusted-advisor/"&gt;AWS Trusted Advisor&lt;/a&gt; is an on-demand resource that helps you to maximize your AWS usage by giving real-time recommendations. Cost Controller includes Unused Capacity which helps in identifying underutilized or idle resources. Reserved Instance Purchase Recommendations provide a guide to the appropriate purchase of Reserved Instances, and lastly Cost Saving Suggestions for identifying more savings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Amazon Web Services Cost and Usage Report (CUR)&lt;/strong&gt;&lt;br&gt;
The &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/"&gt;AWS Cost and Usage Report&lt;/a&gt; provides the most detailed cost and usage data AWS offers. Its main appeal is precise, fine-grained cost and usage information, delivered in reports that can be customized to exactly what the end user needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. AWS Compute Optimizer&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/compute-optimizer/"&gt;AWS Compute Optimizer&lt;/a&gt; is a service that enables you to identify the optimal AWS services to use for your instances, thereby minimizing cost and improving efficiency. This includes daily and weekly usage analytics, proposing the most suitable EC2 instance types, Auto Scaling Groups, and Lambda functions optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About Techpartner Alliance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.techpartneralliance.com/"&gt;Techpartner Alliance&lt;/a&gt; specializes in AWS and was established in 2017 by &lt;a href="https://www.linkedin.com/in/ravindrakatti/"&gt;Ravindra Katti&lt;/a&gt;, an AWS ex-seller, and &lt;a href="https://www.linkedin.com/in/prasadwani/"&gt;Prasad Wani&lt;/a&gt;, an AWS cloud architect. In our capacity as a reviewed partner of the &lt;a href="https://www.techpartneralliance.com/well-architected-review/"&gt;Well-Architected Framework Review (WAR)&lt;/a&gt;, we can perform the WAR which seeks to evaluate various architectural weaknesses in your ecosystem and then establish and implement a WAR for AWS cost management. Additionally, in our capacity as a certified service delivery partner of &lt;a href="https://www.techpartneralliance.com/graviton-arm-processor/"&gt;AWS Graviton&lt;/a&gt;, we can assess your workloads to migrate Graviton processors which provide up to a 40% price-performance saving compared to Intel x86 processors. &lt;/p&gt;

&lt;p&gt;Follow our &lt;a href="https://www.linkedin.com/company/techpartner-alliance/"&gt;LinkedIn page&lt;/a&gt; for regular updates on the latest tech trends and the AWS cloud!&lt;/p&gt;





</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>cloudpractitioner</category>
    </item>
  </channel>
</rss>
