<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Cloud Tech</title>
    <description>The latest articles on Forem by Cloud Tech (@cloudtech).</description>
    <link>https://forem.com/cloudtech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3977%2F085190b8-c58c-4121-97dc-b2e70bf0fc7b.jpeg</url>
      <title>Forem: Cloud Tech</title>
      <link>https://forem.com/cloudtech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cloudtech"/>
    <language>en</language>
    <item>
      <title>Best practices for ML lifecycle stages</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Tue, 08 Feb 2022 04:20:24 +0000</pubDate>
      <link>https://forem.com/cloudtech/best-practices-for-ml-lifecycle-stages-4g9b</link>
      <guid>https://forem.com/cloudtech/best-practices-for-ml-lifecycle-stages-4g9b</guid>
<description>&lt;p&gt;Building a machine learning model is an iterative process. For a successful deployment, most of the steps are repeated several times to achieve optimal results. The model must be maintained after deployment and adapted to a changing environment. Let’s look at the details of the lifecycle of a machine learning model.&lt;/p&gt;

&lt;h1&gt;Data collection&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The first step in the development of ML workloads is identifying the data needed for training and for evaluating the performance of an ML model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the cloud environment, a data lake usually serves as a centralized repository that enables you to store all structured and unstructured data regardless of scale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS provides a number of ways to ingest data, both in bulk and in real-time, from a wide variety of sources. You can use services such as AWS Direct Connect and AWS Storage Gateway to move data from on-premises environments, and tools like AWS Snowball and AWS Snowmobile for moving data at scale. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also use Amazon Kinesis to collect and ingest streaming data, and services such as AWS Lake Formation and Amazon HealthLake to quickly set up data lakes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The following best practices are recommended for data collection and integration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detail and document&lt;/strong&gt; the various data sources and the steps needed to extract the data. This can be achieved using AWS Glue Catalog, which automatically discovers and profiles your data, and generates ETL code to transform your source data to target schemas. AWS also recently announced AWS Glue DataBrew, which provides a visual data preparation interface that makes it easy for data analysts and data scientists to clean and normalize data to prepare it for analytics and ML.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define data governance&lt;/strong&gt; — Who owns the data, who has access, the appropriate usage of the data, and the ability to access and delete specific pieces of data on demand. Data governance and access management can be handled using AWS Lake Formation and AWS Glue Catalog.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
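&lt;p&gt;As a small illustration of the collection stage, a common convention when landing ingested records in an S3-based data lake is a Hive-style partitioned key layout, which crawlers such as AWS Glue can register as table partitions. The bucket layout, source name, and record ID below are hypothetical; this is a sketch of the convention, not a prescribed API:&lt;/p&gt;

```python
from datetime import datetime, timezone

def data_lake_key(source: str, event_time: datetime, record_id: str) -> str:
    """Build a Hive-style partitioned object key (year=/month=/day=),
    a layout that data-lake crawlers can register as partitions."""
    return (
        f"raw/{source}/"
        f"year={event_time.year}/month={event_time.month:02d}/day={event_time.day:02d}/"
        f"{record_id}.json"
    )

key = data_lake_key("clinical-events", datetime(2022, 2, 8, tzinfo=timezone.utc), "rec-001")
print(key)  # raw/clinical-events/year=2022/month=02/day=08/rec-001.json
```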

&lt;h1&gt;Data integration and preparation&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An ML model is only as good as the data used to train it; the effect of poor data is often summarized as “garbage in, garbage out”. Once the data has been collected, the next step is to integrate, prepare, and annotate it. AWS provides a number of services that data engineers and data scientists can use to prepare their data for ML model training.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In addition to the services such as AWS Glue and Amazon EMR, which provide traditional ETL capabilities, AWS also provides tools as part of Amazon SageMaker, designed specifically for data scientists. These include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon SageMaker Ground Truth&lt;/strong&gt;, which can be used for data labeling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SageMaker Data Wrangler&lt;/strong&gt;, which simplifies the process of data preparation and feature engineering&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SageMaker Feature Store&lt;/strong&gt;, which enables you to store, update, retrieve, and share ML features&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Additionally, SageMaker Processing allows you to run your pre-processing, post-processing, and model evaluation workloads in a fully managed environment.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;We recommend the following best practices for data integration and preparation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Track data lineage&lt;/strong&gt; so that the location and source of the data are tracked and known during further processing. Using AWS Glue, you can visually map the lineage of your data to understand the various data sources and transformation steps that the data has been through. You can also use metadata provided by AWS Glue Catalog to establish data lineage. The SageMaker Data Wrangler Data Flow UI provides a visual map of the end-to-end data lineage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioning data sources and processing workflows&lt;/strong&gt; — Versioning data sources and processing workflows enables you to maintain an audit trail of the changes being made to your data integration processes over time, and recreate previous versions of your data pipelines. AWS Glue provides versioning capabilities as part of AWS Glue Catalog, and AWS Glue Schema Registry (for streaming data sources). AWS Glue and Amazon EMR jobs can be versioned using a version control system such as AWS CodeCommit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate data integration deployment pipelines&lt;/strong&gt; — Minimize human touch points in deployment pipelines to ensure that the data integration workloads are consistently and repeatably deployed, using a pipeline that defines how code is promoted from development to production. AWS Developer Tools allow you to build CI/CD pipelines to promote your code to a higher environment.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
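&lt;p&gt;The lineage practice above can be approximated even without Glue by recording, alongside every transformation, where the data came from and a content hash of what came out. A minimal sketch follows; the field names and URIs are illustrative, not a Glue schema:&lt;/p&gt;

```python
import hashlib
from datetime import datetime, timezone

def lineage_entry(source_uri: str, transform: str, output_bytes: bytes) -> dict:
    """Record where the data came from, which transformation produced it,
    and a content hash so the exact output can be verified later."""
    return {
        "source": source_uri,
        "transform": transform,
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

entry = lineage_entry("s3://example-bucket/raw/events.csv", "drop_null_rows", b"id,value\n1,42\n")
print(entry["transform"], entry["output_sha256"][:12])
```

Appending one such entry per pipeline step yields a simple, auditable trail of how each dataset was produced.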

&lt;h1&gt;Feature engineering&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Feature engineering involves the selection and transformation of data attributes or variables during the development of a predictive model. Amazon SageMaker Data Wrangler can be used for selection, extraction, and transformation of features. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can export your data flow, designed in Data Wrangler, as a Data Wrangler Job, or export to SageMaker Pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ETL services like Amazon EMR and AWS Glue can be used for feature extraction and transformation. Finally, you can use Amazon SageMaker Feature Store to store, update, retrieve and share ML features. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The following best practices are recommended for feature engineering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ensure feature standardization and consistency&lt;/strong&gt; — It is common to see different definitions of similar features across a business. Amazon SageMaker Feature Store allows for standardization of features, and helps to ensure consistency between model training and inference.&lt;/li&gt;
&lt;li&gt;If you are using SageMaker for feature engineering, you can use SageMaker Lineage Tracking to store and track information about the feature engineering steps (along with other ML workflow steps performed in SageMaker).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
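&lt;p&gt;The train/inference consistency that a feature store guarantees can be illustrated with a tiny stdlib-only sketch: the transformation parameters are fit once on training data and then reused, unchanged, at inference time. The function names and values are made up for illustration:&lt;/p&gt;

```python
import statistics

def fit_standardizer(values):
    """Learn standardization parameters once, from the training data."""
    return {"mean": statistics.fmean(values), "std": statistics.pstdev(values)}

def transform(values, params):
    """Apply the *same* stored parameters at training time and at inference
    time -- the consistency a feature store helps guarantee."""
    return [(v - params["mean"]) / params["std"] for v in values]

params = fit_standardizer([10.0, 12.0, 14.0, 16.0])   # training time
print(transform([13.0], params))                       # inference time: [0.0]
```

Refitting the standardizer on production data instead of reusing `params` would silently skew every feature, which is exactly the inconsistency a shared feature store prevents.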

&lt;h1&gt;Model training&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The model training step involves the selection of appropriate ML algorithms, and using the input features to train an ML model. Along with the training data (provided as input features prepared during the feature engineering stage), you generally provide model parameters to optimize the training process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To measure how well a model is performing during training, several metrics are used, such as training error and prediction accuracy. The metrics reported by the algorithm depend on the business problem and the ML technique being used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Certain model parameters, called hyperparameters, can be tuned to control the behavior of the model and the resulting model architecture. Model training typically involves an iterative process of training a model, evaluating its performance against relevant metrics, and tuning the hyperparameters in search of the optimal model architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This process is generally referred to as hyperparameter optimization. AWS recommends the following best practices during the model training step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Follow a model testing plan and track your model experiments&lt;/strong&gt; — Amazon SageMaker Experiments enables you to organize, track, compare, and evaluate ML experiments and model versions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Take advantage of managed services for model tuning&lt;/strong&gt; — SageMaker Automatic Model Tuning and SageMaker Autopilot help ML practitioners explore a large number of combinations to automatically and quickly zoom in on high-performance models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor your training metrics to ensure your model training is achieving the desired results&lt;/strong&gt; — SageMaker Debugger, which is designed to profile and debug training jobs, can be used for this purpose to improve the performance of ML models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ensure traceability of model training as part of the ML lifecycle&lt;/strong&gt; — SageMaker Lineage Tracking can be used for this purpose.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
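&lt;p&gt;Hyperparameter optimization with experiment tracking can be sketched in a few lines: try each candidate value, record every run, and keep the one with the best validation metric. This is the loop that SageMaker Experiments and Automatic Model Tuning automate at scale; the toy dataset, the ridge model, and the regularization values here are made up for illustration:&lt;/p&gt;

```python
def train(xs, ys, lam):
    """Closed-form ridge fit for a one-dimensional linear model y = w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(xs, ys, w):
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
val_x, val_y = [4.0, 5.0], [8.1, 9.8]

experiments = []  # track every run, as an experiment tracker would
for lam in [0.0, 0.1, 1.0, 10.0]:
    w = train(train_x, train_y, lam)
    experiments.append({"lambda": lam, "w": w, "val_mse": mse(val_x, val_y, w)})

best = min(experiments, key=lambda e: e["val_mse"])
print(best["lambda"])  # 0.1
```

Keeping the full `experiments` list, rather than only the winner, is what makes the tuning run auditable and comparable later.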

&lt;h1&gt;Model validation&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After the model has been trained, evaluate it to determine if its performance and accuracy will enable you to achieve your business goals. Data scientists typically generate multiple models using different methods, and evaluate the effectiveness of each model. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The evaluation results inform the data scientists’ decision to fine-tune the data or algorithms to further improve the model performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;During fine-tuning, data scientists might decide to repeat the data preparation, feature engineering, and model training steps. AWS recommends the following best practices for model validation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep track of the experiments performed to train models using different sets of features and algorithms&lt;/strong&gt; — Amazon SageMaker Experiments, as discussed in the Model training section, can help keep track of different training iterations and evaluation results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain different versions of the models and their associated metadata such as training and validation metrics in a model repository&lt;/strong&gt; — SageMaker Model Registry enables you to catalog models for production, manage model versions, manage approval status of the models, and associate metadata, such as the training metrics of a model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency about how a model arrives at its predictions is critical for regulators who require insights into how a model makes a decision&lt;/strong&gt; — AWS recommends that you use model explainability tools, which can help explain how ML models make predictions. SageMaker Clarify provides the necessary tools for model explainability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Biases in the data can introduce bias into ML algorithms&lt;/strong&gt;, which can significantly limit the effectiveness of the models. This is of special significance in healthcare and life sciences, because poorly performing or biased ML models can have a significant negative impact in the real world. SageMaker Clarify can be used to perform post-training bias analysis on ML models.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
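&lt;p&gt;One of the simplest post-training bias checks is comparing positive-prediction rates across groups, similar in spirit to Clarify's difference in positive proportions in predicted labels (DPPL). The sketch below uses made-up binary predictions; it illustrates the metric, not Clarify's actual implementation:&lt;/p&gt;

```python
def positive_rate(predictions):
    """Fraction of binary predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def positive_proportion_difference(preds_group_a, preds_group_b):
    """Gap in positive-prediction rates between two groups; values far
    from 0 suggest the model favors one group over the other."""
    return positive_rate(preds_group_a) - positive_rate(preds_group_b)

group_a = [1, 1, 0, 1]   # 75% predicted positive
group_b = [1, 0, 0, 1]   # 50% predicted positive
print(positive_proportion_difference(group_a, group_b))  # 0.25
```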

&lt;h1&gt;Additional considerations for AI/ML compliance&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Additional considerations include:

&lt;ul&gt;
&lt;li&gt;Auditability&lt;/li&gt;
&lt;li&gt;Traceability&lt;/li&gt;
&lt;li&gt;Reproducibility&lt;/li&gt;
&lt;li&gt;Model monitoring&lt;/li&gt;
&lt;li&gt;Model interpretability&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;Auditability&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Another consideration for a well-governed and secure ML environment is having a robust and transparent audit trail that logs all access and changes to the data and models, such as a change in the model configuration or the hyperparameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CloudTrail logs, nearly continuously monitors, and retains account activity related to actions across your AWS infrastructure. CloudTrail logs every AWS API call, and provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another service, AWS Config, enables you to nearly continuously monitor and record configuration changes of your AWS resources. More broadly, in addition to the logging and audit capabilities, AWS recommends a defense in depth approach to security, applying security at every level of your application and environment. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CloudTrail and AWS Config can be used as Detective controls responsible for identifying potential security threats or incidents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As the Detective controls identify potential threats, you can set up a corrective control to respond to and mitigate the potential impact of security incidents. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon CloudWatch is a monitoring service for AWS resources, which can trigger CloudWatch Events to automate security responses. For details on setting up Detective and corrective controls, refer to Logging and Monitoring in AWS Glue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Traceability&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Effective model governance requires a detailed understanding of the data and data transformations used in the modeling process, in addition to nearly continuous tracking of all model development iterations. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is important to keep track of which dataset was used, what transformations were applied to the data, where the dataset was stored, and what type of model was built.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additional variables, such as hyperparameters, model file location, and model training metadata also need to be tracked. Any post-processing steps that have been applied to remove biases from predictions during batch inference also need to be recorded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, if a model is promoted to production for inference, there needs to be a record of model files/weights used in production, and model performance in production needs to be monitored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versioning is one aspect of traceability that gives you visibility into which components or artifacts make their way into production, and how they evolve over time through updates and patches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Three key mechanisms provide versioning for the different types of components involved in developing an ML solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using software version control through tools such as GitHub to keep track of changes made to processing, training, and inference scripts. AWS provides a native version control system in the form of AWS CodeCommit that can be used for this purpose. Alternatively, you can use your own Git implementation.&lt;/li&gt;
&lt;li&gt;Using a model versioning capability to keep track of different iterations of models being created as part of iterative training runs. SageMaker Model Registry, which is natively integrated with the wider set of SageMaker features, can be used for this purpose.&lt;/li&gt;
&lt;li&gt;Using a container repository to keep track of different container versions, which are used in SageMaker for processing, training, and inference. SageMaker natively integrates with Amazon ECR, which maintains a version of every container update.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
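&lt;p&gt;A lightweight way to answer the traceability question "exactly which model weights are serving in production?" is to record a content hash of the serialized model artifact at deployment time. The sketch below uses made-up artifact bytes; a model registry records this kind of fingerprint alongside richer metadata:&lt;/p&gt;

```python
import hashlib

def model_fingerprint(artifact_bytes: bytes) -> str:
    """Content hash of a serialized model artifact; recording this at
    deployment time gives an auditable record of the exact weights in use."""
    return hashlib.sha256(artifact_bytes).hexdigest()

deployed = model_fingerprint(b"\x00\x01fake-weights-v3")
audited = model_fingerprint(b"\x00\x01fake-weights-v3")
print(deployed == audited)  # True: same artifact, same fingerprint
```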

&lt;h2&gt;Reproducibility&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reproducibility in ML is the ability to produce identical model artifacts and results by saving enough information about every phase in the ML workflow, including the dataset, so that it can be reproduced at a later date or by different stakeholders, with the least possible randomness in the process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For GxP compliance, customers may need to reproduce and validate every stage of the ML workflow to reduce the risk of errors, and ensure the correctness and robustness of the ML solution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unlike traditional software engineering, ML is experimental, highly iterative, and consists of multiple phases that make reproducibility challenging. It all starts with the data. It’s important to ensure that the dataset is reproducible at each phase in the ML workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Variability in the dataset could arise due to randomness in subsampling methods, creating train/validation/test splits and dataset shuffling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Variability could also arise due to changes in the data processing, feature engineering, and post-processing scripts. Inconsistencies in any of these phases can lead to an irreproducible solution. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Methods that can help ensure reproducibility of the dataset as well as the data processing scripts include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dataset versioning&lt;/li&gt;
&lt;li&gt;Using a fixed seed value across all the libraries in the code base&lt;/li&gt;
&lt;li&gt;Unit testing code to ensure that the outputs remain the same for a given set of inputs&lt;/li&gt;
&lt;li&gt;Version controlling the code base&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The core components of the ML workflow are the ML models, which consist of a combination of model parameters and hyperparameters that need to be tracked to ensure consistent and reproducible results.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;In addition to these parameters, the stochastic (uncertain or random) nature of many ML algorithms adds a layer of complexity, because the same dataset along with the same code base could produce different outputs.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;This is more pronounced in deep learning algorithms, which make efficient approximations for complex computations. Their results can be approximately reproduced with the same dataset, the same code base, and the same algorithm.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;In addition to the algorithms, the underlying hardware and software environment configurations could impact reproducibility as well. Methods that can help ensure reproducibility and limit the number of sources of nondeterministic behavior in ML modeling include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistency in initializing model parameters&lt;/li&gt;
&lt;li&gt;Standardizing the infrastructure (CPUs and GPUs)&lt;/li&gt;
&lt;li&gt;Configuration management to ensure consistency in the runtimes, libraries and frameworks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;When the solutions aren't fully deterministic, the need for quantifying the uncertainty in model prediction increases. Uncertainty quantification (UQ) plays a pivotal role in the reduction of uncertainties during optimization and decision making, and promotes transparency in the GxP compliance process. &lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;A review of uncertainty quantification techniques, applications, and challenges in deep learning is presented in A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;A few methods for uncertainty quantification include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensemble learning techniques such as Deep Ensembles, which are generalizable across ML models and can be integrated into existing ML workflows.&lt;/li&gt;
&lt;li&gt;Temperature scaling, which is an effective post-processing technique to restore network calibration, such that the confidence of the predictions matches the true likelihood. Refer to a reference paper on calibrating neural networks.&lt;/li&gt;
&lt;li&gt;Bayesian neural networks with Monte Carlo dropout.
For more information about these methods, refer to Methods for estimating uncertainty in deep learning.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Amazon SageMaker ML Lineage Tracking provides the ability to create and store information about each phase in the ML workflow. In the context of GxP compliance, this can help you establish model governance by tracking model lineage artifacts for auditing and compliance verification. &lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;SageMaker ML Lineage Tracking tracks entities that are automatically created by SageMaker, or custom created by customers, to help maintain the representation of all elements in each phase of the ML workflow.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
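&lt;p&gt;The dataset-reproducibility practices listed above (versioning, fixed seeds, unit-tested code) come down to making every random choice deterministic. A minimal sketch of a seed-fixed train/test split, using a local random generator rather than hidden global state:&lt;/p&gt;

```python
import random

def make_split(n_samples, test_fraction=0.2, seed=42):
    """Shuffle indices and split with a fixed seed so the exact
    train/test partition can be recreated later, or by other teams."""
    rng = random.Random(seed)          # local RNG: no hidden global state
    indices = list(range(n_samples))
    rng.shuffle(indices)
    cut = int(n_samples * (1 - test_fraction))
    return indices[:cut], indices[cut:]

first = make_split(10)
second = make_split(10)
print(first == second)  # True: same seed, identical split
```

The same pattern applies to subsampling and dataset shuffling; a unit test asserting that two calls with the same seed agree catches accidental nondeterminism early.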
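&lt;p&gt;Of the uncertainty quantification methods listed above, temperature scaling is simple enough to sketch directly: divide the network's logits by a temperature T before the softmax, so an overconfident model's probabilities better match the true likelihood. The logits and the T value below are made up; in practice T is fit on a held-out validation set:&lt;/p&gt;

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: T > 1 softens an overconfident
    network's probabilities; T = 1 recovers the standard softmax."""
    scaled = [z / temperature for z in logits]
    peak = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(z - peak) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.5]                          # an overconfident prediction
uncalibrated = softmax(logits)
calibrated = softmax(logits, temperature=2.0)
print(max(uncalibrated) > max(calibrated))        # True: peak confidence is reduced
```

Because scaling by a single T does not change the ordering of the logits, the predicted class stays the same; only the reported confidence changes.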

&lt;h2&gt;Model interpretability&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Interpretability is the degree to which a human can understand the cause of a decision. The higher the interpretability of an ML model, the easier it is to comprehend the model’s predictions. Interpretability facilitates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understanding&lt;/li&gt;
&lt;li&gt;Debugging and auditing ML model predictions&lt;/li&gt;
&lt;li&gt;Bias detection to ensure fair decision making&lt;/li&gt;
&lt;li&gt;Robustness checks to ensure that small changes in the input do not lead to large changes in the output&lt;/li&gt;
&lt;li&gt;Methods that provide recourse for those who have been adversely affected by model predictions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;In the context of GxP compliance, model interpretability provides a mechanism to ensure the safety and effectiveness of ML solutions by increasing the transparency around model predictions, as well as the behavior of the underlying algorithm.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Promoting transparency is a key aspect of the patient-centered approach, and is especially important for AI/ML-based SaMD, which may learn and change over time.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;There is a tradeoff between what the model has predicted (model performance) and why the model has made such a prediction (model interpretability).&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;For some solutions, a high model performance is sufficient; in others, the ability to interpret the decisions made by the model is key. The demand for interpretability increases when there is a large cost for incorrect predictions, especially in high-risk applications.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5zls6p7wod5p5k8665f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5zls6p7wod5p5k8665f.png" alt="Image description" width="735" height="442"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Trade-off between performance and model interpretability&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Based on the model complexity, methods for model interpretability can be classified into intrinsic analysis and post hoc analysis.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intrinsic analysis&lt;/strong&gt; can be applied to interpret models that have low complexity (simple relationships between the input variables and the predictions). These models are based on:&lt;/li&gt;
&lt;li&gt;Algorithms, such as linear regression, where the prediction is the weighted sum of the inputs&lt;/li&gt;
&lt;li&gt;Decision trees, where the prediction is based on a set of if-then rules&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The simple relationship between the inputs and output results in high model interpretability, but often leads to lower model performance, because the algorithms are unable to capture complex non-linear interactions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Post hoc analysis&lt;/strong&gt; can be applied to interpret simpler models, as described earlier, as well as more complex models, such as neural networks, which have the ability to capture non-linear interactions. These methods are often model-agnostic and provide mechanisms to interpret a trained model based on the inputs and output predictions. Post hoc analysis can be performed at a local level, or at a global level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local methods&lt;/strong&gt; enable you to zoom in on a single data point and observe the behavior of the model in that neighborhood. They are an essential component for debugging and auditing ML model predictions. Examples of local methods include:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local Interpretable Model-Agnostic Explanations (LIME)&lt;/strong&gt;, which provides a sparse, linear approximation of the model behavior around a data point&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SHapley Additive exPlanations (SHAP)&lt;/strong&gt;, a game theoretic approach based on Shapley values which computes the marginal contribution of each input variable towards the output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Counterfactual explanations&lt;/strong&gt;, which describe the smallest change in the input variables that causes a change in the model’s prediction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrated gradients&lt;/strong&gt;, which provide mechanisms to attribute the model’s prediction to specific input variables&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Saliency maps&lt;/strong&gt;, which are a pixel attribution method to highlight relevant pixels in an image&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global methods&lt;/strong&gt; enable you to zoom out and provide a holistic view that explains the overall behavior of the model. These methods are helpful for verifying that the model is robust and has the least possible bias to allow for fair decision making. Examples of global methods include:&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregating local explanations&lt;/strong&gt;, as defined previously, across multiple data points&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permutation feature importance&lt;/strong&gt;, which measures the importance of an input variable by computing the change in the model’s prediction due to permutations of the input variable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partial dependence plots&lt;/strong&gt;, which plot the relationship and the marginal effect of an input variable on the model’s prediction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Surrogate methods&lt;/strong&gt;, which are simpler interpretable models that are trained to approximate the behavior of the original complex model&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is recommended to start the ML journey with a simple model that is both inherently interpretable and provides sufficient model performance. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In later iterations, if you need to improve the model performance, AWS recommends increasing the model complexity and leveraging post hoc analysis methods to interpret the results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Selecting both a local method and a global method gives you the ability to interpret the behavior of the model for a single data point, as well as across all data points in the dataset. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is also essential to validate the stability of model explanations, because methods in post hoc analysis are susceptible to adversarial attacks, where small perturbations in the input could result in large changes in the output prediction, and therefore in the model explanations as well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon SageMaker Clarify provides tools to detect bias in ML models and understand model predictions. SageMaker Clarify uses a model-agnostic feature attribution approach and provides a scalable and efficient implementation of SHAP. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To run a SageMaker Clarify processing job that creates explanations for ML model predictions, refer to Explainability and bias detection with Amazon SageMaker Clarify.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
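&lt;p&gt;As a concrete instance of a global post hoc method, permutation feature importance can be sketched in a few lines: permute one feature's column to break its link with the target, and measure how much the model's error grows. The toy model and data below are made up, and a fixed cyclic shift stands in for the repeated random shuffles used in practice:&lt;/p&gt;

```python
def mse(model, X, y):
    """Mean squared error of `model` (a callable row -> prediction)."""
    return sum((model(row) - target) ** 2 for row, target in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature):
    """Importance of one feature = error increase after permuting that
    feature's column.  A cyclic shift stands in for random shuffling."""
    base = mse(model, X, y)
    column = [row[feature] for row in X]
    column = column[1:] + column[:1]              # a simple fixed permutation
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, column)]
    return mse(model, X_perm, y) - base

# Toy model that only uses feature 0: y = 3 * x0
model = lambda row: 3.0 * row[0]
X = [[1.0, 5.0], [2.0, 1.0], [3.0, 9.0], [4.0, 2.0]]
y = [3.0, 6.0, 9.0, 12.0]

print(permutation_importance(model, X, y, feature=0))  # 27.0: permuting it hurts
print(permutation_importance(model, X, y, feature=1))  # 0.0: the model ignores it
```

The unused feature scores exactly zero, which is the sanity check that makes this method useful for verifying model robustness.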
&lt;h2&gt;Model monitoring&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;After an ML model has been deployed to a production environment, it is important to monitor the model based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure&lt;/strong&gt; — To ensure that the model has adequate compute resources to support inference workloads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt; — To ensure that the model predictions do not degrade over time&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring model performance is more challenging, because the underlying patterns in the dataset are constantly evolving, which causes a static model to underperform over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In addition, obtaining ground truth labels for data in a production environment is expensive and time consuming. An alternative approach is to monitor the change in data and model entities with respect to a baseline. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon SageMaker Model Monitor can help to nearly continuously monitor the quality of ML models in production, which may play a role in postmarket vigilance by manufacturers of Software as a Medical Device (SaMD).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SageMaker Model Monitor provides the ability to monitor drift in data quality, model quality, model bias, and feature attribution. A drift in data quality arises when the statistical distribution of data in production drifts away from the distribution of data during model training. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This primarily occurs when there is a bias in selecting the training dataset; for example, where the sample of data that the model is trained on has a different distribution than that during model inference, or in non-stationary environments when the data distribution varies over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A drift in model quality arises when there is a significant deviation between the predictions that the model makes and the actual ground truth labels. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SageMaker Model Monitor provides the ability to create a baseline to analyze the input entities, define metrics to track drift, and nearly continuously monitor both the data and model in production based on these metrics. Additionally, Model Monitor is integrated with SageMaker Clarify to identify bias in ML models.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F853m30byzjt7ba4efv7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F853m30byzjt7ba4efv7x.png" alt="Image description" width="800" height="526"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Model deployment and monitoring for drift&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For model monitoring, perform the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After the model has been deployed to a SageMaker endpoint, enable the endpoint to capture data from incoming requests to a trained ML model and the resulting model predictions.&lt;/li&gt;
&lt;li&gt;Create a baseline from the dataset that was used to train the model. The baseline computes metrics and suggests constraints for these metrics. Real-time predictions from your model are compared to the constraints, and are reported as violations if they are outside the constrained values.&lt;/li&gt;
&lt;li&gt;Create a monitoring schedule specifying what data to collect, how often to collect it, how to analyze it, and which reports to produce.&lt;/li&gt;
&lt;li&gt;Inspect the reports, which compare the latest data with the baseline, and watch for any violations reported and for metrics and notifications from Amazon CloudWatch.&lt;/li&gt;
&lt;/ol&gt;
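&lt;p&gt;The baseline comparison in the steps above can be illustrated with a small, self-contained sketch. This is not the SageMaker Model Monitor API; the threshold is a hypothetical constraint and the hand-rolled Kolmogorov-Smirnov statistic simply shows how production data can be compared against a training-time baseline:&lt;/p&gt;

```python
# Illustrative drift check (NOT the SageMaker Model Monitor API):
# compare production feature values against a training-time baseline
# using a two-sample Kolmogorov-Smirnov statistic.
import bisect

def ks_statistic(baseline, production):
    """Maximum distance between the two empirical CDFs."""
    baseline, production = sorted(baseline), sorted(production)
    n, m = len(baseline), len(production)
    max_d = 0.0
    for v in sorted(set(baseline + production)):
        cdf_b = bisect.bisect_right(baseline, v) / n
        cdf_p = bisect.bisect_right(production, v) / m
        max_d = max(max_d, abs(cdf_b - cdf_p))
    return max_d

# Baseline captured at training time; two batches of production data.
baseline = [x / 100 for x in range(100)]          # roughly uniform on [0, 1)
stable   = [x / 100 + 0.005 for x in range(100)]  # essentially unchanged
drifted  = [x / 100 + 0.4 for x in range(100)]    # distribution has shifted

THRESHOLD = 0.2  # hypothetical constraint suggested by a baseline job
for name, sample in [("stable", stable), ("drifted", drifted)]:
    d = ks_statistic(baseline, sample)
    print(name, round(d, 3), "VIOLATION" if d > THRESHOLD else "ok")
```

&lt;p&gt;In practice, Model Monitor computes statistics and suggests constraints for you as part of the baseline job; only the drifted batch above would be reported as a violation.&lt;/p&gt;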

&lt;ul&gt;
&lt;li&gt;The drift in data or model performance can occur due to a variety of reasons, and it is essential for the technical, product, and business stakeholders to diagnose the root cause that led to the drift. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Early and proactive detection of drift enables you to take corrective actions such as model retraining, auditing upstream data preparation workflows, and resolving any data quality issues.&lt;br&gt;
If all else remains the same, then the decision to retrain the model is based on considerations such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The target performance metrics, reevaluated based on the use case&lt;/li&gt;
&lt;li&gt;A tradeoff between the improvement in model performance vs. the time and cost to retrain the model&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The availability of ground truth labeled data to support the desired retraining frequency&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the model is retrained, you can evaluate the candidate model performance based on a champion/challenger setup, or with A/B testing, prior to redeployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
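&lt;p&gt;As a rough illustration of the champion/challenger setup mentioned above, both models can score the same held-out labeled traffic, with the retrained candidate promoted only if it clearly wins. The model rules, holdout data, and promotion margin here are hypothetical placeholders, not a production framework:&lt;/p&gt;

```python
# Champion/challenger sketch: both models score the same labeled holdout
# traffic, and the challenger is promoted only if it clearly wins.
# The model rules, holdout data, and margin are hypothetical placeholders.

def champion(x):    # current production model
    return 1 if x > 0.5 else 0

def challenger(x):  # retrained candidate
    return 1 if x > 0.4 else 0

def accuracy(model, data):
    return sum(1 for x, label in data if model(x) == label) / len(data)

# Held-out labeled samples: (feature, ground-truth label).
holdout = [(0.1, 0), (0.3, 0), (0.45, 1), (0.6, 1), (0.8, 1), (0.2, 0)]

MARGIN = 0.01  # require a meaningful improvement before promoting
champ_acc = accuracy(champion, holdout)
chall_acc = accuracy(challenger, holdout)
promote = chall_acc > champ_acc + MARGIN
print("champion", champ_acc, "challenger", chall_acc, "promote:", promote)
```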



&lt;p&gt;Hope this guide helps you understand the best practices for the ML lifecycle stages.&lt;/p&gt;

&lt;p&gt;Let me know your thoughts in the comment section 👇&lt;br&gt;
And if you haven't yet, make sure to follow me on the handles below:&lt;/p&gt;

&lt;p&gt;👋 &lt;strong&gt;connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🤓 &lt;strong&gt;connect with me on &lt;a href="https://twitter.com/adi_12_modi" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🐱‍💻 &lt;strong&gt;follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
✍️ &lt;strong&gt;Do Checkout &lt;a href="https://aditmodi.hashnode.dev" rel="noopener noreferrer"&gt;my blogs&lt;/a&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Like, share and follow me 🚀 for more content.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__497987"&gt;
    &lt;a href="/aditmodi" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F497987%2F96c3f130-72d9-449a-8687-242133f019c2.jpg" alt="aditmodi image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/aditmodi"&gt;Adit Modi&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/aditmodi"&gt;Senior Cloud Engineer | AWS Community Builder | 12x AWS Certified | 3x Azure Certified | Author of Cloud Tech , DailyDevOps &amp;amp; BigDataJournal | HashiCorp Ambassador | Lift "Cloud Captain"&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;👨‍💻 &lt;strong&gt;Join our &lt;a href="https://join.slack.com/t/cloudtechcommunity/shared_invite/zt-wptacj2f-Eu4PPvq6WEkBTHg7PR2ncA" rel="noopener noreferrer"&gt;Cloud Tech Slack Community&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
👋 &lt;strong&gt;Follow us on &lt;a href="https://www.linkedin.com/company/cloud-techs" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt; / &lt;a href="https://twitter.com/AboutCloudTech" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for latest news&lt;/strong&gt; &lt;br&gt;
💻 &lt;strong&gt;Take a Look at our &lt;a href="https://github.com/My-Machine-Learning-Projects-2020" rel="noopener noreferrer"&gt;Github Repos&lt;/a&gt; to know more about our projects&lt;/strong&gt; &lt;br&gt;
✍️ &lt;strong&gt;Our &lt;a href="https://cloudtech.hashnode.dev" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/ML-best-practices-health-science.pdf?did=wp_card&amp;amp;trk=wp_card" rel="noopener noreferrer"&gt;Reference Notes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Machine Learning Lifecycle Process</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Mon, 31 Jan 2022 12:07:52 +0000</pubDate>
      <link>https://forem.com/cloudtech/machine-learning-lifecycle-process-547p</link>
      <guid>https://forem.com/cloudtech/machine-learning-lifecycle-process-547p</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Building and operating a typical ML workload is an iterative process that consists of multiple phases. We identify these phases loosely based on the open standard process model, the Cross-Industry Standard Process for Data Mining (CRISP-DM), as a general guideline. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CRISP-DM is used as a baseline because it’s a proven tool in the industry and is application neutral, which makes it an easy-to-apply methodology that is applicable to a wide variety of ML pipelines and workloads. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The end-to-end ML process includes the following phases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business goal identification &lt;/li&gt;
&lt;li&gt;ML problem framing&lt;/li&gt;
&lt;li&gt;Data collection&lt;/li&gt;
&lt;li&gt;Data integration and preparation&lt;/li&gt;
&lt;li&gt;Feature engineering&lt;/li&gt;
&lt;li&gt;Model training&lt;/li&gt;
&lt;li&gt;Model validation&lt;/li&gt;
&lt;li&gt;Business evaluation&lt;/li&gt;
&lt;li&gt;Production deployment (model deployment and model inference)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;This article presents a high-level overview of the various phases of an end-to-end ML lifecycle, which helps frame our discussion around security, compliance, and operationalization of ML best practices which will be useful in our later blog posts.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  The machine learning lifecycle process
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;The following figure describes the ML lifecycle process, along with the subject matter experts and business stakeholders involved through different stages of the process. It is also important to note that the ML lifecycle is an iterative process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyapti32u87scnstv7osv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyapti32u87scnstv7osv.png" alt="Image description" width="800" height="707"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The machine learning lifecycle process&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Phase 1
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Phase 1 is to define the business problem and goals. Domain experts and business owners are most involved in this phase, determining success metrics and KPIs; identifying compliance and regulatory requirements also falls under this phase. Data scientists typically work with the SMEs to frame the business problem in a way that allows them to develop a viable ML solution.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Phase 2
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Phase 2 involves gathering and preparing all relevant data from various data sources. This role is often performed by data engineers with expertise in big data tools for data extraction, transformation, and loading (ETL). It is important to ensure that the data is versioned and that its lineage is tracked for auditing and compliance purposes. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the raw datasets are available, data scientists perform data exploration, determine input features and target variables, analyze outliers, and identify any data transformations that may be needed. It is also important to ensure that any transformations applied to training data can also be applied in production at inference time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
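&lt;p&gt;To illustrate the last point, here is a minimal sketch of fitting a transformation on training data, persisting its parameters, and reapplying the exact same parameters at inference time. The scaler and JSON artifact are illustrative, not a specific AWS feature:&lt;/p&gt;

```python
# Sketch: fit a transformation on training data, persist its parameters,
# and reapply the exact same parameters at inference time. The scaler and
# JSON artifact are illustrative, not a specific AWS feature.
import json

class StandardScaler:
    def fit(self, values):
        self.mean = sum(values) / len(values)
        variance = sum((v - self.mean) ** 2 for v in values) / len(values)
        self.std = variance ** 0.5 or 1.0  # guard against zero variance
        return self

    def transform(self, values):
        return [(v - self.mean) / self.std for v in values]

    def to_json(self):
        return json.dumps({"mean": self.mean, "std": self.std})

    @classmethod
    def from_json(cls, blob):
        scaler, params = cls(), json.loads(blob)
        scaler.mean, scaler.std = params["mean"], params["std"]
        return scaler

# Training time: fit on the training set and persist the parameters.
train = [10.0, 12.0, 14.0, 16.0, 18.0]
artifact = StandardScaler().fit(train).to_json()

# Inference time: reload the artifact and apply the same transformation.
scaler = StandardScaler.from_json(artifact)
print(scaler.transform([14.0]))  # the training mean scales to 0.0
```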
&lt;h2&gt;
  
  
  Phase 3
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Next is the model development and model evaluation phase. Data scientists determine the framework they want to use, define out-of-sample and out-of-time datasets, and experiment with various ML algorithms and hyperparameters and, in some cases, add more training data.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Phase 4
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Next, you take the trained models and run them on out-of-time and out-of-sample datasets, and pick the model or models whose results come closest to the metrics determined in Phase 1. Model artifacts and any corresponding code must be properly versioned and stored in a centralized code repository or an artifact management system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note that this stage of the process is experimental, and data scientists may go back to the data collection or feature engineering stage if the model performance is consistently poor. More details on data and ML artifact lineage are available in the Traceability section of this document.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data scientists are also required to provide reasons or explain feature/model influence on predictions. Model interpretability is discussed in later sections.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
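&lt;p&gt;A minimal sketch of this selection step, where lambda rules stand in for trained models, a hand-built holdout stands in for the out-of-time dataset, and accuracy stands in for the Phase 1 business metric:&lt;/p&gt;

```python
# Sketch of the selection step: evaluate candidates on an out-of-time
# holdout (data from after the training window) and keep the best one.
# The lambda "models" and the accuracy metric are placeholders for
# trained models and the Phase 1 business metric.

candidates = {
    "model_a": lambda x: 1 if x > 3 else 0,
    "model_b": lambda x: 1 if x > 5 else 0,
}

# Out-of-time holdout: (feature, ground-truth label) pairs.
oot_holdout = [(2, 0), (4, 1), (6, 1), (1, 0), (5, 1), (7, 1)]

def accuracy(model, data):
    return sum(1 for x, y in data if model(x) == y) / len(data)

scores = {name: accuracy(m, oot_holdout) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "selected:", best)
```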
&lt;h2&gt;
  
  
  Phase 5
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The next phase is to deploy the models into production. This is often the most impactful and difficult step because of the gap between technologies and skillsets used to build and deploy models in production. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A large part of making this successful requires intense collaboration among infrastructure professionals such as DevOps engineers, data scientists, data engineers, domain experts, end users, and business owners during the decision making process. There should be standardized metrics, and all decision makers should be able to interpret them directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In many organizations, the lifecycle of an ML model ends with the deployment phase. However, some form of shadow validation is needed, where models are deployed but not integrated into the production workflow, to capture differences between training and live data. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This ensures that the model continues to perform as expected when receiving data from production systems. Once this validation proves successful, the model's predictions can be used in production workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;However, for ML models to be effective in the long run, it is necessary to continuously monitor the model (in real time, if possible) to determine how well it is performing, as the accuracy of models can degrade over time. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the performance of a model degrades below a certain threshold, you may need to retrain and redeploy your model. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
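&lt;p&gt;The threshold-based retraining decision above can be sketched as follows; the threshold and window size are hypothetical values you would agree on with stakeholders:&lt;/p&gt;

```python
# Sketch of a threshold-based retraining trigger: track rolling accuracy
# against ground truth and flag the model once it degrades. The threshold
# and window size are hypothetical values agreed with stakeholders.
from collections import deque

class PerformanceMonitor:
    def __init__(self, threshold=0.8, window=5):
        self.threshold = threshold
        self.recent = deque(maxlen=window)

    def record(self, prediction, ground_truth):
        self.recent.append(1 if prediction == ground_truth else 0)

    def needs_retraining(self):
        if len(self.recent) != self.recent.maxlen:
            return False  # not enough labeled production data yet
        rolling_accuracy = sum(self.recent) / len(self.recent)
        return not rolling_accuracy >= self.threshold

monitor = PerformanceMonitor(threshold=0.8, window=5)
for prediction, truth in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    monitor.record(prediction, truth)
print("retrain needed:", monitor.needs_retraining())
```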



&lt;p&gt;Hope this guide gives you a high-level overview of the different phases of the machine learning lifecycle.&lt;/p&gt;

&lt;p&gt;Let me know your thoughts in the comment section 👇&lt;br&gt;
And if you haven't yet, make sure to follow me on the handles below:&lt;/p&gt;

&lt;p&gt;👋 &lt;strong&gt;connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🤓 &lt;strong&gt;connect with me on &lt;a href="https://twitter.com/adi_12_modi" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🐱‍💻 &lt;strong&gt;follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
✍️ &lt;strong&gt;Do Checkout &lt;a href="https://aditmodi.hashnode.dev" rel="noopener noreferrer"&gt;my blogs&lt;/a&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Like, share and follow me 🚀 for more content.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__497987"&gt;
    &lt;a href="/aditmodi" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F497987%2F96c3f130-72d9-449a-8687-242133f019c2.jpg" alt="aditmodi image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/aditmodi"&gt;Adit Modi&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/aditmodi"&gt;Senior Cloud Engineer | AWS Community Builder | 12x AWS Certified | 3x Azure Certified | Author of Cloud Tech , DailyDevOps &amp;amp; BigDataJournal | HashiCorp Ambassador | Lift "Cloud Captain"&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;👨‍💻 &lt;strong&gt;Join our &lt;a href="https://join.slack.com/t/cloudtechcommunity/shared_invite/zt-wptacj2f-Eu4PPvq6WEkBTHg7PR2ncA" rel="noopener noreferrer"&gt;Cloud Tech Slack Community&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
👋 &lt;strong&gt;Follow us on &lt;a href="https://www.linkedin.com/company/cloud-techs" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt; / &lt;a href="https://twitter.com/AboutCloudTech" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for latest news&lt;/strong&gt; &lt;br&gt;
💻 &lt;strong&gt;Take a Look at our &lt;a href="https://github.com/My-Machine-Learning-Projects-2020" rel="noopener noreferrer"&gt;Github Repos&lt;/a&gt; to know more about our projects&lt;/strong&gt; &lt;br&gt;
✍️ &lt;strong&gt;Our &lt;a href="https://cloudtech.hashnode.dev" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/modern-data-architecture.html" rel="noopener noreferrer"&gt;Reference Guide&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>bigdata</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Welcome to DEV, CLOUD TECH!</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Wed, 22 Dec 2021 10:23:59 +0000</pubDate>
      <link>https://forem.com/cloudtech/welcome-to-dev-cloud-tech-1fja</link>
      <guid>https://forem.com/cloudtech/welcome-to-dev-cloud-tech-1fja</guid>
      <description>&lt;h2&gt;
  
  
  Welcome to our Dev.to blog!
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Everything about Cloud &amp;amp; Tech&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At &lt;strong&gt;CloudTech&lt;/strong&gt;, our goal is to build a community of self-reliant cloud architects, builders, and developers who are eager to help people get started with the cloud, while building applications that impact the people around us. &lt;/p&gt;

&lt;p&gt;We share the latest news and articles related to different cloud providers such as AWS, Azure, and GCP, and much more.&lt;/p&gt;

&lt;p&gt;We have decided to start this blog to help members in contributing to the community. Every member has a story to tell and by starting this blog, we aim to encourage each member to share their story and help inspire the next generation of cloud professionals and enthusiasts.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I contribute?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  [1] Join our Cloud Tech Community
&lt;/h3&gt;

&lt;p&gt;To contribute, you must be part of our Cloud Tech Community.&lt;br&gt;
Join our Community &lt;a href="https://join.slack.com/t/cloudtechcommunity/shared_invite/zt-wptacj2f-Eu4PPvq6WEkBTHg7PR2ncA" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  [2] Create an account here in Dev.to (if you haven't)
&lt;/h3&gt;

&lt;p&gt;Dev.to is an awesome platform where developers are able to share their ideas. Sign up here for an account.&lt;/p&gt;

&lt;h3&gt;
  
  
  [3] Fill up the form
&lt;/h3&gt;

&lt;p&gt;Fill up this &lt;a href="https://airtable.com/shrqKuPOMh1Y0Kjg8" rel="noopener noreferrer"&gt;form&lt;/a&gt; so we can properly contact you and add you to the group of Blog Contributors. We will also share with you the unique token that you will use to be able to post content in the Dev.to website.&lt;/p&gt;

&lt;h3&gt;
  
  
  [4] Let's get writing!
&lt;/h3&gt;

&lt;p&gt;Once you are part of the community, you will be able to post to the Cloud Tech. We will strictly enforce our house rules below to ensure the community remains a safe space where members can share their ideas and start a discourse.&lt;/p&gt;

&lt;p&gt;TLDR: Posts should be related to the cloud. They don't have to be technical in nature, but they do need to connect to the topics mentioned above. There should be no mention of competing products, and posts should not threaten or harass others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before you go, here are some house rules
&lt;/h2&gt;

&lt;p&gt;Cloud Tech is a safe space where people of any gender, nationality, association, or sexual orientation can express their ideas and have a platform to share them with the world. But every community needs a set of rules to make sure everyone can take part. Here are our house rules.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each post should comply with the code of conduct of our platform, dev.to. You can view the document here: &lt;a href="https://dev.to/code-of-conduct"&gt;https://dev.to/code-of-conduct&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;On top of the platform code of conduct, our core team will also enforce:

&lt;ul&gt;
&lt;li&gt;Each post should be related to the cloud.&lt;/li&gt;
&lt;li&gt;The community is a safe place. Any post that threatens, harasses, or causes harm to any member of the community or the dev.to user base in general will be taken down.&lt;/li&gt;
&lt;li&gt;Any post that blatantly sells products without regard to the community's objectives of sharing knowledge and promoting discourse will be taken down.&lt;/li&gt;
&lt;li&gt;We have no tolerance for plagiarized content. It will be taken down.&lt;/li&gt;
&lt;li&gt;The community is not a platform for users to reflect or assert their political and social views. Therefore, posts with political or social commentary will be taken down.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Let's get bloggin' 🥂
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ssgjc01qbpgyymyb3ag.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ssgjc01qbpgyymyb3ag.jpg" alt="Alt Text" width="400" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>career</category>
      <category>codenewbie</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Introducing Karpenter – An Open-Source High-Performance Kubernetes Cluster Autoscaler</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Wed, 08 Dec 2021 08:25:29 +0000</pubDate>
      <link>https://forem.com/cloudtech/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler-7g5</link>
      <guid>https://forem.com/cloudtech/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler-7g5</guid>
      <description>&lt;p&gt;At re:Invent 2021, AWS announced the &lt;strong&gt;v0.5.0 release of Karpenter&lt;/strong&gt;, marking its open-source, Kubernetes node provisioning project as production ready. With Karpenter, Kubernetes users can now dynamically provision underlying compute nodes based on pod specifications more efficiently than the existing Kubernetes &lt;strong&gt;cluster-autoscaler&lt;/strong&gt; project.&lt;/p&gt;

&lt;p&gt;Karpenter is an open-source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. It helps improve your application availability and cluster efficiency by rapidly launching right-sized compute resources in response to changing application load. Karpenter also provides &lt;strong&gt;just-in-time compute resources&lt;/strong&gt; to meet your application’s needs and will soon automatically optimize a cluster’s compute resource footprint to reduce costs and improve performance.&lt;/p&gt;

&lt;p&gt;Before Karpenter, Kubernetes users needed to &lt;strong&gt;dynamically adjust&lt;/strong&gt; the compute capacity of their clusters to support applications using Amazon EC2 Auto Scaling groups and the Kubernetes Cluster Autoscaler. Nearly half of Kubernetes customers on AWS report that configuring cluster auto scaling using the Kubernetes Cluster Autoscaler is challenging and restrictive.&lt;/p&gt;

&lt;p&gt;According to AWS, this Kubernetes-native cluster autoscaler is now production-ready.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F976reljw8imknistt2r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F976reljw8imknistt2r7.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So how does Karpenter work, and how is it different than cluster autoscaler?&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Karpenter
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;If you are familiar with GKE Autopilot’s dynamic node provisioning process, you can view Karpenter as an open-source version of that tool, designed to work with any Kubernetes cluster (NOTE: currently AWS is the only officially supported cloud provider). &lt;/li&gt;
&lt;li&gt;&lt;p&gt;Similar to GKE Autopilot, Karpenter observes the pod specifications of unschedulable pods, calculates the aggregate resource requests, and sends a request to the underlying compute service (e.g. Amazon EC2) with capacity needed to run all the pods. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Underneath the hood, Karpenter defines a Custom Resource called Provisioner to specify the node provisioning configuration including instance size/type, topology (e.g. zone), architecture (e.g. arm64, amd64), and lifecycle type (e.g. spot, on-demand, preemptible).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
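&lt;p&gt;For illustration, a Provisioner manifest looked roughly like the following at the time of the v0.5 release; the API version and field names may differ in later releases, so check the Karpenter documentation for the current schema:&lt;/p&gt;

```yaml
# Illustrative Provisioner for Karpenter v0.5 (API group and fields may
# differ in later releases; consult the Karpenter documentation).
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type        # lifecycle type
      operator: In
      values: ["spot", "on-demand"]
    - key: kubernetes.io/arch                # architecture
      operator: In
      values: ["amd64", "arm64"]
  ttlSecondsAfterEmpty: 30        # deprovision nodes once they are empty
  ttlSecondsUntilExpired: 2592000 # recycle nodes after 30 days
```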

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbn48304spqk31n2fhae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbn48304spqk31n2fhae.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On the flip side, Karpenter can also deprovision nodes when they are no longer needed. This is determined either by the node expiry configuration (ttlSecondsUntilExpired) or by the termination of the last workload running on a Karpenter-provisioned node. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Either of these two events triggers finalization, which cordons the node, drains the pods, terminates the underlying compute resource, and deletes the node object. This deprovisioning feature can also be used to keep nodes up to date with the latest AMI.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Karpenter vs. Cluster Autoscaler
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;At first glance, Karpenter works similarly to the existing Kubernetes cluster autoscaler project. After all, cluster autoscaler is also cloud-agnostic and can scale up or down based on pod resource requests. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Upon a closer look, however, Karpenter provides several advantages over cluster autoscaler:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes native scaling:&lt;/strong&gt; Cluster autoscaler for AWS utilizes EC2 Auto Scaling groups to trigger scaling events. Since ASGs were designed before Kubernetes, this integration is clunky and slow. For example, managed node group users still can’t configure it to scale nodegroups to 0, making batch workload types more expensive to run on EKS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No need to pre-provision node groups:&lt;/strong&gt; Cluster autoscaler can only provision nodes based on specifications provided by node groups, which require worker groups to have specific tags and work best with similar instance types. This meant that if you wanted to run performance tests, you needed to predefine a node group with beefier EC2 machine types for cluster autoscaler to trigger scaling events. With Karpenter, you can utilize all of AWS instance types on demand. Since Karpenter manages each instance directly without node groups, it is also much faster to request a new compute instance when capacity is unavailable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster scheduling:&lt;/strong&gt; With cluster autoscaler, pods rely on kube-scheduler to be assigned to new nodes once the new resources become available. Since Karpenter manages nodes directly, it can bind pods to new nodes immediately, without having to wait for the scheduler.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;With Karpenter, we can offload node provisioning, autoscaling, and upgrades and focus on running our applications. Karpenter works with all kinds of Kubernetes applications, but it performs particularly well for use cases that require rapidly provisioning and deprovisioning large numbers of diverse compute resources. Examples include batch jobs that train machine learning models, run simulations, or perform complex financial calculations.&lt;/p&gt;




&lt;p&gt;Let me know your thoughts in the comment section about the new Karpenter service 👇&lt;br&gt;
And if you haven't yet, make sure to follow me on the handles below:&lt;/p&gt;

&lt;p&gt;👋 &lt;strong&gt;connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🤓 &lt;strong&gt;connect with me on &lt;a href="https://twitter.com/adi_12_modi" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🐱‍💻 &lt;strong&gt;follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
✍️ &lt;strong&gt;Do Checkout &lt;a href="https://aditmodi.hashnode.dev" rel="noopener noreferrer"&gt;my blogs&lt;/a&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Like, share and follow me 🚀 for more content.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__497987"&gt;
    &lt;a href="/aditmodi" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F497987%2F96c3f130-72d9-449a-8687-242133f019c2.jpg" alt="aditmodi image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/aditmodi"&gt;Adit Modi&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/aditmodi"&gt;Senior Cloud Engineer | AWS Community Builder | 12x AWS Certified | 3x Azure Certified | Author of Cloud Tech , DailyDevOps &amp;amp; BigDataJournal | HashiCorp Ambassador | Lift "Cloud Captain"&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;👨‍💻 &lt;strong&gt;Join our &lt;a href="https://join.slack.com/t/cloudtechcommunity/shared_invite/zt-wptacj2f-Eu4PPvq6WEkBTHg7PR2ncA" rel="noopener noreferrer"&gt;Cloud Tech Slack Community&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
👋 &lt;strong&gt;Follow us on &lt;a href="https://www.linkedin.com/company/cloud-techs" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt; / &lt;a href="https://twitter.com/AboutCloudTech" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for latest news&lt;/strong&gt; &lt;br&gt;
💻 &lt;strong&gt;Take a Look at our &lt;a href="https://github.com/My-Machine-Learning-Projects-2020" rel="noopener noreferrer"&gt;Github Repos&lt;/a&gt; to know more about our projects&lt;/strong&gt; &lt;br&gt;
✍️ &lt;strong&gt;Our &lt;a href="https://cloudtech.hashnode.dev" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Amazon launches AWS RoboRunner to support robotics apps &amp; much more</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Mon, 29 Nov 2021 09:10:48 +0000</pubDate>
      <link>https://forem.com/cloudtech/amazon-launches-aws-roborunner-to-support-robotics-apps-much-more-50li</link>
      <guid>https://forem.com/cloudtech/amazon-launches-aws-roborunner-to-support-robotics-apps-much-more-50li</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;At a keynote during its &lt;strong&gt;Amazon Web Services (AWS) re:Invent 2021 conference today&lt;/strong&gt;, Amazon launched &lt;strong&gt;AWS IoT RoboRunner&lt;/strong&gt;, a new robotics service designed to make it easier for enterprises to build and deploy apps that enable fleets of robots to work together. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alongside &lt;strong&gt;IoT RoboRunner&lt;/strong&gt;, Amazon announced the &lt;strong&gt;AWS Robotics Startup Accelerator&lt;/strong&gt;, an incubator program in collaboration with nonprofit &lt;strong&gt;MassRobotics&lt;/strong&gt; to tackle challenges in automation, robotics, and industrial internet of things (IoT) technologies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As the pandemic drives digital transformation, enterprises are accelerating the adoption of robotics and, more broadly, automation. According to a recent Automation World report, most companies that adopted robotics over the past year did so to reduce labor costs, increase capacity, and overcome the shortage of available workers. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;According to the same survey, &lt;strong&gt;44.9%&lt;/strong&gt; of companies now consider robots in assembly and manufacturing facilities to be an integral part of their day-to-day operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon, itself a large investor in robotics, wasn’t shy about its intention to win much of the robotics software market, which is expected to be worth more than &lt;strong&gt;$7.52 billion by 2022&lt;/strong&gt;. In 2018, the company announced AWS RoboMaker, a product that helps developers deploy robot applications with AI and machine learning capabilities. Earlier this year, Amazon rolled out SageMaker Reinforcement Learning Kubeflow Components, a toolkit that supports RoboMaker services for tuning robotics workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  IoT RoboRunner
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwrww3zfl2fupke1goq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwrww3zfl2fupke1goq1.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Currently in preview, IoT RoboRunner is built on technology already used in Amazon’s warehouses for robotics management. It allows AWS customers to connect robots to their existing automation software and combine data from each robot type to coordinate work across the entire operation. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fleet and robot data, such as facility, location, and robot task data, are standardized in a central repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The goal of IoT RoboRunner is to simplify the process of building robot fleet management apps. As companies become more dependent on robotics to automate their operations, they adopt different types of robots, which makes it harder to organize the robots efficiently. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each robot vendor and work management system has its own, often incompatible control software, data formats, and data repositories. Also, as new robots are added to the fleet, they will need to be programmed to connect the control software to the workflow management system and program the logic of the management app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developers can use &lt;strong&gt;IoT RoboRunner&lt;/strong&gt; to access the data needed to build robot management apps and leverage pre-built software libraries to create apps for tasks such as work assignments. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In addition to this, you can use IoT RoboRunner to deliver metrics and KPIs to the management dashboard via the API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With AWS IoT RoboRunner, robot developers no longer have to manage their robots in silos, and centralized management can more effectively automate tasks across the facility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS IoT RoboRunner lets you connect your robots and work management systems, thereby enabling you to orchestrate work across your operation through a single system view.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  AWS Robotics Startup Accelerator
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsc4j1x1ygjg1e0w9t0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsc4j1x1ygjg1e0w9t0y.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon also announced the &lt;strong&gt;Robotics Startup Accelerator&lt;/strong&gt;. The company says it will foster robotics startups by providing resources to develop, prototype, test, and commercialize products and services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The AWS Robotics Startup Accelerator delivered by MassRobotics aims to help robotics startups adopt and use AWS to boost their robotics development, as well as get hands-on support from industry and AWS experts to rapidly scale their business.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As the trend toward automation continues, robotics startups, especially in industrial robotics, are attracting the attention of venture capitalists. According to data from PitchBook, venture firms invested $6.3 billion in robotics companies from March 2020 to March 2021, an increase of about 50% over the period from March 2019 to March 2020. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Over the longer term, investment in robotics has more than quintupled over the past five years, rising from $1 billion in 2015 to $5.4 billion in 2020.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The accelerator is a &lt;strong&gt;four-week technical, business, and mentorship opportunity&lt;/strong&gt; open to robotics hardware and software startups from around the globe. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Startups accepted into the four-week program will consult with AWS and MassRobotics industry experts on business models and with AWS robotics experts for help overcoming technological blockers. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Program benefits for startups include hands-on training about AWS solutions for robotics and up to $10,000 in promotional credits for use of AWS IoT, Robotics, and ML services to help guide them forward. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Participants will gain additional knowledge through mentoring from &lt;strong&gt;robotics domain experts&lt;/strong&gt; and &lt;strong&gt;technical subject matter experts&lt;/strong&gt;. To get ready for life after the accelerator, startups will also get business development and investment guidance from MassRobotics, and co-marketing opportunities with AWS via blogs and case studies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Startups interested in applying to be part of the program can learn more &lt;a href="https://awsroboticsstartupaccelerator.splashthat.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Applications close on Sunday, January 16, 2022.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Let me know your thoughts in the comment section about the new AWS services and the accelerator program 👇&lt;br&gt;
And if you haven't yet, make sure to follow me on the handles below:&lt;/p&gt;

&lt;p&gt;👋 &lt;strong&gt;connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🤓 &lt;strong&gt;connect with me on &lt;a href="https://twitter.com/adi_12_modi" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
🐱‍💻 &lt;strong&gt;follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
✍️ &lt;strong&gt;Do Checkout &lt;a href="https://aditmodi.hashnode.dev" rel="noopener noreferrer"&gt;my blogs&lt;/a&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Like, share and follow me 🚀 for more content.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__497987"&gt;
    &lt;a href="/aditmodi" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F497987%2F96c3f130-72d9-449a-8687-242133f019c2.jpg" alt="aditmodi image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/aditmodi"&gt;Adit Modi&lt;/a&gt;
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/aditmodi"&gt;Senior Cloud Engineer | AWS Community Builder | 12x AWS Certified | 3x Azure Certified | Author of Cloud Tech , DailyDevOps &amp;amp; BigDataJournal | HashiCorp Ambassador | Lift "Cloud Captain"&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;






</description>
      <category>aws</category>
      <category>beginners</category>
      <category>news</category>
      <category>discuss</category>
    </item>
    <item>
      <title>IaaS vs. PaaS vs. SaaS</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Mon, 21 Jun 2021 03:15:09 +0000</pubDate>
      <link>https://forem.com/cloudtech/iaas-vs-paas-vs-saas-41d2</link>
      <guid>https://forem.com/cloudtech/iaas-vs-paas-vs-saas-41d2</guid>
      <description>&lt;p&gt;An increasing number of businesses are choosing cloud services. If you aren’t familiar with this topic, cloud computing is when hardware (servers, storage, etc.) and software are delivered over the internet.&lt;/p&gt;

&lt;p&gt;Compared to on-premises hardware and software, cloud-based solutions such as IaaS, PaaS, and SaaS offer several major benefits. Let’s briefly review these benefits to understand why cloud computing is so popular today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkfnnrt8lw0ijnf8hlk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkfnnrt8lw0ijnf8hlk1.png" alt="image" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My Background: Cloud Engineer | AWS Community Builder | AWS Educate Cloud Ambassador | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.&lt;/em&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Scalability.
&lt;/h2&gt;

&lt;p&gt;On-premises solutions are rather difficult to scale, as the type of hardware needed depends on your application’s demands. If your app experiences heavy traffic, you might need to significantly upgrade on-premises hardware. This problem doesn’t exist with a cloud service, which you can quickly scale up or down with a few clicks. Cloud services are a perfect solution for handling peak loads. With cloud-based services, businesses can use whatever computing resources they need.&lt;/p&gt;
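&lt;p&gt;As a rough sketch of the scale-up/scale-down behavior described above, the toy autoscaling rule below computes how many servers to run from the current load. All thresholds and capacities here are hypothetical; this is an illustration, not any cloud provider’s actual algorithm.&lt;/p&gt;

```python
def desired_capacity(current, load_per_server, target_load,
                     min_servers=1, max_servers=20):
    """Return how many servers a simple autoscaler would request.

    current         -- servers currently running
    load_per_server -- average requests/sec each server is handling
    target_load     -- requests/sec we want each server to handle
    """
    total_load = current * load_per_server
    # Round up so the fleet never runs above the target load per server.
    needed = -(-int(total_load) // int(target_load))
    # Clamp to the configured fleet size limits.
    return max(min_servers, min(max_servers, needed))

# Traffic spike: 4 servers at 900 req/s each, target 300 req/s -> 12 servers.
print(desired_capacity(4, 900, 300))  # 12
# Quiet period: 4 servers at 50 req/s each -> scale in to the minimum of 1.
print(desired_capacity(4, 50, 300))   # 1
```

With on-premises hardware, the "max_servers" line is fixed by what you bought; with a cloud service, it is just a parameter you raise with a few clicks.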

&lt;h2&gt;
  
  
  Cost-effectiveness.
&lt;/h2&gt;

&lt;p&gt;Cloud computing removes hardware expenses, as hardware is provided by a vendor. There’s no need to buy, install, configure, and maintain servers, databases, and other components of your runtime environment. Moreover, using cloud-based solutions, you pay only for what you use, so if you don’t need extra resources you can simply scale down and not pay for them.&lt;/p&gt;
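&lt;p&gt;The pay-for-what-you-use argument can be made concrete with some back-of-the-envelope arithmetic. All prices and workloads below are made-up illustrative numbers, not real vendor pricing.&lt;/p&gt;

```python
# Hypothetical monthly costs (USD) -- illustrative numbers only.
ONPREM_SERVER_COST = 400   # amortized hardware + maintenance per server/month
CLOUD_HOURLY_RATE = 0.10   # per server-hour, pay-as-you-go

def onprem_monthly(peak_servers):
    # On-premises must be provisioned for peak load all month long.
    return peak_servers * ONPREM_SERVER_COST

def cloud_monthly(busy_hours, peak_servers, quiet_servers, hours_in_month=730):
    # The cloud runs the full fleet only during busy hours, then scales down.
    quiet_hours = hours_in_month - busy_hours
    return CLOUD_HOURLY_RATE * (busy_hours * peak_servers +
                                quiet_hours * quiet_servers)

# 10 servers are needed only 100 hours a month, 2 servers otherwise:
print(onprem_monthly(10))                  # 4000
print(round(cloud_monthly(100, 10, 2), 2)) # 226.0
```

The gap comes entirely from not paying for idle peak capacity, which is exactly the "scale down and stop paying" point above.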

&lt;h2&gt;
  
  
  Immediate availability.
&lt;/h2&gt;

&lt;p&gt;Cloud solutions are available as soon as you’ve paid for them, so you can start using a cloud service right away. There’s no need to install and configure hardware.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance.
&lt;/h2&gt;

&lt;p&gt;Cloud companies equip their data centers with high-performance computing infrastructure that guarantees low network latency for your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security.
&lt;/h2&gt;

&lt;p&gt;Cloud infrastructure is kept in safe data centers to ensure a top level of security. Data is backed up and can easily be recovered. Moreover, cloud vendors ensure the security of your data by using networking firewalls, encryption, and sophisticated tools for detecting cybercrime and fraud.&lt;/p&gt;

&lt;p&gt;The advantages of cloud solutions are huge, so it stands to reason that the cloud services market is booming. According to a forecast by Gartner, the global public cloud services market is expected to reach almost $247 billion this year and grow to over $383 billion by 2021.&lt;/p&gt;

&lt;h1&gt;
  
  
  Global Market of Public Cloud Services
&lt;/h1&gt;

&lt;p&gt;Yet choosing the right cloud service can be rather challenging. Many people have no idea what SaaS, IaaS, and PaaS mean or which of these cloud solutions they need for their projects.&lt;/p&gt;

&lt;h1&gt;
  
  
  What do IaaS, PaaS, and SaaS mean?
&lt;/h1&gt;

&lt;p&gt;There are three major types of cloud services: IaaS, PaaS, and SaaS. You’ve probably seen these abbreviations on the websites of cloud providers. Before going into details, let’s compare IaaS, PaaS, and SaaS to transportation:&lt;/p&gt;

&lt;h1&gt;
  
  
  Cloud Services Compared to Means of Transport
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;On-premises IT infrastructure is like owning a car.&lt;/strong&gt; When you buy a car, you’re responsible for its maintenance, and upgrading means buying a new car.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IaaS is like leasing a car.&lt;/strong&gt; When you lease a car, you choose the car you want and drive it wherever you wish, but the car isn’t yours. Want an upgrade? Just lease a different car!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PaaS is like taking a taxi.&lt;/strong&gt; You don’t drive a taxi yourself, but simply tell the driver where you need to go and relax in the back seat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SaaS is like going by bus.&lt;/strong&gt; Buses have assigned routes, and you share the ride with other passengers.&lt;/p&gt;

&lt;p&gt;These analogies will help you better understand our more detailed explanations. Let’s give a definition to each of these terms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiykkvc6hbo6rfrkea686.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiykkvc6hbo6rfrkea686.png" alt="image" width="700" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Software as a Service (SaaS)
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;SaaS allows people to use cloud-based web applications.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In fact, email services such as Gmail and Hotmail are examples of cloud-based SaaS services. Other examples of SaaS services are office tools (Office 365 and Google Docs), customer relationship management software (Salesforce), event management software (Planning Pod), and so on.&lt;/p&gt;

&lt;p&gt;SaaS services are usually available with a pay-as-you-go (which means subscription) pricing model. All software and hardware are provided and managed by a vendor, so you don’t need to install or configure anything. The application is ready to go as soon as you get your login and password.&lt;/p&gt;

&lt;h1&gt;
  
  
  Software as a Service (SaaS)
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Managed by you:&lt;/strong&gt; nothing&lt;br&gt;
&lt;strong&gt;Managed by vendor:&lt;/strong&gt; hosted applications, development and management tools, operating system, servers and storage, networking resources, data center&lt;br&gt;
&lt;strong&gt;Perfect for:&lt;/strong&gt; end users&lt;/p&gt;

&lt;h1&gt;
  
  
  Platform as a Service (PaaS)
&lt;/h1&gt;

&lt;p&gt;PaaS refers to cloud platforms that provide runtime environments for developing, testing, and managing applications.&lt;/p&gt;

&lt;p&gt;Thanks to PaaS solutions, software developers can deploy applications, from simple to sophisticated, without needing all the related infrastructure (servers, databases, operating systems, development tools, etc). Examples of PaaS services are Heroku and Google App Engine.&lt;/p&gt;

&lt;p&gt;PaaS vendors supply a complete infrastructure for application development, while developers are in charge of the code.&lt;/p&gt;

&lt;p&gt;Just like SaaS, Platform as a Service solutions are available with a pay-as-you-go pricing model.&lt;/p&gt;

&lt;h1&gt;
  
  
  Platform as a Service (PaaS)
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Managed by you:&lt;/strong&gt; hosted applications&lt;br&gt;
&lt;strong&gt;Managed by vendor:&lt;/strong&gt; development and management tools, operating system, servers and storage, networking resources, data center&lt;br&gt;
&lt;strong&gt;Perfect for:&lt;/strong&gt; software developers&lt;/p&gt;

&lt;h1&gt;
  
  
  Infrastructure as a Service (IaaS)
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;IaaS is a cloud service that provides basic computing infrastructure: servers, storage, and networking resources. In other words, IaaS is a virtual data center.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IaaS services can be used for a variety of purposes, from hosting websites to analyzing big data. Clients can install and use whatever operating systems and tools they like on the infrastructure they get. Major IaaS providers include Amazon Web Services, Microsoft Azure, and Google Compute Engine.&lt;/p&gt;
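&lt;p&gt;With an IaaS provider, you typically describe the virtual machines you want through an API call. The sketch below only assembles the request parameters that a tool such as the AWS SDK would send when launching EC2 instances; the AMI ID is a placeholder and no actual API call is made.&lt;/p&gt;

```python
def build_launch_request(image_id, instance_type, count=1, tags=None):
    """Assemble an EC2-style instance launch request as a plain dict."""
    request = {
        "ImageId": image_id,          # which machine image to boot
        "InstanceType": instance_type, # how much CPU/RAM you are leasing
        "MinCount": count,
        "MaxCount": count,
    }
    if tags:
        # Tags let you label instances for billing and organization.
        request["TagSpecifications"] = [{
            "ResourceType": "instance",
            "Tags": [{"Key": k, "Value": v} for k, v in tags.items()],
        }]
    return request

# Placeholder AMI ID; a real one comes from your provider's catalog.
req = build_launch_request("ami-0123456789abcdef0", "t3.micro",
                           count=2, tags={"Project": "demo"})
print(req["MaxCount"])  # 2
```

The key point for IaaS is that everything above the hardware (operating system, runtime, application) is then yours to install and manage.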

&lt;p&gt;As with SaaS and PaaS, IaaS services are available on a pay-for-what-you-use model.&lt;/p&gt;

&lt;h1&gt;
  
  
  Infrastructure as a Service (IaaS)
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Managed by you:&lt;/strong&gt; hosted applications, development and management tools, operating system&lt;br&gt;
&lt;strong&gt;Managed by vendor:&lt;/strong&gt; servers and storage, networking resources, data center&lt;br&gt;
&lt;strong&gt;Perfect for:&lt;/strong&gt; IT administrators&lt;/p&gt;

&lt;p&gt;As you can see, each cloud service (IaaS, PaaS, and SaaS) is tailored to the business needs of its target audience. From the technical point of view, IaaS gives you the most control but requires extensive expertise to manage the computing infrastructure, while SaaS allows you to use cloud-based applications without needing to manage the underlying infrastructure. Cloud services, thus, can be depicted as a pyramid:&lt;/p&gt;

&lt;h1&gt;
  
  
  IaaS, PaaS, SaaS Hierarchy Diagram
&lt;/h1&gt;

&lt;p&gt;Now that &lt;strong&gt;you know what SaaS, PaaS, and IaaS mean&lt;/strong&gt;, let’s be more specific about when each should be used and &lt;strong&gt;what their advantages and disadvantages are.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  When and Why You Should Use SaaS
&lt;/h1&gt;

&lt;p&gt;We’ve already mentioned some examples of SaaS solutions, so you have a general understanding of when they’re used. Let’s provide some more details.&lt;/p&gt;

&lt;h1&gt;
  
  
  SaaS solutions can be used for:
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Personal purposes.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Millions of individuals all over the world use email services (Gmail, Hotmail, Yahoo), cloud storage services (Dropbox, Microsoft OneDrive), cloud-based file management services (Google Docs), and so on. People may not realize it, but all of these cloud services are actually SaaS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Companies of various sizes may use SaaS solutions such as corporate email services (Gmail is available for businesses, for example), collaboration tools (Trello), customer relationship management software (Salesforce, Zoho), event management software (EventPro, Cvent), and enterprise resource planning software (SAP S/4HANA Cloud ERP).&lt;/p&gt;

&lt;p&gt;SaaS services offer plenty of advantages to individuals and businesses:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access to applications from anywhere.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Unlike on-premises software, which can be accessed only from a computer (or a network) it’s installed on, SaaS solutions are cloud-based. Thus, you can access them from anywhere there’s internet access, be it your company’s office or a hotel room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can be used from any device.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloud-based SaaS services can be accessed from any computer; you only need to sign in. Many SaaS solutions have mobile apps, so they can be used from mobile devices as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automatic software updates.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You don’t need to bother updating your SaaS software, as updates are carried out by a cloud service vendor. If there are any bugs or technical troubles, the vendor will fix them while you focus on your work instead of on software maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Low cost.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Compared to on-premises software, SaaS services are rather affordable. There’s no need to pay for the whole IT infrastructure; you pay only for the service at the scale you need. If you need extra functionality, you can always update your subscription.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple adoption.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SaaS services are available out-of-the-box, so adopting them is a piece of cake. We’ve already mentioned what you need to do: just sign up. It’s as simple as that. There’s no need to install anything.&lt;/p&gt;

&lt;p&gt;Of course, SaaS solutions have certain disadvantages as well, so let’s mention a couple of them:&lt;/p&gt;

&lt;p&gt;You have no control over the hardware that handles your data.&lt;br&gt;
Only a vendor can manage the parameters of the software you’re using.&lt;/p&gt;

&lt;h1&gt;
  
  
  When and Why You Should Use PaaS
&lt;/h1&gt;

&lt;p&gt;PaaS solutions are used mostly by software developers. PaaS provides an environment for developing, testing, and managing applications. PaaS is therefore the perfect choice for software development companies.&lt;/p&gt;

&lt;p&gt;No wonder that software developers use &lt;strong&gt;PaaS services such as Heroku, Elastic Beanstalk (offered by Amazon Web Services), and Google App Engine.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PaaS provides a number of benefits to developers:&lt;/p&gt;

&lt;h1&gt;
  
  
  Reduced development time.
&lt;/h1&gt;

&lt;p&gt;PaaS services allow software developers to significantly reduce development time. Server-side components of the computing infrastructure (web servers, storage, networking resources, etc.) are provided by a vendor, so development teams don’t need to configure, maintain, or update them. Instead, developers can focus on delivering projects with top speed and quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support for different programming languages.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PaaS cloud services usually support multiple programming languages, giving developers an opportunity to deliver various projects, from startup MVPs to enterprise solutions, on the same platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy collaboration for remote and distributed teams.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PaaS gives enormous collaboration capabilities to remote and distributed teams. Outsourcing and freelancing are common today, and many software development teams are comprised of specialists who live in different parts of the world. PaaS services allow them to access the same software architecture from anywhere and at any time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High development capabilities without additional staff.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PaaS provides development companies with everything they need to create applications without the necessity of hiring additional staff. All hardware and middleware is provided, maintained, and upgraded by a PaaS vendor, which means businesses don’t need staff to configure servers and databases or deploy operating systems.&lt;/p&gt;

&lt;p&gt;Of course, PaaS cloud services have certain disadvantages:&lt;/p&gt;

&lt;p&gt;You have no control over the virtual machine that’s processing your data.&lt;br&gt;
PaaS solutions are less flexible than IaaS. For example, you can’t create and delete several virtual machines at a time.&lt;/p&gt;

&lt;h1&gt;
  
  
  When and Why You Should Use IaaS
&lt;/h1&gt;

&lt;p&gt;IaaS solutions can be used for multiple purposes. Unlike SaaS and PaaS, IaaS provides hardware infrastructure that you can use in a variety of ways. It’s like having a set of tools that you can use for constructing the item you need.&lt;/p&gt;

&lt;h1&gt;
  
  
  Here are several scenarios when you can use IaaS:
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Website or application hosting.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can run your website or application with the help of IaaS (for example, using Elastic Compute Cloud from Amazon Web Services).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Virtual data centers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IaaS is the best solution for building virtual data centers for large-scale enterprises that need an effective, scalable, and safe server environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data analysis.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Analyzing huge amounts of data requires incredible computing power, and IaaS is the most economical way to get it. Companies use Infrastructure as a Service for data mining and analysis.&lt;/p&gt;

&lt;p&gt;Infrastructure as a Service provides the following major advantages for businesses:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No expenses on hardware infrastructure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IaaS vendors provide and maintain hardware infrastructure: servers, storage, and networking resources. This means that businesses don’t need to invest in expensive hardware, which is a substantial cost savings as IT hardware infrastructure is rather pricey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perfect scalability.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Though all cloud-based solutions are scalable, this is particularly true of Infrastructure as a Service, as additional resources are available to your application in case of higher demand. Apps can also be scaled down if demand is low.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliability and security.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensuring the safety of your data is an IaaS vendor’s responsibility. Hardware infrastructure is usually kept in specially designed data centers, and a cloud provider guarantees the security of your data.&lt;/p&gt;

&lt;p&gt;Finally, let’s specify the disadvantages of IaaS cloud solutions:&lt;/p&gt;

&lt;p&gt;IaaS is more expensive than SaaS or PaaS, as you in fact lease hardware infrastructure.&lt;/p&gt;

&lt;p&gt;All issues related to the management of a virtual machine are your responsibility.&lt;/p&gt;

&lt;h1&gt;
  
  
  IaaS vs PaaS vs SaaS: Which Cloud Service Is Suitable for You?
&lt;/h1&gt;

&lt;p&gt;It’s time to pick which cloud-based service you need. In fact, the choice totally depends on your business goals, so first of all consider what your company needs. Here are some common business needs that can easily be met with the appropriate cloud service:&lt;/p&gt;

&lt;p&gt;If your business needs out-of-the-box software (CRM, email, collaboration tools, etc.), choose Software as a Service.&lt;br&gt;
If your company requires a platform for building software products, pick Platform as a Service.&lt;br&gt;
If your business needs a virtual machine, opt for Infrastructure as a Service.&lt;/p&gt;
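&lt;p&gt;The three rules of thumb above can be folded into a tiny helper function. The need categories are, of course, a simplification for illustration.&lt;/p&gt;

```python
def pick_service_model(need):
    """Map a (simplified) business need to a cloud service model."""
    mapping = {
        "ready-made software": "SaaS",   # CRM, email, collaboration tools
        "development platform": "PaaS",  # build and deploy your own apps
        "virtual machines": "IaaS",      # full control over the infrastructure
    }
    return mapping.get(need, "re-examine your requirements")

print(pick_service_model("development platform"))  # PaaS
```

In practice most organizations mix all three, so treat this as a starting point for the conversation rather than a final answer.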

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyrxb4579f2boiadyyoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyrxb4579f2boiadyyoy.png" alt="image" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope this guide helps you understand the different cloud services: IaaS, PaaS, and SaaS. Feel free to connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn.&lt;/a&gt;&lt;br&gt;
You can view my badges &lt;a href="https://www.youracclaim.com/users/adit-modi/badges" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;br&gt;
If you are interested in learning more about AWS services then follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github.&lt;/a&gt;&lt;br&gt;
If you liked this content then do clap and share it. Thank you.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Advantages of Adopting Cloud Computing Services</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Fri, 18 Jun 2021 06:35:30 +0000</pubDate>
      <link>https://forem.com/cloudtech/the-advantages-of-adopting-cloud-computing-services-2275</link>
      <guid>https://forem.com/cloudtech/the-advantages-of-adopting-cloud-computing-services-2275</guid>
      <description>&lt;p&gt;As recently as just a few years ago, business leaders worried about the unknown factors of moving their core applications to the cloud. Factors like not knowing how a particular application might work in the cloud were deterrents to what would have been optimal cloud migration, according to Tech Target. With new attention to performance metrics and understanding how their applications will scale with cloud services, organizations are finding cloud computing companies more attractive all the time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CMS Wire&lt;/strong&gt; confirms the rapidly increasing adoption of cloud computing services, reporting that enterprises are more willing than ever to stretch beyond the boundaries of their on-premises data center systems to invest in IT infrastructure that supports deployment in cloud environments. This large-scale migration to the cloud—cutting across various industries and a range of business sizes—is set to result in a projected &lt;strong&gt;$266 billion annual spending on IT infrastructure and cloud services by 2021&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz47et8v0hdg80nwa08h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz47et8v0hdg80nwa08h.png" alt="image" width="589" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My Background: Cloud Engineer | AWS Community Builder | AWS Educate Cloud Ambassador | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.&lt;/em&gt; &lt;/p&gt;

&lt;h1&gt;
  
  
  The Advantages of Adopting Cloud Computing Services
&lt;/h1&gt;

&lt;p&gt;As you now seriously consider migrating your data and/or applications to one of the high-caliber cloud computing companies like Amazon Web Services, it may help you to learn just a few of the many ways your company can benefit from the advantages the cloud offers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated Application Deployments and Data Backups -&lt;/strong&gt; Managing the deployment of new code, security patches, data backups, and disaster recovery plans is a crucial task for a software development company and should not be taken lightly. Even if your company is not a software development company, you still have to worry about security patches, data backup, and disaster recovery. Fortunately, these are all things that can be automated and managed more easily on the cloud. Cloud service providers like &lt;em&gt;Amazon Web Services (AWS)&lt;/em&gt; provide the tools and automation necessary to make managing your deployments and data backups something you don't have to worry about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduce and Streamline IT Infrastructure Costs -&lt;/strong&gt; By adopting cloud services, you can say goodbye to the costly need to purchase servers that require expert installation, regular maintenance, and full replacement every few years. You can also reduce the staffing costs you once spent to manage your detail-heavy infrastructure to focus your human resource allowance on tasks like tending to help desk issues and managing your system's internal controls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continually Available Infrastructure -&lt;/strong&gt; The virtualized servers in cloud computing companies do not rely on specific hardware, so they are always available to your users via office computers, remote laptops, or mobile devices. Common hardware issues, such as hard-drive failures, no longer have any direct impact on your users' ability to access data and applications stored in the cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexible, Scalable, and Resilient Services -&lt;/strong&gt; Whether you own a small business and anticipate growth in the coming years, or you plan to further expand your large-scale operations geographically, cloud service providers like Amazon Web Services offer the flexibility, scalability, and resilience that your enterprise requires. No matter what changes you have in mind for your organization, this trio of advantages means that you can grow your company without worrying about impacting your users' needs. You can add more storage space as needed while your users—across town or around the world—can access your database or applications with ease and without interruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trade capital expense for variable expense –&lt;/strong&gt; Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefit from massive economies of scale –&lt;/strong&gt; By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop guessing capacity –&lt;/strong&gt; Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increase speed and agility –&lt;/strong&gt; In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop spending money running and maintaining data centers -&lt;/strong&gt; Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go global in minutes –&lt;/strong&gt; Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Cloud computing continues to grow, and that growth shows no sign of stopping. The &lt;strong&gt;low-cost infrastructure&lt;/strong&gt; for enterprise solutions, combined with high-value services, has kept cloud services consistently in demand.&lt;/p&gt;

&lt;p&gt;The added cloud computing benefits, like mobility and improved business insights, all contribute to pushing cloud tech forward.&lt;/p&gt;

&lt;p&gt;Numbers from Statista show why cloud computing is such a big deal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The size of the cloud computing market has exceeded &lt;strong&gt;$146 billion&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Global cloud data center traffic topped &lt;strong&gt;10.6 zettabytes&lt;/strong&gt; in 2020&lt;/li&gt;
&lt;li&gt;Demand for public cloud services is expected to grow more than 17 percent this year&lt;/li&gt;
&lt;li&gt;Growth is expected among all types of cloud services, with businesses at all levels switching to &lt;strong&gt;cloud-based tools&lt;/strong&gt; and storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvz7rex7fklqxlz3rf3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvz7rex7fklqxlz3rf3n.png" alt="image" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope this guide helps you understand the advantages of adopting cloud computing services; feel free to connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn.&lt;/a&gt;&lt;br&gt;
You can view my badges &lt;a href="https://www.youracclaim.com/users/adit-modi/badges" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;br&gt;
If you are interested in learning more about AWS then follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github.&lt;/a&gt;&lt;br&gt;
If you liked this content, do clap and share it. Thank you.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>beginners</category>
      <category>newbie</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Comparing Managed Kubernetes Services: EKS vs. AKS vs. GKE</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Fri, 28 May 2021 03:27:36 +0000</pubDate>
      <link>https://forem.com/cloudtech/comparing-managed-kubernetes-services-eks-vs-aks-vs-gke-4l8d</link>
      <guid>https://forem.com/cloudtech/comparing-managed-kubernetes-services-eks-vs-aks-vs-gke-4l8d</guid>
<description>&lt;p&gt;The way organizations use Kubernetes has evolved quickly in recent years. All the giant cloud providers offer managed Kubernetes services so that their customers can easily automate the deployment, scaling, and management of their containerized applications.&lt;/p&gt;

&lt;p&gt;But how do these platforms perform? Do they live up to the hype? How well do they integrate? What’s it like maintaining and working with them? That’s why in this article, we reviewed the Managed Kubernetes solutions from the top cloud providers: Amazon Elastic Kubernetes Service (EKS) from Amazon, Google Kubernetes Engine (GKE) from Google Cloud Platform and Azure Kubernetes Service (AKS) from Microsoft Azure.&lt;/p&gt;

&lt;p&gt;It is best to get a deep understanding, look beyond price, and consider factors like scalability, security, and features before making a final decision. We’ve also grouped the different features available for each managed Kubernetes service in this blog. So, let’s dig in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F332uia9o2x2t2nnsikck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F332uia9o2x2t2nnsikck.png" alt="image" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My Background: Cloud Engineer | AWS Community Builder | AWS Educate Cloud Ambassador | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.&lt;/p&gt;

&lt;h1&gt;
  
  
  General Overview
&lt;/h1&gt;

&lt;h1&gt;
  
  
  Amazon Elastic Kubernetes Service
&lt;/h1&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) is a managed service, made generally available in June 2018, for running Kubernetes on AWS. It integrates easily with all the services, apps, and protocols that run in a Kubernetes environment.&lt;/p&gt;

&lt;p&gt;EKS is designed entirely around Kubernetes, so everything you need to manage and deploy containers is included, from seamless integration with third-party tools for logs and performance metrics to advanced scaling capabilities.&lt;/p&gt;

&lt;p&gt;EKS is an excellent option if you already have an established AWS cloud architecture and want to experiment with Kubernetes, or if you plan to migrate workloads across clouds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgqvrvqx26g8uzk9f5rs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgqvrvqx26g8uzk9f5rs.png" alt="image" width="318" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure Kubernetes Service
&lt;/h1&gt;

&lt;p&gt;Azure Kubernetes Service (AKS) is a managed Kubernetes solution that was made available in 2018 by Microsoft. This is a fully managed service that makes containerized apps easy to deploy and manage in the Kubernetes environment.&lt;/p&gt;

&lt;p&gt;AKS runs both on the Azure public cloud and on-premises, which helps deliver mission-critical applications to customers. Azure also has Government Cloud support for government agencies and their partners to run sensitive workloads.&lt;/p&gt;

&lt;p&gt;AKS is worth it when it comes to seamless integration with Microsoft tools, including Visual Studio and Active Directory, and the rest of the Microsoft cloud SaaS services. If you have an established enterprise agreement with Microsoft and no preference for any other architecture, AKS will perfectly suit your requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8e46ckx192g80jk8trt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8e46ckx192g80jk8trt.png" alt="image" width="330" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Google Kubernetes Engine
&lt;/h1&gt;

&lt;p&gt;Kubernetes itself started as an internal Google project, so it makes sense that Google was the first to deliver a managed Kubernetes solution, in 2014, known as Google Kubernetes Engine (GKE).&lt;/p&gt;

&lt;p&gt;Google Kubernetes Engine (GKE), as a managed production-grade container orchestration engine, is the most resilient and well-rounded Kubernetes offering compared to AKS and EKS. It supports the Istio service mesh out of the box, and gVisor for an extra layer of security between running containers. Also, one of the key benefits of GKE is that service upgrades and new versions are available immediately, while other providers take time to release updates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cx28emk64l8xutlb0gf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cx28emk64l8xutlb0gf.png" alt="image" width="295" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  High availability of Clusters
&lt;/h1&gt;

&lt;p&gt;High availability of clusters is crucial if you are running production-critical applications on Kubernetes. It ensures your cluster stays available when something goes wrong: with a highly available cluster, your services are not interrupted even if an entire data center goes down.&lt;/p&gt;

&lt;p&gt;GKE offers excellent support for highly available clusters in two modes: multi-zonal and regional. In multi-zonal mode there is just one master node, but worker nodes can span various zones. In regional mode, the master nodes are likewise spread across all the regional zones to provide superior HA.&lt;/p&gt;

&lt;p&gt;AKS doesn’t provide high availability for its master nodes as of this writing. However, the nodes are deployed in Availability Zones for greater availability.&lt;/p&gt;

&lt;p&gt;EKS likewise provides HA for both worker and master nodes, spread across different availability zones in the same way as GKE’s.&lt;/p&gt;

&lt;h1&gt;
  
  
  SLA
&lt;/h1&gt;

&lt;p&gt;SLA (service level agreement) is a powerful acronym in all industries, and it is no different within the cloud community.&lt;/p&gt;

&lt;p&gt;All cloud platform providers offer different SLAs, which guarantee different uptimes according to their availability zones and region of deployment. For example, Amazon EKS guarantees 99.95% uptime; AKS offers 99.95% when availability zones are enabled and 99.9% when disabled; and GKE splits its managed Kubernetes clusters, offering 99.5% uptime for zonal deployments and 99.95% for regional deployments.&lt;/p&gt;
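To put those percentages in perspective, a quick back-of-the-envelope calculation shows how much monthly downtime each uptime guarantee permits (a rough sketch; it assumes a 30-day month):

```shell
# Allowed downtime per 30-day month (43,200 minutes) for a given uptime SLA
for sla in 0.9995 0.999 0.995; do
  awk -v s="$sla" 'BEGIN { printf "SLA %.2f%% allows %.1f minutes of downtime/month\n", s*100, (1-s)*30*24*60 }'
done
```

For a 99.95% SLA, that works out to roughly 21.6 minutes of permitted downtime per month, while 99.5% allows about 3.6 hours.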

&lt;p&gt;Differences in SLA for Kubernetes control planes also present another area to compare. A Kubernetes control plane is a management infrastructure implemented by the cloud provider to efficiently perform all the essential processes for running your worker nodes.&lt;/p&gt;

&lt;p&gt;It varies by cloud provider. The AKS control plane comes free of cost; you do not pay anything for it. GKE was also free initially, but Google has announced it will soon start charging for the control plane. EKS, by contrast, has charged for its control plane from the beginning.&lt;/p&gt;

&lt;h1&gt;
  
  
  Resource Availability with Node Pools
&lt;/h1&gt;

&lt;p&gt;For different types of workloads, different kinds of machines are allocated to clusters via node pools. For example, storage systems require better storage disks, while workloads like visual data analysis require a better CPU and GPU. With node pools, we can provide the best resources for specific nodes and get optimal performance on those nodes without allocating those resources to every node in the cluster.&lt;/p&gt;

&lt;p&gt;GKE and EKS are leading here, since they both provide node pooling. AKS, on the other hand, does not provide node grouping and recommends separate clusters for different scenarios.&lt;/p&gt;

&lt;h1&gt;
  
  
  Scalability
&lt;/h1&gt;

&lt;p&gt;GKE, AKS, and EKS all give you the ability to scale up nodes very quickly, just by using the UI. In autoscaling, GKE is leading as the most mature solution: a user only needs to specify the desired VM size and the range of nodes in the node pool, and Google Cloud manages the rest. EKS and AKS come after GKE in autoscaling because they need some manual configuration to set up.&lt;/p&gt;

&lt;p&gt;GKE and AKS also provide further customization in their ability to scale, unlike EKS. In GKE and AKS, you can configure a cluster with the Cluster Autoscaler, which will scale your nodes up or down based on the required workload. That is especially helpful when you have to run short-lived processes. EKS can also implement autoscaling policies, but you have to set them up manually, whereas GKE and AKS provide the Cluster Autoscaler by default.&lt;/p&gt;
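As a concrete illustration of how little configuration GKE's autoscaling needs, here is a sketch of creating a cluster with autoscaling enabled via the gcloud CLI. The cluster name and zone are hypothetical placeholders, and the command assumes an authenticated gcloud setup with a project configured:

```shell
# Create a GKE cluster whose default node pool autoscales between 1 and 5 nodes
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5
```

On EKS, an equivalent setup would additionally require deploying and configuring the Kubernetes Cluster Autoscaler yourself.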

&lt;h1&gt;
  
  
  Bare Metal Clusters
&lt;/h1&gt;

&lt;p&gt;As the name suggests, bare metal clusters are deployed on cloud infrastructure with no virtualization layer in between; in other words, no VMs. This technique has various benefits. Infrastructure overhead is reduced drastically, which provides access to more computing and storage resources for application deployments. Access to more computing resources also increases computing power, which helps reduce latency and downtime for a given application request.&lt;/p&gt;

&lt;p&gt;As for GKE vs. AKS vs. EKS on bare-metal performance: EKS allows the use of bare metal nodes, while GKE and AKS do not support them. EKS does not default to bare metal nodes, though, as they are expensive to deploy.&lt;/p&gt;

&lt;h1&gt;
  
  
  Resource Limits
&lt;/h1&gt;

&lt;p&gt;Resource limits are handled differently across providers: limits are per account with EKS, AKS handles limits per subscription, and GKE balances limits on a per-project basis. EKS offers a maximum of 100 nodes per cluster per account, AKS offers 500 nodes per cluster, and GKE offers 5,000 nodes per cluster per project.&lt;/p&gt;

&lt;p&gt;While most limits look clear on paper, some are not. In AKS, for example, the maximum number of nodes you can have depends on whether the node is in a Virtual Machine Scale Set or an Availability Set. In EKS, on the other hand, the maximum number of nodes per cluster varies with the node’s instance type. GKE, by contrast, provides highly available nodes without any such location variables.&lt;/p&gt;

&lt;h1&gt;
  
  
  Resource monitoring
&lt;/h1&gt;

&lt;p&gt;In terms of resource monitoring, all three cloud providers have offerings. GKE uses Stackdriver for resource monitoring within their Kubernetes cluster. It monitors the master and worker nodes, and all Kubernetes components inside the platform along with logging. AKS offers Azure Monitor to evaluate the health of a container and Application Insights to monitor the Kubernetes components. EKS requires the use of third-party tools and recommends Prometheus for resource monitoring.&lt;/p&gt;

&lt;h1&gt;
  
  
  Role-based access control (RBAC)
&lt;/h1&gt;

&lt;p&gt;Role-based access control (RBAC) in Kubernetes allows admins to configure dynamic policies to deny unauthorized access. All three hosted service providers offer RBAC implementations, but they set it up differently. EKS has a slight advantage, with a tighter security policy overall, as it treats RBAC and pod security policies as mandatory, unlike GKE and AKS.&lt;/p&gt;

&lt;h1&gt;
  
  
  Availability as a Cloud Provider
&lt;/h1&gt;

&lt;p&gt;All three providers have their offerings available in most regions globally. Google Cloud has the best global availability of the three, with services in almost every region, followed by Azure, which moved ahead of AWS after launching services in Latin America and Africa.&lt;/p&gt;

&lt;p&gt;Also, EKS is not available in the AWS government cloud; AKS, however, is available in the Azure Government cloud, whereas Google has no government cloud.&lt;/p&gt;

&lt;h1&gt;
  
  
  Secure Image Management
&lt;/h1&gt;

&lt;p&gt;All three cloud providers offer container image registry services that provide secure image management and stable build creation. But the degree of control these providers offer varies.&lt;/p&gt;

&lt;p&gt;The image signing feature of Azure Container Registry (ACR) gives users the ability to check their container images’ authenticity. In the same way, immutable image tags in Elastic Container Registry (ECR) allow users to create secure container builds at all times.&lt;/p&gt;

&lt;p&gt;Lastly, Binary Authorization in GKE prevents deployment of images conflicting with the set policies and triggers the automatic lock-down of those risky images.&lt;/p&gt;

&lt;p&gt;Elastic Container Registry and Azure Container Registry also support resource-based permissions for repository-level access control to prevent unauthorized access, which Google Container Registry does not.&lt;/p&gt;

&lt;h1&gt;
  
  
  Pricing
&lt;/h1&gt;

&lt;p&gt;Each vendor has its own features, limitations, and pricing plans. GKE and AKS provide management and deployment of clusters, including the master machines, at no cost; you are charged only for the resources you use, such as bandwidth, storage, and virtual machines. In comparison, Amazon EKS costs $0.10 per hour for each deployed cluster, on top of the instances and services you are using.&lt;/p&gt;
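Given the $0.10 per hour control-plane charge mentioned above, a rough back-of-the-envelope estimate of the monthly cost per EKS cluster (before instances and other services, assuming roughly 730 hours in a month) is:

```shell
# Approximate monthly EKS control-plane cost at $0.10/hour, ~730 hours/month
awk 'BEGIN { printf "USD %.2f per cluster per month\n", 0.10 * 730 }'
```

which comes to about $73 per cluster per month for the control plane alone.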

&lt;p&gt;Concerning overall price, here are some rough figures to help you estimate costs when choosing a Kubernetes platform. This cost comparison assumes you have 20 worker nodes, each with 80 CPUs and 320 GB of RAM. Billable hours: 14,600 hours per month.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfmqx1ug9p3we53kqr4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfmqx1ug9p3we53kqr4s.png" alt="Alt Text" width="446" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Words
&lt;/h1&gt;

&lt;p&gt;AWS, Microsoft Azure, and Google Cloud Platform have all claimed to offer the best managed Kubernetes solution in recent years. The choice is yours: whether you want the advantage of Google’s most mature and budget-friendly product, want to leverage your Microsoft Enterprise Agreement to get better pricing and support on Azure, or want to make your transition to the cloud easier with EKS on Amazon.&lt;/p&gt;

&lt;p&gt;To find out, it’s always important to compare the storage, network, and compute features of each provider before you decide on a managed Kubernetes service. It is also critical to compare costs, since services can vary between regions and differ for each configuration.&lt;/p&gt;

&lt;p&gt;Thoroughly testing each service’s features and capabilities in your environment will ultimately provide real pricing and performance metrics, which will help you determine the Kubernetes offering that best suits your business needs.&lt;/p&gt;

&lt;p&gt;Hope this guide helps you understand the different managed Kubernetes services (EKS vs. AKS vs. GKE); feel free to connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn.&lt;/a&gt;&lt;br&gt;
You can view my badges &lt;a href="https://www.youracclaim.com/users/adit-modi/badges" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;br&gt;
If you are interested in learning more about AWS then follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github.&lt;/a&gt;&lt;br&gt;
If you liked this content, do clap and share it. Thank you.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Database Backup Scripts For MongoDB with Amazon S3</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Wed, 26 May 2021 07:23:11 +0000</pubDate>
      <link>https://forem.com/cloudtech/database-backup-scripts-for-mongodb-with-amazon-s3-d08</link>
      <guid>https://forem.com/cloudtech/database-backup-scripts-for-mongodb-with-amazon-s3-d08</guid>
      <description>&lt;p&gt;This article will provide you with database backup scripts that not only allow you to create database backups, but also upload the backup dumps to Amazon S3 and automate the process daily.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Why do we need a database backup?&lt;/li&gt;
&lt;li&gt;Why Amazon S3 for backup?&lt;/li&gt;
&lt;li&gt;What is Cron?&lt;/li&gt;
&lt;li&gt;What is chmod?&lt;/li&gt;
&lt;li&gt;Database backup script for MongoDB and dumping to Amazon S3&lt;/li&gt;
&lt;li&gt;Generate a shell script which will dump the MongoDB database&lt;/li&gt;
&lt;li&gt;Create a shell script which syncs the backups with Amazon S3&lt;/li&gt;
&lt;li&gt;Creating the folder for the database dumps&lt;/li&gt;
&lt;li&gt;How to configure the AWS CLI&lt;/li&gt;
&lt;li&gt;How to set up AWS key &amp;amp; Secret&lt;/li&gt;
&lt;li&gt;How to set up Cron (to automate the process)&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn03pa0bkr2iym6n9akeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn03pa0bkr2iym6n9akeg.png" alt="image" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My Background: I am a Cloud, DevOps &amp;amp; Big Data enthusiast | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need a database backup?
&lt;/h2&gt;

&lt;p&gt;One might ask: why is a backup necessary for my database? The answer is simple: a backup creates a copy of your physical, logical, and operational data, which you can store in a safe place such as Amazon S3. This copy comes into use if the running database gets corrupted. A database backup can include files like control files, datafiles, and archived redo logs.&lt;br&gt;
With the scripts below, you can take MongoDB backups off your to-do list.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Amazon S3 for backup?
&lt;/h2&gt;

&lt;p&gt;For this tutorial, we have chosen Amazon S3, as it is a very common choice. You can do the same thing with another cloud storage provider; the instructions won't differ much as long as the provider is S3-compatible.&lt;br&gt;
Below we define some lesser-known terms used in this article:&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cron?
&lt;/h2&gt;

&lt;p&gt;Cron is a software utility that provides time-based job scheduling on Unix-like operating systems. Developers use Cron to set up software environments, scheduling commands or shell scripts to run at chosen times: daily, once a week, or at any desired interval.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Chmod?
&lt;/h2&gt;

&lt;p&gt;chmod, short for 'change mode', enables an admin to set rules for file handling. In other words, with the help of the "chmod" system call, an administrator can change the access permissions of file system objects.&lt;/p&gt;
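A minimal demonstration on any Unix-like system: create a temporary file, grant it execute permission with chmod, and verify that the execute bit is now set.

```shell
# Create a scratch file, make it executable, and confirm with the -x test
tmpfile=$(mktemp)
chmod +x "$tmpfile"
if [ -x "$tmpfile" ]; then
  echo "executable"   # prints: executable
fi
rm -f "$tmpfile"
```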

&lt;h2&gt;
  
  
  Database Backup Script for MongoDB and Dumping to Amazon S3
&lt;/h2&gt;

&lt;p&gt;You can automate creating a backup and storing it in Amazon S3 within a few minutes. The bullets below summarize what you are going to learn in this part of the article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a script that automates the MongoDB backup directory creation&lt;/li&gt;
&lt;li&gt;Upload/sync the backups with Amazon S3&lt;/li&gt;
&lt;li&gt;Have Cron run these commands every day (to back up)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Generate a shell script which will dump the MongoDB database
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;cd ~&lt;br&gt;
mkdir scripts&lt;br&gt;
cd scripts&lt;br&gt;
nano db_backup.sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#!/bin/bash&lt;br&gt;
DIR=$(date +%d-%m-%y)&lt;br&gt;
DEST=~/db_backups/$DIR&lt;br&gt;
mkdir -p $DEST&lt;br&gt;
mongodump -h localhost:27017 -d my_db_name -o $DEST&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now chmod the script to allow it to be executed:&lt;br&gt;
&lt;code&gt;chmod +x ~/scripts/db_backup.sh&lt;/code&gt;&lt;/p&gt;
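The backup script names each day's dump directory with date +%d-%m-%y. A quick way to see (and sanity-check) the directory name it would generate for today:

```shell
# Print the day-month-year directory name the backup script would use, e.g. 26-05-21
DIR=$(date +%d-%m-%y)
echo "$DIR"
```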

&lt;h2&gt;
  
  
  Create a shell script which syncs the backups with Amazon S3
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;nano db_sync.sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Copy and paste the script below into it:&lt;br&gt;
&lt;code&gt;#!/bin/bash&lt;br&gt;
/usr/local/bin/aws s3 sync ~/db_backups s3://my-bucket-name&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now chmod the script to allow it to be executed:&lt;br&gt;
&lt;code&gt;chmod +x ~/scripts/db_sync.sh&lt;/code&gt;&lt;/p&gt;
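Before letting Cron run the sync unattended, it can help to preview what would be uploaded. The AWS CLI supports a dry-run mode for this (the bucket name is the same placeholder used in the script above; this assumes the AWS CLI is installed and configured):

```shell
# Show which files would be uploaded, without actually transferring anything
aws s3 sync ~/db_backups s3://my-bucket-name --dryrun
```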

&lt;h2&gt;
  
  
  Creating the folder for the database dumps
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;cd ~&lt;br&gt;
mkdir db_backups&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to configure the AWS CLI
&lt;/h2&gt;

&lt;p&gt;Before installing the AWS CLI you need to install python-pip. Type the following commands:&lt;br&gt;
&lt;code&gt;apt-get update&lt;br&gt;
apt-get -y install python-pip&lt;br&gt;
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install the AWS CLI
&lt;/h2&gt;

&lt;p&gt;Type the following command:&lt;br&gt;
&lt;code&gt;pip install awscli&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to set up AWS key &amp;amp; Secret
&lt;/h2&gt;

&lt;p&gt;Configuration and credential file settings:&lt;br&gt;
&lt;code&gt;cd ~&lt;br&gt;
mkdir .aws&lt;br&gt;
nano ~/.aws/config&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Paste in key_id and secret_access_key as shown below&lt;br&gt;
&lt;code&gt;[default]&lt;br&gt;
aws_access_key_id=AKIAIOSFODNN7EXAMPLE&lt;br&gt;
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to set up Cron (to automate the process)
&lt;/h2&gt;

&lt;p&gt;Open the crontab editor:&lt;br&gt;
&lt;code&gt;crontab -e&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Paste the commands below at the bottom to automate the process
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;0 0 * * * ~/scripts/db_backup.sh # take a backup every midnight&lt;br&gt;
0 2 * * * ~/scripts/db_sync.sh # upload the backup at 2am&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This way the backup script will run and also sync with Amazon S3 daily.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By using these scripts you achieve three goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating the database backup via a shell script&lt;/li&gt;
&lt;li&gt;Uploading the dump to Amazon S3&lt;/li&gt;
&lt;li&gt;Automating the process using Cron&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxw7mdjrxp7j7obqyj4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxw7mdjrxp7j7obqyj4p.png" alt="image" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Hope this guide helps you understand how to use shell scripts to take daily backups of your database and push them to S3; feel free to connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn.&lt;/a&gt;&lt;br&gt;
You can view my badges &lt;a href="https://www.youracclaim.com/users/adit-modi/badges" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;br&gt;
If you are interested in learning more about AWS Services then follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;github.&lt;/a&gt;&lt;br&gt;
If you liked this content then do clap and share it . Thank You .&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mongodb</category>
      <category>bash</category>
    </item>
    <item>
      <title>15 Years of Amazon S3 with 'Pi Week' Recap | Amazon S3 Object Lambda</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Tue, 04 May 2021 10:44:43 +0000</pubDate>
      <link>https://forem.com/cloudtech/15-years-of-amazon-s3-with-pi-week-recap-amazon-s3-object-lambda-2c64</link>
      <guid>https://forem.com/cloudtech/15-years-of-amazon-s3-with-pi-week-recap-amazon-s3-object-lambda-2c64</guid>
      <description>&lt;p&gt;Amazon S3 was launched 15 years ago on Pi Day, March 14, 2006, and created the first generally available AWS service. Over that time, data storage and usage has exploded, and the world has never been the same.&lt;/p&gt;

&lt;p&gt;Amazon S3 has virtually unlimited scalability, and unmatched availability, durability, security, and performance. Customers of all sizes and industries can use S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, and big data analytics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyi4a4eewnuqctydvn3vv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyi4a4eewnuqctydvn3vv.png" alt="image" width="300" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My background: I am a Cloud, DevOps &amp;amp; Big Data enthusiast | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.&lt;/p&gt;

&lt;p&gt;When you store data in Amazon Simple Storage Service (S3), you can easily share it for use by multiple applications. However, each application has its own requirements and may need a different view of the data. For example, a dataset created by an e-commerce application may include personally identifiable information (PII) that is not needed when the same data is processed for analytics and should be redacted. On the other hand, if the same dataset is used for a marketing campaign, you may need to enrich the data with additional details, such as information from the customer loyalty database.&lt;/p&gt;

&lt;p&gt;To provide different views of data to multiple applications, there are currently two options. You either create, store, and maintain additional derivative copies of the data, so that each application has its own custom dataset, or you build and manage infrastructure as a proxy layer in front of S3 to intercept and process data as it is requested. Both options add complexity and costs, so the S3 team decided to build a better solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v5g9yctrz1uaoong0gp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v5g9yctrz1uaoong0gp.png" alt="image" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Watch the on-demand AWS Pi Week sessions: a four-day virtual event hosted March 15-18, 2021 on the AWS Twitch channel, celebrating the 15th birthday of the AWS Cloud. The event included talks from AWS leaders and experts, who looked back at the history of AWS and the key decisions involved in building and evolving Amazon S3. They also dived into how you can leverage S3 to control costs and continuously optimize your spend while building modern, scalable applications.&lt;/p&gt;

&lt;p&gt;The on-demand event is ideal for anyone eager to learn more about:&lt;/p&gt;

&lt;p&gt;How S3 and other AWS services are architected for availability and durability inside AWS Regions and Availability Zones&lt;br&gt;
How S3's strong consistency model works to support many different workloads&lt;br&gt;
The history of and best practices for S3 data security&lt;br&gt;
How AWS architects evolvable services that provide new features and greater scalability with no disruption to customers&lt;br&gt;
Detailed ways to move data into and out of the AWS Cloud &lt;/p&gt;

&lt;p&gt;&lt;a href="https://pages.awscloud.com/pi-week-2021.html" rel="noopener noreferrer"&gt;pi-week&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On-demand AWS Pi Week Twitch video streams&lt;/p&gt;

&lt;p&gt;Day 1 - Amazon S3 origins - foundations of cloud infrastructure&lt;br&gt;
&lt;a href="https://www.twitch.tv/videos/950331443" rel="noopener noreferrer"&gt;Video 1&lt;/a&gt; | &lt;a href="https://www.twitch.tv/videos/950384494" rel="noopener noreferrer"&gt;Video 2&lt;/a&gt;&lt;br&gt;
Day 2 - Building data lakes and enabling data movement&lt;br&gt;
&lt;a href="https://www.twitch.tv/videos/951537246?filter=archives&amp;amp;sort=time" rel="noopener noreferrer"&gt;Video 1&lt;/a&gt; | &lt;a href="https://www.twitch.tv/videos/951772985?filter=archives&amp;amp;sort=time" rel="noopener noreferrer"&gt;Video 2&lt;/a&gt;&lt;br&gt;
Day 3 - Amazon S3 security framework and best practices&lt;br&gt;
&lt;iframe src="https://player.twitch.tv/?video=952756254&amp;amp;parent=dev.to&amp;amp;autoplay=false" height="399" width="710"&gt;
&lt;/iframe&gt;
&lt;br&gt;
Day 4 - Amazon S3 and the foundations of a serverless infrastructure&lt;br&gt;
&lt;iframe src="https://player.twitch.tv/?video=953961080&amp;amp;parent=dev.to&amp;amp;autoplay=false" height="399" width="710"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  What's New
&lt;/h1&gt;

&lt;p&gt;S3 Object Lambda was announced by AWS, a new capability that allows you to add your own code to process data retrieved from S3 before returning it to an application. S3 Object Lambda works with your existing applications and uses AWS Lambda functions to automatically process and transform your data as it is being retrieved from S3. The Lambda function is invoked inline with a standard S3 GET request, so you don’t need to change your application code.&lt;/p&gt;

&lt;p&gt;In this way, you can easily present multiple views from the same dataset, and you can update the Lambda functions to modify these views at any time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6fwlcikcl8nfqbsbw9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6fwlcikcl8nfqbsbw9v.png" alt="image" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are many use cases that can be simplified by this approach, for example:&lt;/p&gt;

&lt;p&gt;Redacting personally identifiable information for analytics or non-production environments.&lt;br&gt;
Converting across data formats, such as converting XML to JSON.&lt;br&gt;
Augmenting data with information from other services or databases.&lt;br&gt;
Compressing or decompressing files as they are being downloaded.&lt;br&gt;
Resizing and watermarking images on the fly using caller-specific details, such as the user who requested the object.&lt;br&gt;
Implementing custom authorization rules to access data.&lt;/p&gt;

&lt;p&gt;You can start using S3 Object Lambda with a few simple steps:&lt;/p&gt;

&lt;p&gt;Create a Lambda Function to transform data for your use case.&lt;br&gt;
Create an S3 Object Lambda Access Point from the S3 Management Console.&lt;br&gt;
Select the Lambda function that you created above.&lt;br&gt;
Provide a supporting S3 Access Point to give S3 Object Lambda access to the original object.&lt;br&gt;
Update your application configuration to use the new S3 Object Lambda Access Point to retrieve data from S3.&lt;/p&gt;
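&lt;p&gt;The steps above can also be sketched with the AWS CLI. The account ID, names, region, and configuration file below are placeholders, and the commands are only printed (a dry run) rather than executed; check the S3 Object Lambda documentation for the exact configuration shape:&lt;/p&gt;

```shell
# Dry-run sketch of the console steps; every identifier is a placeholder.
ACCOUNT_ID="111122223333"
BUCKET="my-bucket"
OLAP_ARN="arn:aws:s3-object-lambda:us-east-1:$ACCOUNT_ID:accesspoint/my-olap"

# Supporting S3 Access Point over the original bucket.
echo "aws s3control create-access-point --account-id $ACCOUNT_ID --name my-ap --bucket $BUCKET"

# Object Lambda Access Point wired to the transforming Lambda function;
# olap-config.json would name the function ARN and the supporting access point.
echo "aws s3control create-access-point-for-object-lambda --account-id $ACCOUNT_ID --name my-olap --configuration file://olap-config.json"

# Retrieve a transformed object. The high-level 'aws s3 cp' is not
# supported here, so the low-level s3api command is used.
echo "aws s3api get-object --bucket $OLAP_ARN --key sample.txt transformed.txt"
```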

&lt;h2&gt;
  
  
  Availability and Pricing
&lt;/h2&gt;

&lt;p&gt;S3 Object Lambda is available today in all AWS Regions with the exception of the Asia Pacific (Osaka), AWS GovCloud (US-East), AWS GovCloud (US-West), China (Beijing), and China (Ningxia) Regions. You can use S3 Object Lambda with the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. Currently, the AWS CLI high-level S3 commands, such as &lt;code&gt;aws s3 cp&lt;/code&gt;, don’t support objects from S3 Object Lambda Access Points, but you can use the low-level S3 API commands, such as &lt;code&gt;aws s3api get-object&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;S3 Object Lambda&lt;/strong&gt;, you pay for the AWS Lambda compute and request charges required to process the data, and for the data S3 Object Lambda returns to your application. You also pay for the S3 requests that are invoked by your Lambda function. For more pricing information, please see the Amazon S3 pricing page.&lt;/p&gt;

&lt;p&gt;This new capability makes it much easier to share and convert data across multiple applications. &lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon S3 Glacier announces a 40% price reduction for PUT and Lifecycle requests
&lt;/h2&gt;

&lt;p&gt;Amazon S3 is reducing the cost to move data to Amazon S3 Glacier by lowering PUT and Lifecycle request charges by 40% for all AWS Regions. You can use the S3 PUT API to directly store compliance and backup data in S3 Glacier that does not require immediate access. You can also use S3 Lifecycle policies to move data from S3 Standard, S3 Standard-Infrequent Access, or S3 One Zone-Infrequent Access to S3 Glacier to save on storage costs when data becomes rarely accessed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3 Glacier&lt;/strong&gt; is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup. S3 Glacier provides a low-cost option for archiving data accessed once per quarter that needs to be accessible within minutes to a few hours.&lt;/p&gt;

&lt;p&gt;In addition to being durable and secure, the S3 Glacier storage class is now even more cost-effective than before. Effective March 1, 2021, AWS is lowering the charges for PUT and Lifecycle requests to S3 Glacier by 40% for all AWS Regions. This includes the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. To learn more, see the S3 pricing page, and get started in the S3 console.&lt;/p&gt;

&lt;p&gt;I hope this guide helps you understand all the new Amazon S3 features launched during Pi Week. I know this recap is a little late, but I wanted to share it anyway. Feel free to contact me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn.&lt;/a&gt;&lt;br&gt;
You can view my badges &lt;a href="https://www.youracclaim.com/users/adit-modi/badges" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;br&gt;
If you are interested in learning more about AWS, follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;GitHub.&lt;/a&gt;&lt;br&gt;
If you liked this content, please share it. Thank you.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>beginners</category>
      <category>aws</category>
    </item>
    <item>
      <title>Introduction to Containers with AWS</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Fri, 16 Apr 2021 05:01:37 +0000</pubDate>
      <link>https://forem.com/cloudtech/introduction-to-containers-with-aws-og4</link>
      <guid>https://forem.com/cloudtech/introduction-to-containers-with-aws-og4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Containerization&lt;/strong&gt;—a virtualization method used to deploy and run distributed applications without the need to launch an entire virtual machine for each application—is changing the way businesses develop and deploy applications in cloud environments. Containers decompose applications into small, manageable packages containing everything the application needs to run: code, core data, configuration files, interfaces, and dependencies.&lt;/p&gt;

&lt;p&gt;The container approach allows developers to focus on applications and not be concerned with deployment and infrastructure management. From a development perspective, there are numerous benefits to the container approach.&lt;/p&gt;

&lt;p&gt;Accelerate the development pipeline, including testing and debugging.&lt;br&gt;
Facilitate &lt;strong&gt;continuous integration (CI)&lt;/strong&gt; and &lt;strong&gt;continuous deployment (CD)&lt;/strong&gt; workflows, automatically rebuilding whenever a new code revision is committed.&lt;/p&gt;

&lt;p&gt;Containers run locally on a desktop or laptop and are easily uploaded directly to the cloud.&lt;br&gt;
Deliver consistent results when moving code from development to test to production systems.&lt;/p&gt;

&lt;p&gt;No need to rewrite code for each OS and cloud platform, making it easy to move containers from one cloud provider to another.&lt;/p&gt;

&lt;p&gt;The advantages of containers extend beyond the development cycle. Containers utilize compute resources more efficiently by eliminating the need for a hypervisor. &lt;/p&gt;

&lt;p&gt;They simply share OS kernel without impacting the performance of applications running inside the container. With a smaller footprint, more containers can run on a single host, resulting in better utilization of compute resources and lower costs. &lt;/p&gt;

&lt;p&gt;Additionally, containers can be configured with only the desired binaries and components, eliminating potential vulnerabilities that might be found in a full-fledged OS. Containers running on &lt;strong&gt;Amazon EC2 Spot Instances can obtain up to a 90% discount compared to On-Demand prices&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00d9zw36vvqfaa57xap3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00d9zw36vvqfaa57xap3.png" alt="image" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My background: I am a Cloud, DevOps &amp;amp; Big Data enthusiast | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Introduction to AWS is a series of articles, each providing a basic introduction to a different AWS topic or category along with a detailed guide on how to work with it. The series aims to be a getting-started guide for the different AWS topics and categories.&lt;/p&gt;

&lt;p&gt;There are a bunch of different ways to run your &lt;strong&gt;containerized workloads on AWS.&lt;/strong&gt; This blog post compares the three most important ways to run Docker on AWS:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Amazon Elastic Container Service (ECS) with AWS Fargate
&lt;/h4&gt;

&lt;h4&gt;
  
  
  2. Amazon Elastic Container Service for Kubernetes (EKS)
&lt;/h4&gt;

&lt;h4&gt;
  
  
  3. AWS Elastic Beanstalk (EB) with Single Container Docker
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzo3tz8l9g0i3ukhp1vp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzo3tz8l9g0i3ukhp1vp.png" alt="Alt Text" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ECS with Fargate
&lt;/h2&gt;

&lt;p&gt;First, let’s have a look at ECS, a &lt;strong&gt;fully-managed container orchestration service.&lt;/strong&gt; ECS is a proprietary but free of charge solution offered by AWS. It is important to mention that ECS provides a high level of integration with the AWS infrastructure. For example, containers are first-class citizens of the VPC, with their own network interfaces (ENIs) and security groups.&lt;/p&gt;

&lt;p&gt;ECS offers &lt;strong&gt;service discovery&lt;/strong&gt; via a load balancer or DNS (Cloud Map).&lt;/p&gt;

&lt;p&gt;Aside from that, ECS with Fargate is the only option to run Docker containers on AWS without managing EC2 instances. Fargate is the compute engine for ECS. All the heavy lifting of &lt;strong&gt;scaling&lt;/strong&gt; the number of EC2 instances and containers, rolling out updates to EC2 instances without affecting containers, and much more is gone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ECS is free of charge.&lt;/strong&gt; Fargate is billed per second based on CPU and memory allocated for your containers. A container with 1 vCPU and 4 GB is about USD 30 per month.&lt;/p&gt;

&lt;p&gt;Keep in mind the following limitations of Fargate:&lt;/p&gt;

&lt;p&gt;General-purpose compute capacity only; Fargate does not support GPU or CPU/memory-optimized configurations at the moment.&lt;br&gt;
Persistent volumes are not supported out of the box (e.g., via a Docker volume driver).&lt;br&gt;
No discounts for reserved capacity are available.&lt;/p&gt;
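&lt;p&gt;To make the sizing concrete: the 1 vCPU / 4 GB container from the pricing example maps to 1024 CPU units and 4096 MiB of memory in a Fargate task definition. A minimal sketch; the family name, region, and container definitions file are hypothetical placeholders, and the command is only printed (a dry run):&lt;/p&gt;

```shell
# Dry-run sketch: register a Fargate task sized at 1 vCPU / 4 GB.
CPU=1024      # 1 vCPU, expressed in CPU units
MEMORY=4096   # 4 GB, expressed in MiB
echo "aws ecs register-task-definition --family hello-fargate --requires-compatibilities FARGATE --network-mode awsvpc --cpu $CPU --memory $MEMORY --container-definitions file://containers.json"
```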

&lt;h2&gt;
  
  
  EKS (Kubernetes)
&lt;/h2&gt;

&lt;p&gt;The 2nd option to run Docker containers on AWS is Kubernetes (K8s). In summary, K8s is an open-source container orchestration solution. AWS offers the K8s master layer as a service. The master layer is responsible for storing the state of the container cluster and deciding on which machines new containers should be placed. On top of that, you are responsible for managing a fleet of EC2 instances used to run the containers.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;main selling point for K8s:&lt;/strong&gt; it is open-source and runs on AWS, Azure, Google Cloud, on-premises, or even on your local machine. The &lt;strong&gt;resulting disadvantage&lt;/strong&gt; is that Kubernetes is not that well integrated with the AWS infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes is designed for microservice architectures.&lt;/strong&gt; For example, a built-in service discovery allows containers to talk to each other easily by using a local proxy.&lt;/p&gt;

&lt;p&gt;EKS is about USD 144 per month for the master layer. In addition, you pay for the EC2 instances powering your containers. A t3.medium instance provides 2 CPUs with 4 GiB of memory and costs around USD 30 per month.&lt;/p&gt;

&lt;p&gt;You should not underestimate the complexity of operating EKS and EC2. For example, the way EKS integrates with the VPC comes with a few unexpected limitations (see EKS vs. ECS: orchestrating containers on AWS for more details).&lt;/p&gt;

&lt;h2&gt;
  
  
  Elastic Beanstalk
&lt;/h2&gt;

&lt;p&gt;Another option to run Docker containers on AWS is Elastic Beanstalk, often described as the PaaS (Platform-as-a-Service) offering from AWS. Elastic Beanstalk is very easy to use, and it offers a number of environments for deploying your web application. One of them is called &lt;strong&gt;Single Container Docker.&lt;/strong&gt; This environment deploys a single Docker container to one or multiple EC2 instances.&lt;/p&gt;

&lt;p&gt;Elastic Beanstalk is not only deploying your application; it is also creating the needed infrastructure consisting of a database, a load balancer, and EC2 instances. Important to note: Elastic Beanstalk creates EC2 instances automatically, but you are still responsible for these virtual machines; they are not fully managed by AWS.&lt;/p&gt;

&lt;p&gt;Elastic Beanstalk is a proprietary but free of charge solution offered by AWS. You are only paying for the underlying infrastructure. For example, a t3.medium instance provides 2 CPUs with 4 GiB of memory and costs around USD 30 per month.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0cb7pfjop11c8ywtvm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0cb7pfjop11c8ywtvm5.png" alt="image" width="564" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  When Is Elastic Beanstalk The Best Method For Managing Docker Containers On AWS?
&lt;/h2&gt;

&lt;p&gt;For businesses new to AWS or to the containerization concept, just getting started with Docker, or developing new applications, Elastic Beanstalk may be the best approach to support Docker containers. Elastic Beanstalk offers a simple interface, allows Docker images to be pulled from public or private registries, and coordinates the deployment of multiple Docker containers to Amazon ECS clusters. Elastic Beanstalk gives you less control over application scaling and capacity, but it makes deploying Docker containers on AWS straightforward.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Is Elastic Container Service The Best Method For Managing Docker Containers On AWS?
&lt;/h2&gt;

&lt;p&gt;In comparison to Elastic Beanstalk, Elastic Container Service provides greater control over application architectures and orchestration of Docker containers. You specify the size and number of cluster nodes and determine if auto-scaling should be used.&lt;/p&gt;

&lt;p&gt;Elastic Container Service uses tasks to launch Docker containers. A task includes the container definition, providing the ability to group containers in sets that launch together then terminate simultaneously. ECS provides significantly greater flexibility and customization in scheduling and CPU and memory utilization. In addition, ECS does not require special integration efforts to work with many other AWS services.&lt;/p&gt;

&lt;p&gt;Elastic Container Service is appropriate when you need to run microservices that require integration with other AWS services, or use custom or managed schedulers to run batch workloads on EC2 On-Demand, Reserved, or Spot Instances. Businesses wanting to containerize legacy code and migrate it to AWS without needing to rewrite code should take the ECS option. Applications or workflows comprised of loosely coupled, distributed services running on various platforms or accessing widely-distributed data sources can also benefit by using Elastic Container Service.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Is Elastic Kubernetes Service The Best Method For Managing Docker Containers On AWS?
&lt;/h2&gt;

&lt;p&gt;If you want the flexibility to integrate externally with the open-source Kubernetes community, spending the additional effort on setting up EKS may be the better option. Kubernetes is preferred for legacy workloads. It allows you to build a dev/test/production environment on-premises, and then move it to the cloud if and when required. Kubernetes is best known for its true enterprise-level cluster and container management. It is extremely valuable when your containerized workloads begin to scale. If you are already running workloads on Kubernetes, EKS is going to be a familiar and simple route to moving to an AWS environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nu6x8k14gl8chm8mgc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nu6x8k14gl8chm8mgc8.png" alt="image" width="638" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope this guide helps you understand containers on AWS. Feel free to connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn.&lt;/a&gt;&lt;br&gt;
You can view my badges &lt;a href="https://www.youracclaim.com/users/adit-modi/badges" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;br&gt;
If you are interested in learning more about AWS, follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;GitHub.&lt;/a&gt;&lt;br&gt;
If you liked this content, please share it. Thank you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying a Containerized App in Google GKE</title>
      <dc:creator>Adit Modi</dc:creator>
      <pubDate>Mon, 12 Apr 2021 06:14:04 +0000</pubDate>
      <link>https://forem.com/cloudtech/deploying-a-containerized-app-in-google-gke-1kl9</link>
      <guid>https://forem.com/cloudtech/deploying-a-containerized-app-in-google-gke-1kl9</guid>
      <description>&lt;p&gt;Because of its popularity and widespread adoption, Kubernetes has become the industry’s de facto for deploying a containerized app. Google Kubernetes Engine (GKE) is Google Cloud Products’ (GCP) managed Kubernetes service. It provides out-of-the-box features such as auto-scaling nodes, high-availability clusters, and automatic upgrades of masters and nodes. In addition, it offers the most convenient cluster setup workflow and the best overall developer experience.&lt;/p&gt;

&lt;p&gt;This article will trace the life cycle of a containerized application in its most mature environment, GKE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhuslfpf7oiuo9ixcvej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhuslfpf7oiuo9ixcvej.png" alt="image" width="800" height="375"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;My background: I am a Cloud, DevOps &amp;amp; Big Data enthusiast | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.&lt;/p&gt;

&lt;h2&gt;
  
  
  State-of-the-art Kubernetes in the Cloud
&lt;/h2&gt;

&lt;p&gt;Each of the major cloud providers has a managed Kubernetes service (also known as Kubernetes-as-a-Service) which creates an isolated and supervised environment for Kubernetes clusters. These services provide the Kubernetes API setup, measure basic node health, autoscale or upgrade when needed, and maintain some security best practices.&lt;/p&gt;

&lt;p&gt;There are more than ten listed providers in the official Kubernetes documentation, including AWS, GCP, and Azure. If you want a deep dive comparison of Google Kubernetes Engine, Azure Kubernetes Service, and Amazon Elastic Container Service for Kubernetes, check out my &lt;a href="https://dev.to/aditmodi"&gt;other blogs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Google Cloud Platform provides the most reliable and easily managed Kubernetes clusters. As Google created Kubernetes and donated it to the open source community, we’ll be examining its service in this article. &lt;/p&gt;

&lt;p&gt;Below, we’ll start our process by creating some containers for a microservice application in Google Cloud Platform. You’ll need a GCP account to continue. Keep in mind that, if you’re a new customer, GCP’s free tier gives you $300 credit.&lt;/p&gt;

&lt;h2&gt;
  
  
  CLI Environment Setup
&lt;/h2&gt;

&lt;p&gt;The first step in deploying a containerized app is setting up a CLI environment in GCP. We will use Cloud Shell to do this, since it already has installed and configured gcloud, Docker, and kubectl. Cloud Shell will enable you to quickly start using the CLI tools with authentication and configuration in place. In addition, the CLI already runs inside the GCP console where you can access your resources and check their statuses.&lt;/p&gt;

&lt;p&gt;Open the Google Cloud Console, and click “Activate Cloud Shell” on the top of the navigation bar:&lt;/p&gt;

&lt;p&gt;A terminal with a command line prompt will open as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm1er2fgqqh7ptw22vvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm1er2fgqqh7ptw22vvt.png" alt="image" width="800" height="67"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you have the environment set up, you are ready to create containers from microservices and prepare them for release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservice to Containers
&lt;/h2&gt;

&lt;p&gt;The life cycle of a containerized app starts with source code. In this tutorial, we’ll use an open source web server named hello-app, available on GitHub. Because Kubernetes is a container management system, we’ll need to create container images for our applications. The following steps will guide you through creating a Docker container for hello-app and uploading the container image to the registry. When you deploy the application, Kubernetes will download and run it from the registry.&lt;/p&gt;

&lt;p&gt;In Cloud Shell, download the source code of the sample application:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone &lt;a href="https://github.com/GoogleCloudPlatform/kubernetes-engine-samples" rel="noopener noreferrer"&gt;https://github.com/GoogleCloudPlatform/kubernetes-engine-samples&lt;/a&gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd kubernetes-engine-samples/hello-app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, build the container image with a tag that includes your GCP project ID:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t gcr.io/${DEVSHELL_PROJECT_ID}/hello-app:v1 .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;GCP has a managed Container Registry service that can push and pull Docker container images. If you are using the registry for the first time, enable it in the API Library.&lt;/p&gt;

&lt;p&gt;Authenticate to the registry with the following command, and then push the image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gcloud auth configure-docker&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker push gcr.io/${DEVSHELL_PROJECT_ID}/hello-app:v1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The last step packaged and uploaded the container image to the registry. Now, we are ready to create a Kubernetes cluster and distribute the hello-app application over the cloud.&lt;/p&gt;
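&lt;p&gt;As a quick sanity check, you can list the tags stored in the registry to confirm the push succeeded. A sketch; it only prints the command (a dry run), and falls back to a placeholder project ID outside Cloud Shell:&lt;/p&gt;

```shell
# Dry-run sketch: confirm hello-app:v1 landed in Container Registry.
# DEVSHELL_PROJECT_ID is set automatically in Cloud Shell; "my-project"
# is a placeholder fallback so the sketch runs anywhere.
PROJECT_ID="${DEVSHELL_PROJECT_ID:-my-project}"
echo "gcloud container images list-tags gcr.io/$PROJECT_ID/hello-app"
```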

&lt;h2&gt;
  
  
  Creating a Kubernetes Cluster in GKE
&lt;/h2&gt;

&lt;p&gt;Kubernetes originated in Google, and its cloud provider, GCP, has the most convenient and easy setup for creating new clusters. With a couple of clicks, you can create a managed Kubernetes cluster. Google Cloud Platform takes care of the cluster’s health, upgrades, and security. Open the Kubernetes Engine in the GCP control panel, and click on “Create cluster”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak87njqgcsme626313vo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak87njqgcsme626313vo.png" alt="image" width="800" height="345"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In the cluster creation view, the cluster basics cover the cluster name (such as hello), node locations, and the Kubernetes master version. The Automation, Networking, Security, Metadata, and Features tabs offer the following additional options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Automation: Configuration of automatic maintenance, autoscaling, and auto-provisioning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Networking: Configuration of communication within the application and with the Kubernetes control plane, and of how clients reach the control plane. If you want to use a specific network or node subnet, or to set a network policy, you need to do it here.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security: Configuration of cluster authentication, handled by IAM and Google-managed encryption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Metadata: Labels for and descriptions of the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features: The “extra toppings” for the otherwise “vanilla” Kubernetes cluster: serverless, telemetry, the Kubernetes dashboard, and Istio installation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this tutorial, we will go with the default setting, a single-zone three-node cluster, by simply clicking “Create”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy09jcccie9tz1qkujtf2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy09jcccie9tz1qkujtf2.png" alt="image" width="800" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a couple of minutes, your new cluster will be created. A green check will appear in the Kubernetes cluster list, as seen below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ekp28uac543tgr5iecb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ekp28uac543tgr5iecb.png" alt="image" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the “Connect” button, and then click “Run in Cloud Shell” to continue in the terminal. In the terminal, run the following command to list the nodes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl get nodes

NAME                                   STATUS   ROLES    AGE     VERSION
gke-hello-default-pool-d30403be-94dc   Ready    &amp;lt;none&amp;gt;   3m31s   v1.14.10-gke.36
gke-hello-default-pool-d30403be-b2bv   Ready    &amp;lt;none&amp;gt;   3m31s   v1.14.10-gke.36
gke-hello-default-pool-d30403be-v9xz   Ready    &amp;lt;none&amp;gt;   3m28s   v1.14.10-gke.36
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The default setup will create three nodes, as shown above. You can also run other kubectl commands and play around with your newly-created cluster.&lt;/p&gt;

&lt;p&gt;Next, let’s deploy our containerized app to the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containers to the Cloud
&lt;/h2&gt;

&lt;p&gt;Kubernetes manages containers inside a logical grouping called a pod and encapsulates them with networking and storage to provide a level of isolation and segmentation. Pods are managed via controllers, which provide different capabilities for managing the pod lifecycle; these include Deployments, StatefulSets, DaemonSets, and several other types. Deployments focus on scalable and reliable applications. StatefulSets focus on stateful applications, such as databases. DaemonSets ensure that an instance of the application runs on each node, for example for observability needs.&lt;/p&gt;
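&lt;p&gt;For reference, the imperative kubectl commands used below have a declarative equivalent. The following is only a sketch of a Deployment manifest; the labels and the PROJECT_ID placeholder are illustrative, not taken from the tutorial:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1            # scaled later with kubectl scale
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-app
        image: gcr.io/PROJECT_ID/hello-app:v1   # replace PROJECT_ID with your own
        ports:
        - containerPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A manifest like this would be applied with kubectl apply -f deployment.yaml and can be kept in version control.&lt;/p&gt;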

&lt;p&gt;Create a deployment with the following command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl create deployment hello-world --image=gcr.io/${DEVSHELL_PROJECT_ID}/hello-app:v1

deployment.apps/hello-world created
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This command creates a 1-replica deployment from the image that we have already pushed. Now, scale it with the following command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl scale --replicas=5 deployment/hello-world

deployment.extensions/hello-world scaled
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This command updates the deployment to have five replicas of the pods. Kubernetes assigns these pods to the nodes. Inside the nodes, the Docker images will be pulled, and containers will be started. Let’s check the status of the pods:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl get pods

NAME                           READY   STATUS    RESTARTS   AGE
hello-world-77cc5b59b6-4p5dc   1/1     Running   0          8m55s
hello-world-77cc5b59b6-bkph9   1/1     Running   0          8m54s
hello-world-77cc5b59b6-c8hc5   1/1     Running   0          8m52s
hello-world-77cc5b59b6-g59z7   1/1     Running   0          8m51s
hello-world-77cc5b59b6-qfnvr   1/1     Running   0          8m54s
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Since all the pods are “Running,” we know that Kubernetes has distributed them to the nodes. Currently, we have five pods running across the three nodes. You can change the number of replicas based on your needs, metrics, and usage with the kubectl scale command.&lt;/p&gt;
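&lt;p&gt;Instead of scaling manually, you can also let Kubernetes adjust the replica count from observed CPU usage. Below is a minimal HorizontalPodAutoscaler sketch; the replica bounds and CPU target are illustrative, not values from this tutorial:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world       # the deployment to scale
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60   # illustrative target
&lt;/code&gt;&lt;/pre&gt;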

&lt;h2&gt;
  
  
  Monitoring the Applications
&lt;/h2&gt;

&lt;p&gt;Kubernetes distributes the containers to the nodes, making it critical to collect logs and metrics in a central location.&lt;/p&gt;

&lt;p&gt;There are currently two popular stacks used to collect logs from the clusters: Elasticsearch, Logstash, and Kibana (ELK) and Elasticsearch, Fluentd, and Kibana (EFK). &lt;/p&gt;
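&lt;p&gt;As a rough illustration of how such log collectors run on every node, here is a stripped-down Fluentd DaemonSet sketch. This is only an outline: a real deployment would also need RBAC, a Fluentd output configuration, and a pinned image tag:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd   # pin a specific tag in practice
        volumeMounts:
        - name: varlog
          mountPath: /var/log   # read container logs from the node
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because it is a DaemonSet, one collector pod runs on each node, which is exactly what centralized log collection needs.&lt;/p&gt;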

&lt;p&gt;It’s also possible to check the Kubernetes metrics in GKE with the kubectl top command. Let’s use it to look at the usage of the pods:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl top pods

NAME                           CPU(cores)   MEMORY(bytes)
hello-world-77cc5b59b6-4p5dc   0m           1Mi
hello-world-77cc5b59b6-bkph9   0m           1Mi
hello-world-77cc5b59b6-c8hc5   0m           1Mi
hello-world-77cc5b59b6-g59z7   0m           1Mi
hello-world-77cc5b59b6-qfnvr   0m           1Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Similarly, you can use the kubectl top nodes command to retrieve aggregate data about the nodes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl top nodes

NAME                                   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-hello-default-pool-d30403be-94dc   50m          5%     714Mi           27%
gke-hello-default-pool-d30403be-b2bv   52m          5%     674Mi           25%
gke-hello-default-pool-d30403be-v9xz   52m          5%     660Mi           25%
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, let’s open our application up to the internet and receive some Hello World responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containers to the Internet
&lt;/h2&gt;

&lt;p&gt;In addition to container management, Kubernetes provides resources to connect to applications from inside and outside the cluster. With the following command, you expose the deployment to the internet:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl expose deployment hello-world --type=LoadBalancer --port 80 --target-port 8080

service/hello-world exposed
&lt;/code&gt;&lt;/pre&gt;
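&lt;p&gt;The kubectl expose command above corresponds roughly to the following Service manifest. This is a sketch; the selector assumes the app=hello-world label that kubectl create deployment generates:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer   # provisions an external load balancer on GKE
  selector:
    app: hello-world   # routes traffic to the deployment's pods
  ports:
  - port: 80           # port exposed by the service
    targetPort: 8080   # port the container listens on
&lt;/code&gt;&lt;/pre&gt;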

&lt;p&gt;The expose command creates a Service resource in Kubernetes, which provides a stable network endpoint in front of the application instances. You can verify it under Kubernetes Engine &amp;gt; Services &amp;amp; Ingress:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femfvvu43add5cw5xmggu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femfvvu43add5cw5xmggu.png" alt="image" width="800" height="517"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The Service details page lists the networking configuration from the point of view of the Kubernetes cluster. In addition, an external IP is assigned to the service, enabling access from the internet. For zonal and regional clusters, GCP provisions a TCP load balancer by default. Let’s inspect it under Network services &amp;gt; Load balancing by clicking the load balancer in the previous view:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnaap2wbd0gc2o16ofcj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnaap2wbd0gc2o16ofcj.png" alt="image" width="800" height="419"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In this screenshot, load balancer instances for all three nodes are listed along with their health status. If you create a multi-region cluster, you will need an ingress controller and global load balancer deployed to your cluster for routing.&lt;/p&gt;

&lt;p&gt;Check for the external IP with the following command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl get service

NAME          TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
hello-world   LoadBalancer   10.0.2.152   35.232.168.243   80:30497/TCP   77s
kubernetes    ClusterIP      10.0.0.1     &amp;lt;none&amp;gt;           443/TCP        54m
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, open the external IP listed above for hello-world in your browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftub4grm2n3uk2cccyr11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftub4grm2n3uk2cccyr11.png" alt="image" width="800" height="191"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In the output, the hostname indicates the name of the pod. You can see all of your pod names as hostnames if you reload the browser tab a couple of times. You can expect a change of hostnames with each reload, since we have created a LoadBalancer type of service to expose the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This article has examined the life cycle of a containerized app in Google GKE, starting from the source code. We built a Docker container image and pushed it to the registry. Then, we created a Kubernetes cluster in GKE and deployed our application into it. We scaled the app to multiple replicas and checked its status, reviewed its metrics, and discussed options for centralized logging. Finally, we exposed the application to the internet. With this hands-on knowledge, you should now be able to package, deploy, and manage containerized applications inside a Kubernetes cluster in GKE.&lt;/p&gt;

&lt;p&gt;I hope this guide helps you understand how to deploy a containerized app in Google GKE. Feel free to connect with me on &lt;a href="https://www.linkedin.com/in/adit-modi-2a4362191/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;br&gt;
You can view my badges &lt;a href="https://www.youracclaim.com/users/adit-modi/badges" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
If you are interested in learning more about AWS / GCP, follow me on &lt;a href="https://github.com/AditModi" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;br&gt;
If you liked this content, please share it. Thank you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>cloud</category>
      <category>googlecloud</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
