<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: DaltonInCloud</title>
    <description>The latest articles on Forem by DaltonInCloud (@daltonincloud).</description>
    <link>https://forem.com/daltonincloud</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F458396%2Fb0024789-193d-4e99-a514-8f4d8e415e00.jpg</url>
      <title>Forem: DaltonInCloud</title>
      <link>https://forem.com/daltonincloud</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/daltonincloud"/>
    <language>en</language>
    <item>
      <title>#100DaysOfCloud | Day 10</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Thu, 03 Sep 2020 16:08:18 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-10-4k75</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-10-4k75</guid>
      <description>&lt;p&gt;What Did I Learn -&lt;/p&gt;

&lt;p&gt;In an SQS environment I can configure an EC2 Auto Scaling group to work alongside the messaging queue, scaling based on how many messages are in the queue (sketch below). SQS is pull-based and used to decouple components. SNS is push-based and can send messages via SMS, email, SQS, or HTTP. SES is scalable and designed for marketing email notifications; incoming mail can also be delivered to an S3 bucket. Elastic Beanstalk has 4 deployment policies: all at once, where everything is updated at the same time and instances are out of service while the update takes place; rolling, where each batch of instances is taken out of service while the deployment happens; rolling with additional batch, where Elastic Beanstalk launches an additional batch of instances and deploys the new version in batches, giving no downtime for applications that need full performance; and immutable, where whole new instances are created across the board in a new ASG, and when the new instances pass health checks they are moved to the existing ASG and the old instances are removed.&lt;/p&gt;
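
&lt;p&gt;A rough sketch of that queue-based scaling from the CLI (the ASG name, queue name, and target value are placeholders I picked for illustration):&lt;br&gt;
&lt;code&gt;aws autoscaling put-scaling-policy \&lt;br&gt;
  --auto-scaling-group-name my-worker-asg \&lt;br&gt;
  --policy-name sqs-backlog-scaling \&lt;br&gt;
  --policy-type TargetTrackingScaling \&lt;br&gt;
  --target-tracking-configuration '{&lt;br&gt;
    "CustomizedMetricSpecification": {&lt;br&gt;
      "MetricName": "ApproximateNumberOfMessagesVisible",&lt;br&gt;
      "Namespace": "AWS/SQS",&lt;br&gt;
      "Dimensions": [{"Name": "QueueName", "Value": "my-queue"}],&lt;br&gt;
      "Statistic": "Average"&lt;br&gt;
    },&lt;br&gt;
    "TargetValue": 100&lt;br&gt;
  }'&lt;/code&gt;&lt;br&gt;
This scales the group to hold the visible message count near the target value; a per-instance backlog metric would be a refinement on this.&lt;/p&gt;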

&lt;p&gt;What Did I Do - &lt;/p&gt;

&lt;p&gt;We created a Kinesis stream. For this we head to CloudFormation &amp;gt; Create a new stack, specify the template URL as &lt;code&gt;https://s3.amazonaws.com/kinesis-demo-bucket/amazon-kinesis-data-visualization-sample/kinesis-data-vis-sample-app.template&lt;/code&gt;, name the stack, and create it. We head to EC2 and can see the instance that was generated; then we head to Kinesis &amp;gt; Streams and can see the shards that were generated; then we head to DynamoDB &amp;gt; Tables and can see the table our Kinesis stream generated.&lt;/p&gt;
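
&lt;p&gt;The same stack can be created from the CLI; a minimal sketch (the stack name is my own placeholder):&lt;br&gt;
&lt;code&gt;aws cloudformation create-stack \&lt;br&gt;
  --stack-name kinesis-data-vis-sample \&lt;br&gt;
  --template-url https://s3.amazonaws.com/kinesis-demo-bucket/amazon-kinesis-data-visualization-sample/kinesis-data-vis-sample-app.template \&lt;br&gt;
  --capabilities CAPABILITY_IAM&lt;/code&gt;&lt;br&gt;
&lt;code&gt;--capabilities CAPABILITY_IAM&lt;/code&gt; is required whenever a template creates IAM resources, which this sample most likely does for its EC2 instance role.&lt;/p&gt;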

&lt;p&gt;For Tomorrow -&lt;/p&gt;

&lt;p&gt;Tomorrow we are going over dev theory with CI/CD and CodeDeploy.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sqs</category>
      <category>elasticbeanstalk</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 9</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Thu, 03 Sep 2020 03:18:24 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-9-4dkc</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-9-4dkc</guid>
      <description>&lt;p&gt;What Did I Learn -&lt;/p&gt;

&lt;p&gt;I learned quite a bit about envelope encryption that I did not know prior. I knew KMS only directly encrypts data up to 4 KB, and that anything larger is handled by encrypting a data key instead; I was not aware that envelope encryption uses the CMK to encrypt the data key (the envelope key), or that we use envelope encryption to avoid sending large data into KMS over the network. We found that a customer managed CMK can be used to encrypt/decrypt files up to 4 KB and to generate the data key. We also learned the meaning of some new KMS API calls, such as &lt;code&gt;aws kms re-encrypt&lt;/code&gt;, which decrypts ciphertext and then encrypts it again using a CMK that we specify (this can be used for manual key rotation), and &lt;code&gt;aws kms enable-key-rotation&lt;/code&gt;, which enables automatic key rotation once a year.&lt;/p&gt;
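
&lt;p&gt;A minimal sketch of that envelope pattern from the CLI (the alias is a placeholder):&lt;br&gt;
&lt;code&gt;aws kms generate-data-key \&lt;br&gt;
  --key-id alias/my-app-key \&lt;br&gt;
  --key-spec AES_256&lt;/code&gt;&lt;br&gt;
This returns a &lt;code&gt;Plaintext&lt;/code&gt; data key to encrypt the file with locally, plus a &lt;code&gt;CiphertextBlob&lt;/code&gt; (the data key encrypted under the CMK) to store alongside the file, so the large file itself never crosses the network to KMS.&lt;/p&gt;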

&lt;p&gt;What Did I Do - &lt;/p&gt;

&lt;p&gt;Let us create a CMK. First we head to trusty IAM, make a group for KMS, attach the admin policy, and add our users to the group. From here we head to the AWS dashboard, go into KMS, and click create a key in the region we will be using. We select KMS and Symmetric for our settings; on the next stage we create our alias and description. For the key administrator, select the user you want to administer and manage your keys. For the key usage permissions, select the user you want to be able to use the key to encrypt and decrypt information. From here we can review the key policy and finish creating the key.&lt;/p&gt;
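
&lt;p&gt;The console steps map roughly to this from the CLI (the alias and description are placeholders):&lt;br&gt;
&lt;code&gt;aws kms create-key --description "Day 9 demo key"&lt;br&gt;
aws kms create-alias \&lt;br&gt;
  --alias-name alias/day9-demo \&lt;br&gt;
  --target-key-id &amp;lt;KEY_ID from the create-key output&amp;gt;&lt;br&gt;
aws kms enable-key-rotation --key-id &amp;lt;KEY_ID&amp;gt;&lt;/code&gt;&lt;br&gt;
The administrators and users picked in the console just become statements in the generated key policy, which could also be passed explicitly to &lt;code&gt;create-key&lt;/code&gt; with &lt;code&gt;--policy&lt;/code&gt;.&lt;/p&gt;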

&lt;p&gt;For Tomorrow -&lt;/p&gt;

&lt;p&gt;Tomorrow (actually tomorrow) is going to be all about messaging services and fun stuff like Kinesis and Elastic Beanstalk.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kms</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 8</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Wed, 02 Sep 2020 11:31:44 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-8-53o5</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-8-53o5</guid>
      <description>&lt;p&gt;What Did I Learn -&lt;/p&gt;

&lt;p&gt;I learned, or re-learned, quite a bit about indexes for DynamoDB. For example, local secondary indexes have to be created when you create your table, unlike global secondary indexes; a local index uses the same partition key as your table while a global index uses a different one, and both can use a different sort key, which goes with the whole concept of a "secondary index", I think. I also learned that within a query you can reverse the order of results by setting the &lt;code&gt;ScanIndexForward&lt;/code&gt; parameter to &lt;code&gt;false&lt;/code&gt; (it is also helpful to know this applies only to queries). In Postgres I would generally specify ASC or DESC in an ORDER BY, so this was helpful to find out. Also, a scan does not search the table; it reads the entire table, and you have the option to filter the results down to what you need, while queries find items in a table using only primary keys. I also learned the calculations for provisioned throughput: 1 write capacity unit = one 1 KB write per second, and 1 read capacity unit = one strongly consistent 4 KB read per second OR two eventually consistent 4 KB reads per second.&lt;/p&gt;
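
&lt;p&gt;A quick worked example of those units (numbers made up): reading 10 items of 6 KB per second with strong consistency costs 10 x ceil(6/4) = 20 RCUs, the same reads eventually consistent cost half, 10 RCUs, and writing those 10 items per second costs 10 x 6 = 60 WCUs. And the descending-order query from the CLI would look something like this (table and key names from below):&lt;br&gt;
&lt;code&gt;aws dynamodb query \&lt;br&gt;
  --table-name NewTable \&lt;br&gt;
  --key-condition-expression "#pk = :v" \&lt;br&gt;
  --expression-attribute-names '{"#pk": "primary-key"}' \&lt;br&gt;
  --expression-attribute-values '{":v": {"S": "socks"}}' \&lt;br&gt;
  --no-scan-index-forward&lt;/code&gt;&lt;br&gt;
where &lt;code&gt;--no-scan-index-forward&lt;/code&gt; is the CLI spelling of &lt;code&gt;ScanIndexForward = false&lt;/code&gt;.&lt;/p&gt;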

&lt;p&gt;What Did I Do - &lt;/p&gt;

&lt;p&gt;We just kept it simple on this one. We went to DynamoDB, were prompted to make a table, and entered our table name; we just used NewTable, and for our primary key we used primary-key as a string. From here we use the Create item tab to add key-value pairs to new items, such as a field of item with a value of socks, price: 5, etc. with the &lt;code&gt;+ append&lt;/code&gt; button; note that &lt;code&gt;+ insert&lt;/code&gt; will add above the line you are currently on, while &lt;code&gt;+ append&lt;/code&gt; adds below it.&lt;/p&gt;
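
&lt;p&gt;Roughly the same thing from the CLI, using the names above (the throughput values are arbitrary):&lt;br&gt;
&lt;code&gt;aws dynamodb create-table \&lt;br&gt;
  --table-name NewTable \&lt;br&gt;
  --attribute-definitions AttributeName=primary-key,AttributeType=S \&lt;br&gt;
  --key-schema AttributeName=primary-key,KeyType=HASH \&lt;br&gt;
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5&lt;br&gt;
aws dynamodb put-item \&lt;br&gt;
  --table-name NewTable \&lt;br&gt;
  --item '{"primary-key": {"S": "1"}, "item": {"S": "socks"}, "price": {"N": "5"}}'&lt;/code&gt;&lt;/p&gt;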

&lt;p&gt;For Tomorrow -&lt;/p&gt;

&lt;p&gt;Tomorrow, or rather today, it is all about AWS KMS.&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>serverless</category>
      <category>aws</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 7</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Tue, 01 Sep 2020 03:39:25 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-6-4k5n</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-6-4k5n</guid>
      <description>&lt;p&gt;What Did I Learn -&lt;/p&gt;

&lt;p&gt;When developing a Lambda function, the default version is &lt;code&gt;$LATEST&lt;/code&gt;, and you can create versions of your function; however, when you create a version it becomes immutable. You can also create aliases for your functions, for example Prod and Dev. You can assign one version to your alias or, similar to a weighted target group for an ALB, set a percentage split across versions; this weighted routing, however, does not work with &lt;code&gt;$LATEST&lt;/code&gt;, so when an alias uses &lt;code&gt;$LATEST&lt;/code&gt; you can just use the one version. I also found that Step Functions are defined in JSON format; you know I did want to do everything in YAML, but I see now that may not be as possible as I previously hoped. Step Functions automatically trigger and track each step, and create logs for each step. We also went over the fact that X-Ray integrates with ELB, AWS Lambda, API Gateway, EC2, and AWS Elastic Beanstalk. I also learned you can import Swagger files into API Gateway, and that API Gateway's default account limits are 10k requests per second with a burst of 5k concurrent requests. If you exceed either of these you get a 429 Too Many Requests error.&lt;/p&gt;
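
&lt;p&gt;A sketch of that weighted alias, sending roughly 10% of traffic to version 2 (the function name and weight are placeholders):&lt;br&gt;
&lt;code&gt;aws lambda create-alias \&lt;br&gt;
  --function-name my-function \&lt;br&gt;
  --name Prod \&lt;br&gt;
  --function-version 1 \&lt;br&gt;
  --routing-config AdditionalVersionWeights={"2"=0.1}&lt;/code&gt;&lt;br&gt;
Version 1 receives the remaining ~90% of invocations; since &lt;code&gt;$LATEST&lt;/code&gt; is mutable it cannot appear in the routing config.&lt;/p&gt;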

&lt;p&gt;What Did I Do -&lt;/p&gt;

&lt;p&gt;Made an Alexa skill. This was a simple skill: we got an audio file from text-to-speech with Amazon Polly, synthesized into an S3 bucket. We then went to AWS Lambda, and under the AWS Serverless Application Repository we selected &lt;code&gt;alexa-skills-kit-nodejs-factskill&lt;/code&gt; and deployed it. We copied the ARN at the top of our skill's repo page. We made sure we were signed in on the Amazon Developer page and selected Alexa. We named our app and chose the Fact skill template, made our invocation "testing", went to endpoint and input our Lambda ARN, and for the intents field for utterances we just added testing again. Now to add our audio source from Amazon Polly we go back to Lambda, edit our serverless app repo template, and create a new dictionary value for our input: &lt;code&gt;&amp;lt;audio src=\"https://s3.amazonaws.com/testinglink/123456789ABcdef.mp3\" /&amp;gt;&lt;/code&gt;. I have completed this lab before, but it is always fun playing around with Alexa skills; it almost tempts me to go for the Alexa Specialty, but that is preferably after I gain the AWS Dev Associate as well as the Sec Specialty.&lt;/p&gt;
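
&lt;p&gt;That Polly-to-S3 step can also be done from the CLI; a minimal sketch (the bucket name and text are placeholders):&lt;br&gt;
&lt;code&gt;aws polly start-speech-synthesis-task \&lt;br&gt;
  --output-format mp3 \&lt;br&gt;
  --output-s3-bucket-name testinglink \&lt;br&gt;
  --voice-id Joanna \&lt;br&gt;
  --text "Here is a fun fact for your skill"&lt;/code&gt;&lt;br&gt;
This writes the MP3 straight into the bucket, giving the URL we then reference in the &lt;code&gt;&amp;lt;audio&amp;gt;&lt;/code&gt; SSML tag.&lt;/p&gt;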

&lt;p&gt;For Tomorrow -&lt;/p&gt;

&lt;p&gt;We are onward to DynamoDB, Amazon's best database ever (Glory to DynamoDB).&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 6</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Mon, 31 Aug 2020 11:20:58 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-6-29b8</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-6-29b8</guid>
      <description>&lt;p&gt;What Did I Learn - &lt;/p&gt;

&lt;p&gt;So this article is a bit late, sorry in advance. We started The Phoenix Project and we are on Chapter 3; so far it is a pretty engaging book, and it is nice to see the integration of the various departments and how they use the troubleshooting steps they would normally build in a helpdesk role (usually everyone's first step in IT) to work an outage. As promised we are back in the AWS console, and today was S3 day! So re-learning S3 and CloudFront: the various storage classes and the availability that goes with each, how S3 gives read-after-write consistency for new objects but only eventual consistency for overwrite PUTs and DELETEs, flat files, etc.&lt;/p&gt;

&lt;p&gt;What Did I Do - &lt;/p&gt;

&lt;p&gt;As for what we did: we built a bucket policy with &lt;code&gt;https://awspolicygen.s3.amazonaws.com/policygen.html&lt;/code&gt;, ensuring the effect we want, in this case denying the put action. We also set up a website across two buckets using Cross-Origin Resource Sharing (CORS), which we did by allowing the primary website URL as an approved origin on the secondary bucket holding our content. Last, we set up a CloudFront delivery network so that the bucket can be served across the world, using edge locations to cache the bucket's content whenever someone accesses it.&lt;/p&gt;
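
&lt;p&gt;The deny-put policy out of the generator looks roughly like this (the bucket name is a placeholder), applied with &lt;code&gt;aws s3api put-bucket-policy --bucket my-demo-bucket --policy file://policy.json&lt;/code&gt;:&lt;br&gt;
&lt;code&gt;{&lt;br&gt;
  "Version": "2012-10-17",&lt;br&gt;
  "Statement": [{&lt;br&gt;
    "Effect": "Deny",&lt;br&gt;
    "Principal": "*",&lt;br&gt;
    "Action": "s3:PutObject",&lt;br&gt;
    "Resource": "arn:aws:s3:::my-demo-bucket/*"&lt;br&gt;
  }]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;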

&lt;p&gt;For Tomorrow -&lt;/p&gt;

&lt;p&gt;Tomorrow, or well today, we are getting into Lambda and serverless computing, and we may get hands-on building an Alexa skill again; we will see.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 5</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Sun, 30 Aug 2020 05:15:20 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-5-4apa</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-5-4apa</guid>
      <description>&lt;p&gt;What Did I learn -&lt;/p&gt;

&lt;p&gt;Oh wow, okay, so today I learned about viewing image history and about saving and loading images, which is especially useful when using Artifactory to store your images and Jenkins to distribute them to your Docker servers. I also covered the different ways to use tagging, and learned more about Docker Swarm, which is cool, but I was very happy playing around with Portainer to manage Docker. For this I first created a volume for my Portainer data, then I ran: &lt;br&gt;
&lt;code&gt;docker container run -d --name portainer -p 8080:9000 \&lt;br&gt;
--restart=always \&lt;br&gt;
-v /var/run/docker.sock:/var/run/docker.sock \&lt;br&gt;
-v portainer_data:/data portainer/portainer&lt;/code&gt;&lt;br&gt;
After this, I went to my server's IP address on port &lt;code&gt;8080&lt;/code&gt;, generated a login, and got to play around with the Portainer console, and I have to say, it was a blast! This may be because I have been so deep in the console the past couple of days with little sleep, but this was such fun!&lt;/p&gt;

&lt;p&gt;What Did I Do -&lt;/p&gt;

&lt;p&gt;We finally got to play around with Docker Swarm, yeah!!!!! So, we played around with the network settings. To do this we created an overlay network with &lt;code&gt;docker network create -d overlay &amp;lt;NAME&amp;gt;&lt;/code&gt;, and then we created a service on our overlay network. This was accomplished with: &lt;br&gt;
&lt;code&gt;docker service create -d --name &amp;lt;NAME&amp;gt; \&lt;br&gt;
--network &amp;lt;NETWORK&amp;gt; \&lt;br&gt;
-p &amp;lt;HOST_PORT&amp;gt;:&amp;lt;CONTAINER_PORT&amp;gt; \&lt;br&gt;
--replicas &amp;lt;REPLICAS&amp;gt; \&lt;br&gt;
&amp;lt;IMAGE&amp;gt; &amp;lt;CMD&amp;gt;&lt;/code&gt;&lt;br&gt;
Then we just add the service to a network with &lt;code&gt;docker service update --network-add &amp;lt;NETWORK&amp;gt; &amp;lt;SERVICE&amp;gt;&lt;/code&gt;&lt;br&gt;
But yes, before we could play around with networks in Docker Swarm we wanted a testbed of servers, so we made some worker nodes. To get the workers ready we first installed the prerequisites on each one:&lt;br&gt;
&lt;code&gt;sudo yum install -y yum-utils \&lt;br&gt;
  device-mapper-persistent-data \&lt;br&gt;
  lvm2&lt;/code&gt;&lt;br&gt;
then made sure we had the stable repo:&lt;br&gt;
&lt;code&gt;sudo yum-config-manager \&lt;br&gt;
    --add-repo \&lt;br&gt;
    https://download.docker.com/linux/centos/docker-ce.repo&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then we install Docker on our workers:&lt;br&gt;
&lt;code&gt;sudo yum -y install docker-ce&lt;/code&gt;&lt;br&gt;
start and enable it:&lt;br&gt;
&lt;code&gt;sudo systemctl start docker &amp;amp;&amp;amp; sudo systemctl enable docker&lt;/code&gt;&lt;br&gt;
and add our user to the docker group:&lt;br&gt;
&lt;code&gt;sudo usermod -aG docker &amp;lt;USERNAME&amp;gt;&lt;/code&gt;&lt;br&gt;
Once this is complete, we initialized the swarm manager on the master swarm server:&lt;br&gt;
&lt;code&gt;docker swarm init \&lt;br&gt;
--advertise-addr &amp;lt;Server's Private IP&amp;gt;&lt;/code&gt;&lt;br&gt;
Once the swarm initializes, your CLI should print a join command that you can run on all your worker nodes; it looks like this:&lt;br&gt;
&lt;code&gt;docker swarm join --token &amp;lt;TOKEN&amp;gt; \&lt;br&gt;
&amp;lt;PRIVATE_IP&amp;gt;:&amp;lt;PORT&amp;gt;&lt;/code&gt;&lt;br&gt;
This was by far one of the coolest dives I have done with Docker, and I think it is really fun for anyone who is interested and wants to spin up some test servers to play with.&lt;/p&gt;
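
&lt;p&gt;One sanity check worth doing once the workers have joined (run from the manager):&lt;br&gt;
&lt;code&gt;docker node ls&lt;/code&gt;&lt;br&gt;
This should list the manager plus every worker in a Ready state before you create the overlay network and service above.&lt;/p&gt;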

&lt;p&gt;For tomorrow -&lt;/p&gt;

&lt;p&gt;What is on the menu for tomorrow? I am not sure. I actually just purchased The Phoenix Project audio book, so I will definitely be "cracking that open", so to speak. Aside from that, as promised we are back to the AWS console and on our journey to gain the AWS Dev Associate (this is before we go for the AWS Solutions Architect Professional or the AWS Security Specialty).&lt;/p&gt;

</description>
      <category>docker</category>
      <category>linux</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 4</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Sat, 29 Aug 2020 05:06:33 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-4-5ch8</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-4-5ch8</guid>
      <description>&lt;p&gt;What Did I learn -&lt;/p&gt;

&lt;p&gt;Today I went over the process of creating Docker volumes with the &lt;code&gt;docker volume create &amp;lt;NAME&amp;gt;&lt;/code&gt; command, as well as removing and pruning them; similar to previous Docker commands, these look like &lt;code&gt;docker volume prune&lt;/code&gt; or &lt;code&gt;docker volume rm &amp;lt;NAME&amp;gt;&lt;/code&gt;. I found the options I can add when creating a volume: specifying a volume driver name with &lt;code&gt;-d&lt;/code&gt;, setting metadata for the volume with &lt;code&gt;--label list&lt;/code&gt;, or setting driver-specific options with &lt;code&gt;-o&lt;/code&gt;. I also learned the advantage of bind mounts: they allow us to make a change to a file, restart the container, and have the container pick up that change; we do not have to go and rebuild the image and redeploy. Another great takeaway was the difference between ENTRYPOINT and CMD in Dockerfiles: the entrypoint makes the container executable and concrete, and the cmd arguments are appended to the end of the entrypoint; if you want to override the entrypoint you need to pass a new entrypoint flag.&lt;/p&gt;
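
&lt;p&gt;A tiny sketch of that ENTRYPOINT/CMD interplay (the image and arguments are arbitrary):&lt;br&gt;
&lt;code&gt;FROM alpine:3&lt;br&gt;
ENTRYPOINT ["ping"]&lt;br&gt;
CMD ["-c", "3", "localhost"]&lt;/code&gt;&lt;br&gt;
Running the image with no arguments pings localhost three times; &lt;code&gt;docker run &amp;lt;IMAGE&amp;gt; -c 5 8.8.8.8&lt;/code&gt; replaces only the CMD, while &lt;code&gt;docker run --entrypoint sh -it &amp;lt;IMAGE&amp;gt;&lt;/code&gt; overrides the entrypoint itself.&lt;/p&gt;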

&lt;p&gt;What Did I Do -&lt;/p&gt;

&lt;p&gt;Today it was all labs, so I guess for what I did, I can mention I took a Dockerfile image and created a &lt;code&gt;.dockerignore&lt;/code&gt; file; with this I specified specific directories such as &lt;code&gt;.git&lt;/code&gt;, as well as wildcard &lt;code&gt;*&lt;/code&gt; patterns for file types I want to cut out, to ensure the image is as lean as possible. We verified this by running &lt;code&gt;docker container exec &amp;lt;NAME&amp;gt; ls -la /WORKINGDIRECTORY&lt;/code&gt; and checking which content was missing.&lt;/p&gt;
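
&lt;p&gt;For reference, a &lt;code&gt;.dockerignore&lt;/code&gt; along these lines (the entries are illustrative, not my exact file):&lt;br&gt;
&lt;code&gt;.git&lt;br&gt;
*.log&lt;br&gt;
*.md&lt;br&gt;
node_modules/&lt;/code&gt;&lt;br&gt;
Anything matching these patterns is excluded from the build context before the build starts, so it never reaches the daemon or the image at all.&lt;/p&gt;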

&lt;p&gt;For tomorrow -&lt;/p&gt;

&lt;p&gt;Up next we are going to be working more on Docker (We will go back to AWS Console soon). I want to get to Docker Swarm tomorrow in my studies, so hopefully depending on the time I am allocated I get the chance to get there.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>linux</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 3</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Fri, 28 Aug 2020 07:10:03 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-3-59c</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-3-59c</guid>
      <description>&lt;p&gt;What Did I learn -&lt;/p&gt;

&lt;p&gt;We learned a bit about Docker networking: adding and removing networks with commands such as &lt;code&gt;docker network create&lt;/code&gt; and &lt;code&gt;docker network rm&lt;/code&gt;. We also went into how we would connect a network to an existing container, disconnect it, and prune unused networks; this was with &lt;code&gt;docker network connect&lt;/code&gt;, &lt;code&gt;docker network disconnect&lt;/code&gt;, and &lt;code&gt;docker network prune&lt;/code&gt;, in that order.&lt;/p&gt;

&lt;p&gt;What Did I Do -&lt;br&gt;
I finally got to my Minecraft server lab (Sweet!). First, we are running this on CentOS 7. After our cloud server is spun up we need Java installed; for this we search for the right Java runtime environment, and the package we want is java-11-openjdk. Then, for security, we want a dedicated user for our Minecraft application, so once we verify the package and dependencies are installed we run &lt;code&gt;sudo useradd minecraft&lt;/code&gt;. Then we ensure there is a subdirectory in &lt;code&gt;/opt&lt;/code&gt; for Minecraft, using &lt;code&gt;sudo mkdir /opt/minecraft&lt;/code&gt;, and change its ownership to our Minecraft user using &lt;code&gt;sudo chown minecraft:minecraft /opt/minecraft&lt;/code&gt; (or the user you made). Then we download the latest MC server jar file,&lt;br&gt;
run the jar file, find the EULA file, and edit it in Vim to agree. Last we want to create a Minecraft service defining the&lt;br&gt;
&lt;code&gt;[Unit], [Service], &amp;amp; [Install]&lt;/code&gt; sections, reload our systemctl daemon via &lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt;, and start our service via &lt;code&gt;sudo systemctl start minecraft&lt;/code&gt;.&lt;/p&gt;
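
&lt;p&gt;A sketch of what that unit file could look like at /etc/systemd/system/minecraft.service (the paths and JVM memory flags are placeholders to adjust):&lt;br&gt;
&lt;code&gt;[Unit]&lt;br&gt;
Description=Minecraft Server&lt;br&gt;
After=network.target&lt;br&gt;
&lt;br&gt;
[Service]&lt;br&gt;
User=minecraft&lt;br&gt;
WorkingDirectory=/opt/minecraft&lt;br&gt;
ExecStart=/usr/bin/java -Xms1024M -Xmx1024M -jar server.jar nogui&lt;br&gt;
Restart=on-failure&lt;br&gt;
&lt;br&gt;
[Install]&lt;br&gt;
WantedBy=multi-user.target&lt;/code&gt;&lt;br&gt;
With that in place, &lt;code&gt;sudo systemctl enable minecraft&lt;/code&gt; also makes it survive a reboot.&lt;/p&gt;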

&lt;p&gt;For tomorrow -&lt;br&gt;
I really want to work more on Docker storage and Dockerfiles.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>linux</category>
      <category>100daysofcloud</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 2</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Thu, 27 Aug 2020 03:29:11 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-day-2-bji</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-day-2-bji</guid>
      <description>&lt;p&gt;What Did I learn -&lt;/p&gt;

&lt;p&gt;Kubernetes is essentially a container management console, or orchestration module, that can deploy thousands of containers with one command, perform rolling updates, and scale up and down, similar to an Auto Scaling group behind an application load balancer in AWS (you can use an ELB with EKS). The control plane manages decisions, scheduling, and configs for the worker nodes, and the worker nodes run pods, which contain our containers. &lt;br&gt;
You can run EKS on managed EC2 node groups, primarily for stateful or long-running apps like a web server. Alternatively, you can run the serverless option with Fargate; this is for short-lived or stateless processes, preferably volatile services. You can store your images in Elastic Container Registry, Artifactory, or Docker Hub. &lt;br&gt;
The primary tools with EKS are&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;eksctl &amp;amp; kubectl&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 tools to create, delete, and get info about clusters.&lt;/p&gt;
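
&lt;p&gt;A minimal sketch of those tools in action (the cluster name, region, and node count are placeholders):&lt;br&gt;
&lt;code&gt;eksctl create cluster \&lt;br&gt;
  --name demo-cluster \&lt;br&gt;
  --region us-east-1 \&lt;br&gt;
  --nodes 2&lt;br&gt;
kubectl get nodes&lt;br&gt;
eksctl delete cluster --name demo-cluster&lt;/code&gt;&lt;br&gt;
&lt;code&gt;eksctl&lt;/code&gt; stands up the control plane and worker nodes via CloudFormation, and &lt;code&gt;kubectl&lt;/code&gt; talks to the cluster once your kubeconfig is populated.&lt;/p&gt;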

&lt;p&gt;What Did I Do -&lt;br&gt;
I created a CodeStar project utilizing the HTML EC2 template. From here I set my IDE environment to Cloud9 so I could work from the browser. Then I went to CodeCommit repositories, went into the webpage folder, and made an adjustment to the index.html file. From here I watched the pipeline push my change from the application source in CodeCommit, to building the new export in CodeBuild, to generating and executing the change set in CloudFormation, to deploying the new change in CodeDeploy.&lt;/p&gt;

&lt;p&gt;For tomorrow -&lt;br&gt;
Well, since I didn't get to my hands-on with the Minecraft/Dockercraft server I mentioned yesterday due to the CodeStar app build, I will have that lab tomorrow.&lt;/p&gt;

</description>
      <category>100daysofcloud</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>#100DaysOfCloud | Day 1</title>
      <dc:creator>DaltonInCloud</dc:creator>
      <pubDate>Tue, 25 Aug 2020 23:35:18 +0000</pubDate>
      <link>https://forem.com/daltonincloud/100daysofcloud-4kd2</link>
      <guid>https://forem.com/daltonincloud/100daysofcloud-4kd2</guid>
      <description>&lt;p&gt;What Did I learn -&lt;br&gt;
Look at the software development lifecycle:&lt;/p&gt;

&lt;p&gt;Planning, where we discover the issues the client wants resolved.&lt;br&gt;
Analysis, where we get into specifically what could be causing the client's issues and what could be implemented to solve them.&lt;br&gt;
Design, where, once we decide on a solution from the analysis phase, we lay the framework for what we will implement.&lt;br&gt;
Implementation, where, after the design is laid out, we start working on coding and spinning up our servers.&lt;br&gt;
Testing, where QA looks for outstanding issues and bugs from the implementation phase.&lt;br&gt;
Deployment, where the product goes from dev to a production env.&lt;br&gt;
(Then the next implementation cycle begins.)&lt;/p&gt;

&lt;p&gt;What Did I Do -&lt;/p&gt;

&lt;p&gt;Spun up a CentOS server:&lt;br&gt;
Use&lt;br&gt;
&lt;br&gt;
&lt;code&gt;rpm -q centos-release&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 to find the release/version, which was centos-release-7-8.2003.0.el7.centos.x86_64.&lt;br&gt;
Installed Docker, ran both successful and failing containers from the alpine and nginx images, and looked at the logs for the successful pulls and the unsuccessful container runs. I used&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker container logs &amp;lt;CONTAINER ID or NAME&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 to find container logs.&lt;br&gt;
Use&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker container ls&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 to find running containers, and use&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker container ls -a&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 to find both running and stopped containers on your machine. Use&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker container inspect &amp;lt;CONTAINER ID&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 to inspect a container's elements such as IP, image, name, and any other info. To remove all stopped containers you can use&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker container prune -f&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 with &lt;code&gt;-f&lt;/code&gt; skipping the confirmation prompt. To force-remove running containers you can use&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker container rm -f &amp;lt;CONTAINER ID&amp;gt; &amp;lt;CONTAINER ID2&amp;gt; etc...&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 if you need to remove more than one running container.&lt;/p&gt;

&lt;p&gt;For tomorrow -&lt;/p&gt;

&lt;p&gt;I plan on continuing to learn more about Docker, getting into networking, storage, and Dockerfiles more in-depth. I also plan on getting more hands-on with EKS, as well as creating a Minecraft server as a Linux lab, and hopefully getting into creating a Dockercraft server at some point.&lt;/p&gt;

</description>
      <category>100daysofcloud</category>
    </item>
  </channel>
</rss>
