<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: cner-smith</title>
    <description>The latest articles on Forem by cner-smith (@cnersmith).</description>
    <link>https://forem.com/cnersmith</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F986867%2Ff6ed4c2f-d3ce-4195-9f61-75fbc5d5fa17.jpg</url>
      <title>Forem: cner-smith</title>
      <link>https://forem.com/cnersmith</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cnersmith"/>
    <language>en</language>
    <item>
      <title>Working with AI on Long Software Projects</title>
      <dc:creator>cner-smith</dc:creator>
      <pubDate>Sun, 19 Apr 2026 05:01:33 +0000</pubDate>
      <link>https://forem.com/cnersmith/working-with-ai-on-long-software-projects-34e4</link>
      <guid>https://forem.com/cnersmith/working-with-ai-on-long-software-projects-34e4</guid>
      <description>&lt;h1&gt;Working with AI on Long Software Projects&lt;/h1&gt;

&lt;p&gt;A practical guide for developers building real software with Claude (or similar AI coding tools) over months or years rather than single sessions.&lt;/p&gt;

&lt;p&gt;The advice here is not theoretical. It comes from a year of joint work on a single codebase with thousands of tests, hundreds of design decisions, and a working pattern that has had to evolve as the project grew. Most of the lessons were learned by hitting the failure modes — over-engineering, scope creep, mounting tech debt, design drift — then figuring out what structural changes actually prevented recurrence, not just patching the symptom.&lt;/p&gt;

&lt;p&gt;This is for any dev planning to spend serious time pairing with an AI on a single project, especially one that will outlive any individual conversation.&lt;/p&gt;




&lt;h2&gt;The premise&lt;/h2&gt;

&lt;p&gt;AI coding tools are extraordinarily good at generating code. Given a clear request, modern models produce working implementations faster than most humans, often with reasonable structure and decent style. This is the easy part of software development.&lt;/p&gt;

&lt;p&gt;The hard parts are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowing what to build&lt;/li&gt;
&lt;li&gt;Knowing what NOT to build&lt;/li&gt;
&lt;li&gt;Maintaining coherence across a large system over time&lt;/li&gt;
&lt;li&gt;Making decisions that do not have to be undone in three months&lt;/li&gt;
&lt;li&gt;Keeping complexity from compounding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI tools do not do these things well. Not because the models are not smart, but because of structural limitations that compound silently over a long project. If you do not understand those limitations, you will end up with a codebase three times larger than it should be, thousands of tests that take many seconds to run, four parallel implementations of the same primitive, and design docs that describe a system no human (or AI) can hold in their head.&lt;/p&gt;

&lt;p&gt;This guide is about how to avoid that.&lt;/p&gt;




&lt;h2&gt;Limitation 1: No episodic memory&lt;/h2&gt;

&lt;p&gt;The most important thing to understand: AI coding tools have no memory across sessions.&lt;/p&gt;

&lt;p&gt;When you start a new conversation, the model has no idea what you built last week, what convention you settled on, what approach you tried and abandoned, or what part of the codebase is load-bearing vs vestigial. Each session starts fresh. The only "memory" the model has is what is in its context window — what you tell it now plus whatever durable artifacts (project instruction files, design docs, etc.) get loaded automatically.&lt;/p&gt;

&lt;p&gt;This has profound consequences:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The model cannot trust what is already there.&lt;/strong&gt; If a function exists, the model does not know whether it was carefully designed or hacked together at 2am. Its safest move is "add a new helper rather than modify the existing one." Bloat compounds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The model re-derives conventions every session.&lt;/strong&gt; Without memory of "we decided last month that X is the pattern for Y," each new feature gets slightly different choices. Coherence drifts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The model cannot carry "we tried that, it failed."&lt;/strong&gt; Lessons learned have to be encoded somewhere durable or they do not persist. The model will happily repeat the same mistake six weeks later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix is not more memory artifacts — it is making the codebase itself teach.&lt;/strong&gt; The codebase, your project instruction file, your memory index, and your tooling are the ONLY things that survive between sessions. They have to do the work that an experienced colleague's memory would do.&lt;/p&gt;




&lt;h2&gt;Limitation 2: Default behavioral bias is "do more"&lt;/h2&gt;

&lt;p&gt;When uncertain, AI models default to thoroughness. This sounds good. It is not.&lt;/p&gt;

&lt;p&gt;Real failure modes you will encounter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Asked to fix a bug, the model also "cleans up nearby code" you did not ask about&lt;/li&gt;
&lt;li&gt;Asked to add a feature, the model adds a config var "for future flexibility"&lt;/li&gt;
&lt;li&gt;Asked for a function, the model adds three helper functions for "modularity"&lt;/li&gt;
&lt;li&gt;Asked to implement a feature, the model writes fifty tests when ten would catch the same bugs&lt;/li&gt;
&lt;li&gt;Asked to add a docstring, the model writes a paragraph on a function whose name was already self-explanatory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each instance is harmless. The compound effect is a codebase that looks impressive but is impossible to maintain.&lt;/p&gt;

&lt;p&gt;This bias is exacerbated by:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adversarial review workflows.&lt;/strong&gt; Some setups put a review/QA agent between every implementation step, grading completeness. The QA finds gaps; gaps are observable. The QA cannot find excesses — "this is too much" requires design judgment the QA does not have. Every review pass adds; none subtract. The codebase grows monotonically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tooling that "should be used proactively."&lt;/strong&gt; When a powerful tool exists, the model wants to use it. Result: agent invocations and skill calls for tasks that would take ten seconds with a direct edit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The illusion of progress.&lt;/strong&gt; Generating five hundred lines in a session feels like productivity. Generating a twenty-line surgical edit and stopping does not feel like much. Both can solve the problem; the second usually solves it better.&lt;/p&gt;




&lt;h2&gt;Limitation 3: Specifications drift from reality&lt;/h2&gt;

&lt;p&gt;In a long project, you will write a lot of design docs. Some will describe the system as you would like it to be. Some will describe the system as it currently is. After enough time, NOBODY knows which is which — including you.&lt;/p&gt;

&lt;p&gt;If your AI uses a stale design doc as the yardstick for new implementation work, it will faithfully reproduce a system that no longer exists. If the design doc over-specifies (e.g., includes implementation details that were design assumptions, not actual decisions), the AI ships those over-specifications.&lt;/p&gt;

&lt;p&gt;The safest principle: &lt;strong&gt;design docs describe, they do not prescribe.&lt;/strong&gt; A design doc captures intent, components, interactions, intended feel. It does NOT contain pseudocode, ordered priority lists, or "here is exactly how this should work" sections. Those belong in implementation, where they can be reviewed against reality.&lt;/p&gt;




&lt;h2&gt;What durable artifacts actually work&lt;/h2&gt;

&lt;p&gt;Three artifact types survive between sessions in the Claude Code ecosystem (other tools have analogues):&lt;/p&gt;

&lt;h3&gt;1. A project instruction file (e.g. CLAUDE.md)&lt;/h3&gt;

&lt;p&gt;Auto-loaded every session. This is your highest-leverage surface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use it as imperatives, not descriptions.&lt;/strong&gt; A file that says "this project uses React with TypeScript" is informational. A file that says "always type-check before committing; never &lt;code&gt;as any&lt;/code&gt;; new components require stories" is operational. The second produces consistent behavior; the first does not.&lt;/p&gt;
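
&lt;p&gt;As a concrete sketch, the top of such a file might read like this (the project details here are hypothetical):&lt;/p&gt;

```markdown
# Project instructions (hypothetical excerpt)

- ALWAYS run `npm run typecheck` before committing; never use `as any`.
- New components require a Storybook story before merge.
- Default to the smallest diff that solves the stated problem; stop and
  ask before expanding scope.
```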

&lt;p&gt;Specific imperatives that have proven necessary in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default to NO.&lt;/strong&gt; When uncertain, ship LESS, not more. Minimum-diff wins.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Three-callers rule for helpers.&lt;/strong&gt; Do not extract a helper function unless it has 3+ callers OR the inline version is genuinely unreadable. Two similar lines are fine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests catch real bugs.&lt;/strong&gt; A test exists if and only if its absence would let a real bug ship. Tests for getters, framework behavior, or trivial arithmetic do not exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docstrings only when WHY is non-obvious.&lt;/strong&gt; Function name + types document the WHAT. Add a docstring only when there is a hidden constraint, subtle invariant, or workaround for a specific bug.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No speculative APIs.&lt;/strong&gt; Do not expose a config var, parameter, hook, or option for a hypothetical future caller. Add it WHEN the caller exists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No "for completeness" expansions.&lt;/strong&gt; Ship what was asked for. The user can request more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read ONE reference example before writing similar code.&lt;/strong&gt; Pick a canonical instance of the pattern you are about to reproduce. Read it end-to-end. Then write yours to match.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These rules override default thoroughness instincts. They have to be in the file, in imperative voice, near the top.&lt;/p&gt;

&lt;h3&gt;2. A memory index&lt;/h3&gt;

&lt;p&gt;A curated list of pointers to longer documents — design decisions, lessons learned, rules captured from past sessions. The index lives in the auto-loaded surface; the documents themselves load on demand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep entries one line each.&lt;/strong&gt; A good entry: &lt;code&gt;Auth tokens never logged — Why: 2024-Q3 incident exposed PII; How to apply: enforce in middleware, not at log call sites.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prune regularly.&lt;/strong&gt; Memory accumulates. Most entries from six months ago point to obsolete state. Stale memory is worse than no memory because it actively misleads. Read every entry quarterly; delete what is no longer true.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use it for surprising things, not obvious things.&lt;/strong&gt; "We use TypeScript" does not belong in memory. "Our TypeScript build is slow because X — workaround Y until Z lands" does.&lt;/p&gt;

&lt;h3&gt;3. Reference examples in the codebase itself&lt;/h3&gt;

&lt;p&gt;Pick canonical files for each major pattern. Make them genuinely good. Then point at them from your project instruction file ("when implementing a new auth flow, read &lt;code&gt;auth/oauth_flow.ts&lt;/code&gt; end-to-end first").&lt;/p&gt;

&lt;p&gt;The codebase teaches when there is an obvious-good example to copy. If every implementation is slightly different, the codebase is silent.&lt;/p&gt;




&lt;h2&gt;Process anti-patterns to avoid&lt;/h2&gt;

&lt;p&gt;Patterns that look productive but produce bloat:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Over-specification before validation.&lt;/strong&gt; Writing detailed implementation plans for features you have not tested. The plan becomes the yardstick; you ship the plan; you discover three weeks later the plan was wrong. Build a minimum spike, validate it works the way you thought, THEN write the implementation plan if you still need one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adversarial QA between every step.&lt;/strong&gt; A workflow with a review gate after every implementation step pressures completeness. Use review for catching real bugs, not for grading whether enough was shipped.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple parallel primitives for similar problems.&lt;/strong&gt; When you need a thing, search for an existing thing first. If two existing things partially fit, refactor one of them rather than building a third. A decision tree "use X if A, Y if B, Z if C" is a smell — it usually means you have three things that should be one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tests-as-coverage.&lt;/strong&gt; Coverage metrics reward writing tests, not catching bugs. A 95%-coverage suite full of trivial assertions is worse than a 60%-coverage suite that catches every regression you have ever seen. Track real bugs caught, not lines of test code.&lt;/p&gt;
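
&lt;p&gt;A hypothetical Python illustration of the difference: the first test guards a boundary where a real off-by-one could ship; the second merely restates the implementation and could never fail on its own:&lt;/p&gt;

```python
def paginate(items, page, per_page):
    """Return the 1-indexed page of items."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

# Worth keeping: guards the last-partial-page boundary, where a real
# off-by-one bug would actually ship.
def test_last_partial_page():
    assert paginate(list(range(10)), page=4, per_page=3) == [9]

# Not worth keeping: restates the slice; no plausible regression is caught.
def test_first_page():
    assert paginate([1, 2, 3], page=1, per_page=3) == [1, 2, 3]
```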

&lt;p&gt;&lt;strong&gt;Implementing every flagged item.&lt;/strong&gt; When a review/QA agent (or a human reviewer) flags something, the right response is "is this actually a problem worth fixing?" — not "let me address every comment." Most flags are noise; the discipline is choosing.&lt;/p&gt;




&lt;h2&gt;A lightweight process that works&lt;/h2&gt;

&lt;p&gt;After hitting all the above failure modes, a process that has actually held up over months:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User describes goal.&lt;/strong&gt; Brief — what they want done, why.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI reads the relevant reference example + the relevant doc, in that order.&lt;/strong&gt; Not exhaustive research — one good reference, one relevant doc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI proposes minimum implementation in ≤200 words.&lt;/strong&gt; What it will change, what it will NOT change. Names files, names functions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User approves or redirects.&lt;/strong&gt; Cheap — usually one or two messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI executes.&lt;/strong&gt; Builds the minimum.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests at the end.&lt;/strong&gt; User runs them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If wrong, iterate.&lt;/strong&gt; Cheaper than a four-step gated review with QA agents.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key shift: &lt;strong&gt;the user is QA.&lt;/strong&gt; Not because the user wants to be, but because the user is the only QA in the loop with judgment about what should ship vs. what should not. AI agents do not have that judgment — they have completeness checks.&lt;/p&gt;

&lt;p&gt;This process works because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It surfaces design tension EARLY (at the proposal step, before any code)&lt;/li&gt;
&lt;li&gt;It limits blast radius (minimum implementation = small revert if wrong)&lt;/li&gt;
&lt;li&gt;It keeps the user in the loop without burning their time on every detail&lt;/li&gt;
&lt;li&gt;It defaults to "stop and ask" when uncertain rather than "do more"&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;The validation-tool principle&lt;/h2&gt;

&lt;p&gt;The single most important shift after a year of joint work: &lt;strong&gt;if you cannot validate a feature by using it, do not over-specify it on paper.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The instinct when designing something complex is to specify everything in advance — every decision, every interaction, every edge case. That instinct produces design docs that describe systems too complex for any human to hold in their head, and that the AI then faithfully implements at full complexity.&lt;/p&gt;

&lt;p&gt;The opposite instinct works better: build the minimum scaffolding required to USE the feature, even if it is ugly. Use it. Discover what was actually important vs. what was design speculation. THEN refine.&lt;/p&gt;

&lt;p&gt;This applies at every scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do not write detailed user-flow specs until you can click through the flow&lt;/li&gt;
&lt;li&gt;Do not design a complex permissions system until you have users hitting permission boundaries&lt;/li&gt;
&lt;li&gt;Do not spec an admin UI before you have admins&lt;/li&gt;
&lt;li&gt;Do not write a CMS before you have content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In each case, the absence of the validation tool is what makes over-specification feel safe. It is not safe — it is just invisible. Build the validation tool first.&lt;/p&gt;




&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;If you are starting a long project with an AI coding tool:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up a project instruction file with imperative behavioral rules. Keep it small. Update it when patterns drift.&lt;/li&gt;
&lt;li&gt;Maintain a curated memory index. Prune quarterly.&lt;/li&gt;
&lt;li&gt;Pick reference examples in the codebase. Point at them from the instruction file.&lt;/li&gt;
&lt;li&gt;Default to NO. Minimum-diff wins. Three-callers rule for helpers. Tests catch real bugs.&lt;/li&gt;
&lt;li&gt;Do not use design docs as ship targets. Do not use adversarial QA as a gate. Do not extract abstractions early.&lt;/li&gt;
&lt;li&gt;Build validation tools BEFORE you over-specify the things they validate.&lt;/li&gt;
&lt;li&gt;The user is QA. AI agents do not have shipping judgment. Trust the model to generate; trust yourself to decide what to keep.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The best software engineers are not the ones who write the most code. They are the ones who write the least code that actually solves the problem. AI tools shift the cost of writing code toward zero. That makes the discipline of writing LESS code more important, not less.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Can the Cloud Resume Challenge Teach Me AWS?</title>
      <dc:creator>cner-smith</dc:creator>
      <pubDate>Wed, 12 Apr 2023 23:24:54 +0000</pubDate>
      <link>https://forem.com/cnersmith/can-the-cloud-resume-challenge-teach-me-aws-5c8</link>
      <guid>https://forem.com/cnersmith/can-the-cloud-resume-challenge-teach-me-aws-5c8</guid>
      <description>&lt;p&gt;Are you tired of sending out resumes and never hearing back from potential employers? Do you want to showcase your skills and experience in a unique and impressive way? Look no further! In this blog post, I will take you on a journey of creating a resume website as part of the AWS Cloud Resume Challenge, an innovative approach to standing out in the competitive job market.&lt;/p&gt;

&lt;p&gt;As a job seeker in today's digital age, having a traditional paper resume is no longer enough. Employers are increasingly looking for candidates who can demonstrate their technical expertise and creativity. That's where the AWS Cloud Resume Challenge comes in. It's a hands-on project that allows you to create a resume website using Amazon Web Services (AWS) and show off your skills in a practical and dynamic way.&lt;/p&gt;

&lt;p&gt;The first step in my journey was to plan out my resume website. I brainstormed ideas and thought about what sets me apart from other job candidates. I wanted my website to reflect my personality, highlight my professional achievements, and showcase my technical skills. I made a list of the AWS services I wanted to use, such as Amazon S3 for hosting my website, Amazon Route 53 for domain registration, and AWS Lambda for serverless computing.&lt;/p&gt;

&lt;p&gt;Once I had a clear vision for my resume website, I set up my AWS account and started building my website using the AWS Console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hosting the Website with Amazon S3&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Amazon Simple Storage Service (S3) is a highly scalable and durable object storage service that can be used to host static websites. I created an S3 bucket to store my website's HTML, CSS, and JavaScript files. I then configured the bucket to act as a static website by enabling the static website hosting feature in the S3 console. I also set up permissions to make the website files publicly accessible. This allowed me to host my resume website and make it accessible to visitors via a website URL.&lt;/p&gt;
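
&lt;p&gt;For reference, the public-access step typically comes down to a bucket policy like the following (the bucket name is a placeholder; serving privately through CloudFront is the stricter alternative):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-resume-bucket/*"
    }
  ]
}
```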

&lt;p&gt;&lt;strong&gt;Registering a Custom Domain with Amazon Route 53&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Amazon Route 53 is a domain registration and DNS management service that I used to register a custom domain name for my resume website. I followed the documentation provided by AWS to register a domain name of my choice, such as connersmith.net, using Route 53. Once the domain was registered, I configured the DNS settings to route traffic to my S3 bucket hosting the website. This involved creating a Route 53 hosted zone, adding a record set for the domain name, and specifying the S3 bucket as the target for the domain name. It's important to note that DNS changes can take some time to propagate globally, so I had to be patient and wait for the changes to take effect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing Serverless Functions with AWS Lambda:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the key components of my resume website was a dynamic visitor counter that keeps track of the number of visitors to my website. To implement this feature, I decided to leverage the power of serverless computing using AWS Lambda.&lt;/p&gt;

&lt;p&gt;AWS Lambda is a serverless compute service that allows you to run code in response to events without having to manage any servers. It can be used to create small, focused functions that can perform specific tasks and integrate with other AWS services, such as DynamoDB and API Gateway, to build serverless applications.&lt;/p&gt;

&lt;p&gt;To implement the visitor counter, I first created a DynamoDB table to store the visitor count. DynamoDB is a managed NoSQL database service provided by AWS that offers low-latency and scalable performance. I defined the table schema with a primary key for the counter item and provisioned read and write capacity units sized for the expected workload.&lt;/p&gt;

&lt;p&gt;Next, I created an AWS Lambda function that would be triggered by an API Gateway endpoint whenever a visitor accessed my website. The Lambda function would increment the visitor count in the DynamoDB table and return the updated count to the API Gateway, which would then be displayed on the website.&lt;/p&gt;

&lt;p&gt;I used Python 3.8 as the runtime for my Lambda function and wrote the function logic in Python. The code used the AWS SDK for Python (boto3) to interact with DynamoDB, specifically the update_item method to atomically increment the visitor count in the table.&lt;/p&gt;
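
&lt;p&gt;A minimal sketch of such a handler in Python with boto3 (the table name, key schema, and response shape are assumptions; the response shaping lives in a pure helper, and the boto3 import is deferred so the helper can be exercised without AWS access):&lt;/p&gt;

```python
import json

TABLE_NAME = "VisitorCount"  # hypothetical table name

def build_response(count):
    """Shape the API Gateway proxy response the front end expects."""
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": count}),
    }

def lambda_handler(event, context):
    import boto3  # deferred so the pure helper above is testable offline
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    # ADD atomically increments the counter, creating it on the first visit
    resp = table.update_item(
        Key={"id": "site"},
        UpdateExpression="ADD visits :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return build_response(int(resp["Attributes"]["visits"]))
```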

&lt;p&gt;I then created an API Gateway REST API with a resource and method that would trigger the Lambda function when called. I configured the API Gateway to pass the request from the website to the Lambda function and return the response to the website.&lt;/p&gt;

&lt;p&gt;Once the Lambda function and API Gateway were set up, I tested the visitor counter by accessing my website and verifying that the count was incremented in the DynamoDB table and displayed correctly on the website. I also added error handling in the Lambda function to handle cases where the DynamoDB update fails, ensuring the reliability of the visitor counter.&lt;/p&gt;

&lt;p&gt;Throughout the implementation process, I faced some challenges, such as understanding the event-driven nature of Lambda and API Gateway, configuring the permissions and roles for the Lambda function to interact with DynamoDB securely, and troubleshooting issues related to API Gateway integration. However, with careful reading of documentation, experimenting with different configurations, and thorough testing, I was able to overcome these challenges and successfully implement the visitor counter as a serverless function using AWS Lambda.&lt;/p&gt;

&lt;p&gt;In conclusion, leveraging the power of serverless computing with AWS Lambda allowed me to implement a dynamic visitor counter for my resume website in a scalable, cost-effective, and serverless manner. It provided the flexibility to handle varying workloads, automatically scale based on demand, and offload the management of servers, allowing me to focus on the application logic rather than infrastructure management.&lt;/p&gt;

&lt;p&gt;Next, I focused on optimizing my resume website for performance and security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optimizing Performance with Amazon CloudFront:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon CloudFront is a content delivery network (CDN) that helps improve the performance of websites by caching and delivering content from edge locations around the world. I used CloudFront to distribute my website's content, such as images and CSS files, to edge locations, which are geographically distributed points of presence. This helped reduce the latency and improved the load times for visitors accessing my website from different locations. I configured CloudFront to use my S3 bucket as the origin, which is the source of the content that CloudFront caches and serves. I also enabled SSL encryption using AWS Certificate Manager to ensure that the content served by CloudFront is encrypted and secure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design and User Experience:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While the technical implementations were crucial, I also paid close attention to the design and user experience of my resume website. I used modern web design principles, such as responsive design, clean layout, and visually appealing typography, to create a professional and polished website. I made sure that the website was accessible, following web accessibility guidelines such as providing alternative text for images, using semantic HTML tags, and ensuring proper color contrast for readability. I also optimized the website for different devices, such as desktops, tablets, and mobile devices, to ensure a consistent experience across different platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevOps Practices and CI/CD:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the final stages of creating my resume website, I implemented a robust CI/CD (Continuous Integration/Continuous Deployment) pipeline using GitHub and GitHub Actions. This allowed me to dive deeper into the intricacies of Git version control and gain further expertise in working with Linux and the command line. I set up GitHub Actions workflows to automatically build, test, and deploy my website whenever changes were pushed to the repository, ensuring that the latest version of my website was always deployed to production.&lt;/p&gt;
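
&lt;p&gt;A minimal sketch of such a workflow (the bucket name, site path, and region are placeholders; the AWS CLI comes preinstalled on GitHub-hosted Ubuntu runners):&lt;/p&gt;

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Sync the static site to the S3 bucket, removing deleted files
      - name: Deploy to S3
        run: aws s3 sync ./site s3://example-resume-bucket --delete
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
```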

&lt;p&gt;In addition to the CI/CD pipeline, I took on the challenge of converting my entire website's infrastructure to Terraform, a widely used Infrastructure-as-Code (IaC) tool. This required me to define my entire infrastructure, including AWS resources like S3 buckets, CloudFront distributions, API Gateway, Lambda functions, and DynamoDB tables, as code using Terraform configuration files. I learned the ins and outs of Terraform, including its declarative syntax, resource lifecycle management, and state management. I also gained a deep understanding of the benefits of using IaC, such as versioning, reproducibility, and scalability, in managing cloud infrastructure.&lt;/p&gt;
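
&lt;p&gt;As a taste of what that looks like, the S3 hosting portion might be expressed roughly as follows (resource and bucket names are placeholders, using the split-resource style of recent AWS provider versions):&lt;/p&gt;

```hcl
resource "aws_s3_bucket" "site" {
  bucket = "example-resume-bucket" # placeholder name
}

resource "aws_s3_bucket_website_configuration" "site" {
  bucket = aws_s3_bucket.site.id

  index_document {
    suffix = "index.html"
  }
}
```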

&lt;p&gt;This extra challenge of implementing Terraform for my website's infrastructure proved to be highly rewarding. It gave me a comprehensive understanding of managing infrastructure as code, allowing me to treat my infrastructure as a software project with version control, automated testing, and continuous deployment. It also provided me with the ability to easily update and scale my infrastructure, and quickly recover from any issues using Terraform's infrastructure management capabilities. Overall, implementing CI/CD and converting my infrastructure to Terraform greatly enhanced the efficiency, reliability, and scalability of my resume website, and provided me with valuable skills in modern DevOps practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges faced along the way:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Learning Curve:&lt;/em&gt; As with any new technology or service, there was a learning curve involved in getting familiar with the AWS services and their configuration. I had to spend time reading documentation, watching tutorials, and experimenting with the services to understand how they work and how they can be integrated to create a resume website. It required patience and perseverance to overcome the initial challenges and gain confidence in using the AWS services effectively.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Managing Permissions and Roles:&lt;/em&gt; AWS uses a fine-grained permissions model that requires careful management of permissions and roles to ensure that the right level of access is granted to different resources and services. Setting up the correct permissions for the S3 bucket, Lambda functions, API Gateway, and CloudFront distribution required careful planning and configuration to ensure that the website functions properly and securely. Managing IAM roles and policies, understanding resource policies, and troubleshooting permissions issues were some of the challenges I faced along the way.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Debugging and Troubleshooting:&lt;/em&gt; Building a resume website with multiple AWS services involves complex interactions and configurations. Debugging and troubleshooting issues can be challenging, especially when dealing with issues related to API Gateway, Lambda functions, DNS settings, and CloudFront configurations. I had to learn how to use AWS CloudWatch, logs, and other monitoring tools to diagnose and resolve issues effectively.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Deployment and Testing:&lt;/em&gt; Deploying changes to the website and testing the functionality required careful planning and coordination. I had to ensure that changes made in one service, such as updating Lambda function code, were properly tested and deployed without disrupting the overall website functionality. Testing the website on different devices, browsers, and network conditions to ensure a consistent user experience was also a challenge that required thorough testing and iteration.&lt;/p&gt;

&lt;p&gt;In conclusion, the AWS Cloud Resume Challenge was a challenging but fulfilling experience. It allowed me to showcase my skills and experience in a creative and innovative way, setting me apart from other job candidates. The journey of creating my resume website using AWS services was a valuable learning experience that not only improved my technical skills but also helped me better understand the capabilities of cloud computing. I am confident that my resume website will impress potential employers and open doors to exciting job opportunities. If you're looking to stand out in the competitive job market, I highly recommend taking on the AWS Cloud Resume Challenge and creating your own stellar resume website using AWS services. Good luck!&lt;/p&gt;

&lt;p&gt;Check out my website at &lt;a href="https://connersmith.net" rel="noopener noreferrer"&gt;https://connersmith.net&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;This blog post was edited with the assistance of generative AI for professionalism.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>challenge</category>
      <category>learning</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
