<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Taha Yağız Güler</title>
    <description>The latest articles on Forem by Taha Yağız Güler (@tahayagizguler).</description>
    <link>https://forem.com/tahayagizguler</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F952125%2Ffaa47bfc-be25-4e2b-abe3-737a4fc38024.jpeg</url>
      <title>Forem: Taha Yağız Güler</title>
      <link>https://forem.com/tahayagizguler</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tahayagizguler"/>
    <language>en</language>
    <item>
      <title>Using the Power of Claude Code with Free Models (OpenRouter), Without Depending on the Anthropic API</title>
      <dc:creator>Taha Yağız Güler</dc:creator>
      <pubDate>Mon, 23 Mar 2026 15:40:29 +0000</pubDate>
      <link>https://forem.com/tahayagizguler/claude-codeun-gucunu-anthropic-api-bagimliligi-olmadan-ucretsiz-modellerle-openrouter-kullanmak-4fim</link>
      <guid>https://forem.com/tahayagizguler/claude-codeun-gucunu-anthropic-api-bagimliligi-olmadan-ucretsiz-modellerle-openrouter-kullanmak-4fim</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Is it possible to use Claude Code without paying for an Anthropic subscription? Yes, and I explain how, step by step.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Introduction: An Escape Plan for Anyone Hitting the "Usage Limit Reached" Wall
&lt;/h2&gt;

&lt;p&gt;You are in the middle of fixing a critical bug. Then Claude Code drops a cold message on the screen: "Claude usage limit reached".&lt;/p&gt;

&lt;p&gt;Sounds familiar, doesn't it?&lt;/p&gt;

&lt;p&gt;Or maybe you wake up with this question: &lt;em&gt;"Do I really have to pay $20 a month for Claude Code? Can't I use another model?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Yes, you can.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude Code is not hard-wired to the Anthropic API. It doesn't care which URL it sends its requests to behind the scenes. Change that URL, and you can route the requests to any model you like.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is OpenRouter?
&lt;/h2&gt;

&lt;p&gt;OpenRouter is a platform that lets you use more than 400 AI models through a single API. Claude, GPT, Gemini, Llama, DeepSeek, Qwen… they are all there.&lt;/p&gt;

&lt;p&gt;The real beauty is that it hosts more than 39 free models. Their daily quota is limited, but it is more than enough for learning, experiments, and side projects.&lt;/p&gt;




&lt;h2&gt;
  
  
  Let's Get Started: Setup in 5 Minutes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 — Install Claude Code (if you don't already have it)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @anthropic-ai/claude-code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 — Get a Free API Key on OpenRouter
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://openrouter.ai" rel="noopener noreferrer"&gt;openrouter.ai&lt;/a&gt; adresine gidin, ücretsiz hesap ve sonrasında ayarlar kısmından API Key’inizi oluşturun.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 — Set the Environment Variables
&lt;/h3&gt;

&lt;p&gt;Add the following lines to your shell profile (&lt;code&gt;~/.zshrc&lt;/code&gt; or &lt;code&gt;~/.bashrc&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://openrouter.ai/api"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_AUTH_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sk-or-ANAHTARINIZ_BURAYA"&lt;/span&gt; &lt;span class="c"&gt;#OpenRouter API Key&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Leaving &lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt; &lt;strong&gt;empty&lt;/strong&gt; is deliberate — it makes Claude Code talk to OpenRouter instead of Anthropic.&lt;/p&gt;
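Before relaunching, you can sanity-check the variables with a snippet like this. It is only a sketch: it falls back to placeholder values so the logic runs anywhere, and it just confirms the two conditions that matter.

```shell
# Sanity check (sketch): requests should go to OpenRouter, and the Anthropic
# key must be empty. The fallback values here are placeholders, not real keys.
ANTHROPIC_BASE_URL="${ANTHROPIC_BASE_URL:-https://openrouter.ai/api}"
ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY:-}"

if [ -n "$ANTHROPIC_BASE_URL" ] && [ -z "$ANTHROPIC_API_KEY" ]; then
  echo "OK: requests will go to $ANTHROPIC_BASE_URL"
else
  echo "check your exports"
fi
```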

&lt;p&gt;Then reload your shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ~/.zshrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 — Pick a Model and Launch
&lt;/h3&gt;

&lt;p&gt;Browse the &lt;a href="https://openrouter.ai/models?q=free" rel="noopener noreferrer"&gt;free&lt;/a&gt; models on OpenRouter and pick one you like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude &lt;span class="nt"&gt;--model&lt;/span&gt; &lt;span class="s2"&gt;"openrouter-model-name"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
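If you settle on a favourite model, a small wrapper function saves retyping the flag. The model ID below is only an example of OpenRouter's vendor/model:free naming and may not be available when you read this; the actual claude call is commented out so the sketch runs standalone.

```shell
# Hypothetical helper: start Claude Code with a free OpenRouter model.
# "deepseek/deepseek-r1:free" is an example ID; check openrouter.ai/models.
free_claude() {
  model="${1:-deepseek/deepseek-r1:free}"
  echo "launching claude with model: $model"
  # claude --model "$model"   # uncomment for real use
}

free_claude
```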



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr66bes9ph7rq051zi2sq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr66bes9ph7rq051zi2sq.PNG" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it. Claude Code is now running through OpenRouter.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Do I Know It's Working?
&lt;/h2&gt;

&lt;p&gt;Run the &lt;code&gt;/status&lt;/code&gt; command inside Claude Code. You can also watch your requests live in OpenRouter's Activity dashboard.&lt;/p&gt;

&lt;p&gt;If you want to switch models mid-session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/model &amp;lt;model-name-here&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Things to Watch Out For
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Are free models always enough?&lt;/strong&gt;&lt;br&gt;
For tasks like reading code, scaffolding, explaining errors, and writing tests, the results are generally satisfying. For critical production work or deep refactors, I still recommend premium models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not Every Model Can Do Everything&lt;/strong&gt;&lt;br&gt;
Claude Code features such as reading and writing files or running terminal commands (tool calling) require model support. GPT-OSS, Qwen3, and Gemini are reliable here; lightweight models may lack it. Real-time response streaming also depends on the model. Checking the model's page on OpenRouter first is a good habit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Claude Code is far more flexible a tool than you might think. You can change which API it talks to with one URL and one key. Everything else stays the same.&lt;/p&gt;

&lt;p&gt;Four lines of code. Five minutes. And now you have access to any model.&lt;/p&gt;

&lt;p&gt;If you don't want to pay an Anthropic bill, or you simply want to try different models, this is your starting point.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cli</category>
      <category>tooling</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>DevOps Pipeline: Continuous Integration from a Go Application to Kubernetes</title>
      <dc:creator>Taha Yağız Güler</dc:creator>
      <pubDate>Sat, 23 Nov 2024 16:25:03 +0000</pubDate>
      <link>https://forem.com/tahayagizguler/2ntech-proje-1ka6</link>
      <guid>https://forem.com/tahayagizguler/2ntech-proje-1ka6</guid>
      <description>&lt;p&gt;&lt;strong&gt;DevOps Pipeline: Continuous Integration from a Go Application to Kubernetes&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Project Goal&lt;/strong&gt;&lt;br&gt;
In this project, I focused on developing a web application written in Go, containerizing it with Docker, setting up a CI/CD process with GitHub Actions, automating the application's deployment with Kubernetes, and integrating continuous delivery with Argo CD.&lt;/p&gt;

&lt;p&gt;You can reach the project's GitHub repo &lt;a href="https://github.com/tahayagizguler/devops-go-webapp/tree/main" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Creating the Go Web Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below are the unit tests I wrote for the handlers of a simple Go web application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestHomePage(t *testing.T) {
    req, err := http.NewRequest("GET", "/", nil)
    if err != nil {
        t.Fatal(err)
    }

    rr := httptest.NewRecorder()
    handler := http.HandlerFunc(homePage)

    handler.ServeHTTP(rr, req)

    if status := rr.Code; status != http.StatusOK {
        t.Errorf("homePage handler returned wrong status code: got %v want %v", status, http.StatusOK)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestHomeEndpoint(t *testing.T) {
    req, err := http.NewRequest("GET", "/home", nil)
    if err != nil {
        t.Fatal(err)
    }

    rr := httptest.NewRecorder()
    handler := http.HandlerFunc(homePage)

    handler.ServeHTTP(rr, req)

    if status := rr.Code; status != http.StatusOK {
        t.Errorf("home handler returned wrong status code: got %v want %v", status, http.StatusOK)
    }
}

func TestAboutPage(t *testing.T) {
    req, err := http.NewRequest("GET", "/about", nil)
    if err != nil {
        t.Fatal(err)
    }

    rr := httptest.NewRecorder()
    handler := http.HandlerFunc(aboutPage)

    handler.ServeHTTP(rr, req)

    if status := rr.Code; status != http.StatusOK {
        t.Errorf("aboutPage handler returned wrong status code: got %v want %v", status, http.StatusOK)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Containerization — Using a Dockerfile&lt;/strong&gt;&lt;br&gt;
Here I used a multi-stage Dockerfile to keep the image size down. The Dockerfile is prepared as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM golang:1.23 AS base

WORKDIR /app

COPY go.mod .

RUN go mod download

COPY . .

RUN go build -o main .

# Final stage - Distroless image.

FROM gcr.io/distroless/base

COPY --from=base /app/main .

COPY --from=base /app/static ./static

# Expose the port on which the application will run
EXPOSE 8080

# Command to run the application
CMD ["./main"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Deployment with Kubernetes&lt;/strong&gt;&lt;br&gt;
After containerizing the application with Docker, I deployed it with Kubernetes (K8s). Using EKS (Amazon Elastic Kubernetes Service), I created a Kubernetes cluster and deployed the Go application to it. I created the cluster with eksctl rather than Terraform, because a job this short didn't call for it.&lt;br&gt;
"If you would like to look at my Terraform projects, you can browse my &lt;a href="https://github.com/tahayagizguler" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; profile."&lt;/p&gt;

&lt;p&gt;The YAML files used for the application's Kubernetes deployment are given below as examples:&lt;/p&gt;

&lt;p&gt;Deployment: manages the rollout and lifecycle of the application.&lt;br&gt;
Service: gives the application a stable in-cluster address.&lt;br&gt;
Ingress: routes external traffic to the service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
  labels:
    app: go-web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-web-app
  template:
    metadata:
      labels:
        app: go-web-app
    spec:
      containers:
      - name: go-web-app
        image: tyguler/go-web-app:v1
        ports:
        - containerPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-web-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: go-web-app.local
    http:
      paths: 
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-web-app
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Service
apiVersion: v1
kind: Service
metadata:
  name: go-web-app
  labels:
    app: go-web-app
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: go-web-app
  type: ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Application Management with Helm&lt;/strong&gt;&lt;br&gt;
To simplify application management on Kubernetes, I decided to use Helm. Helm is a package manager for Kubernetes that streamlines deploying, configuring, and versioning an application.&lt;/p&gt;

&lt;p&gt;Helm charts provide templates for managing Kubernetes resources, which makes them reusable across environments.&lt;/p&gt;

&lt;p&gt;First I created a chart:&lt;br&gt;
&lt;code&gt;helm create go-web-app-chart&lt;/code&gt;&lt;br&gt;
Then I copied the K8s YAML files into the templates folder and introduced a variable so the image can be managed:&lt;br&gt;
&lt;code&gt;image: tyguler/go-web-app:{{ .Values.image.tag }}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This way, everything is controlled from the chart's &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;br&gt;
I tested the whole setup with &lt;code&gt;helm install go-web-app ./go-web-app-chart&lt;/code&gt; and then cleaned up the resources with &lt;code&gt;helm uninstall go-web-app&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. CI/CD with GitHub Actions&lt;/strong&gt;&lt;br&gt;
I used GitHub Actions to automatically test, build, and ship the application. After every commit, it runs the tests, builds the Docker image, and pushes it to DockerHub. This forms the backbone of the continuous integration (CI) process.&lt;/p&gt;

&lt;p&gt;The GitHub Actions workflow looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI

on:
  push:
    branches:
      - main
    paths-ignore:
      - 'README.md'
      - 'helm/**'

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Set up Go 1.23
      uses: actions/setup-go@v5
      with:
        go-version: '1.23'

    - name: Build
      run: go build -o go-web-app

    - name: Test
      run: go test ./... # ./... means all subdirectories

  code_quality:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: GolangCI-Lint
      uses: golangci/golangci-lint-action@v6
      with:
        version: latest

  push:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v1

    - name: Login to DockerHub
      uses: docker/login-action@v2
      with:
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}

    - name: Build and push
      uses: docker/build-push-action@v6
      with:
        context: .
        file: ./Dockerfile
        push: true
        tags: ${{ secrets.DOCKER_USERNAME }}/go-web-app:${{ github.run_id }}

  update-newtag-in-helm-chart:
    runs-on: ubuntu-latest

    needs: push

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Update Helm chart
      run: |
        # Update the Helm chart tag; double quotes so ${GITHUB_RUN_ID} expands
        sed -i "s/tag: .*/tag: ${GITHUB_RUN_ID}/" ./go-web-app-chart/values.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
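The quoting in the tag-update step matters: inside single quotes the shell does not expand GITHUB_RUN_ID, so the chart would end up with the literal placeholder string instead of the run ID. The behaviour can be reproduced locally with stand-in values (the file path and run ID below are illustrative):

```shell
# Local demo of the Helm tag bump; GITHUB_RUN_ID is a stand-in for the value
# GitHub Actions provides at runtime.
GITHUB_RUN_ID="12345"
printf 'image:\n  repository: tyguler/go-web-app\n  tag: placeholder\n' > /tmp/values.yaml

# Double quotes let ${GITHUB_RUN_ID} expand; single quotes would write it literally.
sed -i "s/tag: .*/tag: ${GITHUB_RUN_ID}/" /tmp/values.yaml
grep 'tag:' /tmp/values.yaml
```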



&lt;p&gt;&lt;strong&gt;6. Continuous Delivery with Argo CD&lt;/strong&gt;&lt;br&gt;
Finally, I completed the continuous delivery process with Argo CD. Argo CD deploys to Kubernetes clusters automatically, ensuring the correct version of the application is always running. I drove continuous delivery through Argo CD using my Helm chart.&lt;/p&gt;

&lt;p&gt;After installing Argo CD, I added my Helm chart as a source and performed manual and automatic deployments through Argo CD.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pjrmikvmd4nthcfxdyo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pjrmikvmd4nthcfxdyo.PNG" alt=" " width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
The project let me automate application development and deployment with Docker, Kubernetes, Helm, GitHub Actions, and Argo CD. Integrating each of these pieces correctly, and standing up working CI and CD processes, made delivery faster and more reliable. One of the most important things I learned in this project is how different tools and technologies fit together, and how to manage that integration effectively.&lt;/p&gt;

&lt;p&gt;Challenges&lt;br&gt;
Although the project was completed successfully, I ran into some difficulties. Each of them turned out to be a valuable learning opportunity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;While setting up CI/CD with GitHub Actions, I hit problems especially in the test stages. The steps needed to build the Docker images correctly and push them to DockerHub occasionally failed. To get past this, I carefully tuned the workflow and made sure each step runs in the right order.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While automating continuous delivery with Argo CD, getting the integration with Helm charts right sometimes became complicated. I studied Argo CD's Helm integration, used a suitable chart version, and went through several iterations before the resources were defined in a way Argo CD could work with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Managing the Kubernetes cluster can be demanding, especially when manual configuration changes are needed. Running the cluster on EKS occasionally got complicated due to infrastructure errors or network configuration issues. I worked through these by consulting the AWS documentation and following best-practice examples.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Mastodon Bot with AWS Lambda, S3, CloudWatch, and SSM</title>
      <dc:creator>Taha Yağız Güler</dc:creator>
      <pubDate>Thu, 23 Mar 2023 13:08:57 +0000</pubDate>
      <link>https://forem.com/tahayagizguler/mastadon-bot-with-aws-lambda-s3-cloudwatch-and-ssm-2bmf</link>
      <guid>https://forem.com/tahayagizguler/mastadon-bot-with-aws-lambda-s3-cloudwatch-and-ssm-2bmf</guid>
      <description>&lt;p&gt;In this post I show you how I set up the MastadonBot to share random frames from the movie Taxi Driver.&lt;br&gt;
This bot was created by taking inspiration from &lt;a href="https://www.cameronezell.com/creating-a-mastodon-bot-with-aws-lambda/" rel="noopener noreferrer"&gt;Cameron Ezell&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I also created this bot using Terraform. You can browse the &lt;a href="https://github.com/tahayagizguler/MastadonBot_AWS" rel="noopener noreferrer"&gt;MastadonBot_AWS repo on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;How I Got the Frames&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I wrote a Python script, "&lt;a href="https://github.com/tahayagizguler/ExtractImagesFromVideoFile" rel="noopener noreferrer"&gt;ExtractImagesFromVideoFile&lt;/a&gt;", that captures screenshots from a video file at specified intervals (e.g., every 1 second) using ffmpeg.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Uploading frames!&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;First of all, I created an S3 bucket and uploaded the screenshots I wanted to share to it, keeping them all under a common key prefix.&lt;/p&gt;

&lt;p&gt;With about 7000 screenshots saved, uploading them from my machine would have taken quite a while, so I created an EC2 instance, transferred the files via FTP, and then copied them to the S3 bucket with the AWS CLI: &lt;code&gt;aws s3 cp ./ s3://your-bucket/ --recursive&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Create your Application&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For anyone interested in building a Mastodon bot, make sure the instance you create your bot on is fine with automated accounts.&lt;/p&gt;

&lt;p&gt;After signing up, create an app from Preferences &amp;gt; Development.&lt;br&gt;&lt;br&gt;
 (&lt;a href="https://botsin.space/about" rel="noopener noreferrer"&gt;botsin.space&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1vefcuicab5zsofxg1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1vefcuicab5zsofxg1c.png" alt="App" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
Enter a name for your application and default values for now.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zjdeqlnimla4aqswpa5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zjdeqlnimla4aqswpa5.jpg" alt="Keys" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I keep the access token as SecureString type using the AWS Systems Manager Parameter Store. This is an important step for security.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Lambda Function&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A Boto3 (AWS Lambda) script that randomly selects one of the screenshots stored in S3 and posts it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import mastodon
from mastodon import Mastodon
import boto3
import os
import random
import re

def lambda_handler(event, context):
    BASE_URL = "https://botsin.space"
    BUCKET_NAME = "movieframebucket" # XXXXXXX
    keyArray = []
    ssm = boto3.client("ssm")

    auth_keys = ssm.get_parameters(
        Names=["my_consumer_key"], WithDecryption=True) # XXXXXXX
    access_token = auth_keys["Parameters"][0]["Value"]

    m = Mastodon(access_token=access_token, api_base_url=BASE_URL)

    s3 = boto3.resource("s3")
    s3bucket = s3.Bucket(BUCKET_NAME)

    try:
        for obj in s3bucket.objects.filter(Prefix="taxi_driver_frames/"): # XXXXXXX
            # add all frame key values from episode to an array
            keyArray.append("{0}".format(obj.key))
    except Exception as e:
        # If there is an error, raise the exception and stop the function
        raise e

    numFrames = len(keyArray)
    # randint is inclusive on both ends, so the upper bound must be numFrames - 1
    randomFrame = random.randint(0, numFrames - 1)
    KEY = keyArray[randomFrame]
    print("frame from second # " + str(randomFrame))
    s3.Bucket(BUCKET_NAME).download_file(KEY, "/tmp/local.jpg")

    random_objectNumber = re.search(r'\d+', KEY).group()
    time_in_seconds = int(random_objectNumber)
    minutes, seconds = divmod(time_in_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    frameInfo = "Taxi Driver | {:02d}:{:02d}:{:02d}".format(hours, minutes, seconds)

    # We have to first create a media ID when uploading an image
    media = m.media_post("/tmp/local.jpg")
    # Then we can reference this media ID in our status post
    m.status_post(status=frameInfo, media_ids=media)


    os.remove("/tmp/local.jpg")
    return None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I installed the Mastodon package locally, zipped it together with the function code using a small script I wrote, then uploaded the archive to Lambda.&lt;/p&gt;
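That packaging step can be sketched roughly as follows. File names are illustrative, and the pip line is commented out because it needs network access; note that boto3 already ships with the Lambda Python runtime, so strictly only the Mastodon.py package has to be bundled:

```shell
# Rough sketch of building a Lambda deployment package (names illustrative).
mkdir -p /tmp/lambda_pkg
# pip install Mastodon.py -t /tmp/lambda_pkg   # dependencies go next to the code

cat > /tmp/lambda_pkg/lambda_function.py <<'EOF'
def lambda_handler(event, context):
    return None
EOF

# Zip the folder contents so the handler sits at the archive root, as Lambda expects.
(cd /tmp/lambda_pkg && python3 -m zipfile -c /tmp/function.zip lambda_function.py)
ls /tmp/function.zip
```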

&lt;h2&gt;
  
  
  &lt;strong&gt;CloudWatch Event for Lambda&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I created a CloudWatch Event for Lambda to publish a post every 30 minutes.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs5utxgrcw7xzfbv3mre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs5utxgrcw7xzfbv3mre.png" alt="Im" width="800" height="272"&gt;&lt;/a&gt;&lt;strong&gt;And here is our &lt;a href="https://botsin.space/@taxidriverframes" rel="noopener noreferrer"&gt;Mastadon bot&lt;/a&gt; ready!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I may have missed a few parts, don't hesitate to reach out to me if you need help. I would like to help you as much as I can!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>devops</category>
      <category>cloud</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Cloud Resume Challenge (AWS)</title>
      <dc:creator>Taha Yağız Güler</dc:creator>
      <pubDate>Sun, 19 Feb 2023 15:17:33 +0000</pubDate>
      <link>https://forem.com/tahayagizguler/cloud-resume-challenge-aws-4ghf</link>
      <guid>https://forem.com/tahayagizguler/cloud-resume-challenge-aws-4ghf</guid>
      <description>&lt;p&gt;Cloud Resume Challenge is a project that helps us learn to use the cloud and some essential tools. You can access the outline of the project here &lt;a href="https://cloudresumechallenge.dev/" rel="noopener noreferrer"&gt;Cloud Resume Challenge&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this blog post, I will walk through the steps I completed and the challenges I faced during the Cloud Resume Challenge. It made me much more comfortable using AWS services with Terraform, taught me how the individual services work, and left me with plenty of smaller lessons along the way.&lt;/p&gt;

&lt;p&gt;You can see the final result &lt;a href="//tahayagizguler.tech"&gt;here&lt;/a&gt; and see the &lt;a href="https://github.com/tahayagizguler/cloudresumeAWS" rel="noopener noreferrer"&gt;GitHub code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can complete this challenge within the AWS Free Tier.&lt;/strong&gt;&lt;br&gt;
You only need to buy a domain name, and if you are a student there are several ways to get one for free. Later in the article, I explain how.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d4uxrz59qw6s7qqt4ah.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d4uxrz59qw6s7qqt4ah.jpg" alt="Projecy Diagram" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now, let's take a look at which stages we need to complete.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Challenge Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Build a website in HTML/CSS.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This step is straightforward: create a resume page for your website using HTML and CSS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host website with S3 Bucket.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I built mine with Terraform, but you can use AWS SAM if you prefer.&lt;/p&gt;

&lt;p&gt;I created an S3 bucket, set the necessary CORS rule, and finally made sure the site objects were uploaded to the bucket in bulk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "cloud-resume-bucket" {
  bucket = var.bucket_name
  acl    = "public-read"
  policy = file("website/policy.json")

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}


resource "aws_s3_bucket_cors_configuration" "s3_bucket_cors" {
  bucket = aws_s3_bucket.cloud-resume-bucket.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "POST"]
    allowed_origins = ["*"]
    max_age_seconds = 10
  }
}


resource "aws_s3_object" "test" {
  for_each = fileset("${path.module}/html", "**/*.*")
  acl    = "public-read"
  bucket = var.bucket_name
  key    = each.value
  source = "${path.module}/html/${each.value}"
  content_type  = lookup(var.mime_types, split(".", each.value)[length(split(".", each.value)) - 1])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CloudFront for routing HTTP/S traffic.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The S3 website URL only serves plain HTTP, so for security it should sit behind HTTPS. I handled this with CloudFront.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudfront_distribution" "s3_cf" {
  origin {
    domain_name              = "${aws_s3_bucket.cloud-resume-bucket.bucket_regional_domain_name}"
    origin_id                = "${local.s3_origin_id}"
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"


  custom_error_response {
      error_caching_min_ttl = 0
      error_code = 404
      response_code = 200
      response_page_path = "/error.html"
  }

  aliases = [var.domain_name]

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate_validation.acm_val.certificate_arn
    ssl_support_method = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Route53 for custom DNS.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this step, I registered my domain name with the Route 53 service, as shown below.&lt;br&gt;
(With the GitHub Student Developer Pack, you can get a free domain name.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_zone" "main" {
  name = var.domain_name
}

resource "aws_route53_record" "domain" {
  zone_id = aws_route53_zone.main.zone_id
  name    = var.domain_name
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.s3_cf.domain_name
    zone_id                = aws_cloudfront_distribution.s3_cf.hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name =&amp;gt; {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.main.zone_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Certificate Manager for enabling secure access with SSL Certificate.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I provisioned the SSL certificate with the ACM service, using DNS validation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_acm_certificate" "cert" {
  domain_name       = var.domain_name
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_acm_certificate_validation" "acm_val" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DynamoDB as the database, storing the website visitor count.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I created a DynamoDB table to store and update the visitor counter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_dynamodb_table" "visiters" {
  name           = var.dynamodb_table
  billing_mode   = "PROVISIONED"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "id"

  attribute {
    name = "id"
    type = "N"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Lambda function (Python) to read/write the website visitor count to DynamoDB.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To wire it up to the database, I added the Lambda function I wrote in Python (boto3), along with the necessary IAM role and policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "archive_file" "lambda_zip" {
  type = "zip"

  source_dir  = "${path.module}/src"
  output_path = "${path.module}/src.zip"
}    

resource "aws_s3_object" "this" {
  bucket = aws_s3_bucket.cloud-resume-bucket.id

  key    = "src.zip"
  source = data.archive_file.lambda_zip.output_path

  etag = filemd5(data.archive_file.lambda_zip.output_path)
}

//Define lambda function
resource "aws_lambda_function" "apigw_lambda_ddb" {
  function_name = "app"
  description = "visitor counter"

  s3_bucket = aws_s3_bucket.cloud-resume-bucket.id
  s3_key    = aws_s3_object.this.key

  runtime = "python3.8"
  handler = "app.lambda_handler"

  source_code_hash = data.archive_file.lambda_zip.output_base64sha256

  role = aws_iam_role.lambda_exec.arn

  environment {
    variables = {
      DDB_TABLE = var.dynamodb_table
    }
  }


} 

resource "aws_iam_role" "lambda_exec" {
  name_prefix = "LambdaDdbPost"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
      }
    ]
  })
}

resource "aws_iam_policy" "lambda_exec_role" {
  name_prefix = "lambda-tf-pattern-ddb-post"

  policy = &amp;lt;&amp;lt;POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:UpdateItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/${var.dynamodb_table}"
        }
    ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "lambda_policy" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = aws_iam_policy.lambda_exec_role.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;API Gateway to trigger Lambda function.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I set up API Gateway (HTTP API) to trigger the Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# resource "random_string" "random" {
#   length           = 4
#   special          = false
# }

resource "aws_apigatewayv2_api" "http_lambda" {
  # name          = "${var.apigw_name}-${random_string.random.id}"
  name          = "${var.apigw_name}"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id = aws_apigatewayv2_api.http_lambda.id

  name        = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "apigw_lambda" {
  api_id = aws_apigatewayv2_api.http_lambda.id

  integration_uri    = aws_lambda_function.apigw_lambda_ddb.invoke_arn
  integration_type   = "AWS_PROXY"
  integration_method = "POST"
}

resource "aws_apigatewayv2_route" "get" {
  api_id = aws_apigatewayv2_api.http_lambda.id

  route_key = "GET /" 
  target    = "integrations/${aws_apigatewayv2_integration.apigw_lambda.id}"
}

# Gives an external source permission to access the Lambda function.
resource "aws_lambda_permission" "api_gw" {                            
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.apigw_lambda_ddb.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_apigatewayv2_api.http_lambda.execution_arn}/*/*"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Finally, after setting up the CI/CD pipeline with GitHub Actions, I launched the website. This blog post is only a summary of my work; I faced many difficulties while completing it. The hardest step for me was connecting the Lambda, API Gateway, and DynamoDB services, but after some thought and research I worked through it. Thank you for reading. If you want to examine the project code in detail, you can visit my GitHub account.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="//tahayagizguler.tech"&gt;My Resume Site&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/tahayagizguler/cloudresumeAWS" rel="noopener noreferrer"&gt;GitHub Link of The Project&lt;/a&gt;&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
  </channel>
</rss>
