<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Abdulai Yorli Iddrisu</title>
    <description>The latest articles on Forem by Abdulai Yorli Iddrisu (@abdulai_yorliiddrisu_f5b).</description>
    <link>https://forem.com/abdulai_yorliiddrisu_f5b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2858074%2Fc6ccb4f7-882c-4467-bb9d-b47d0e62ecc3.jpg</url>
      <title>Forem: Abdulai Yorli Iddrisu</title>
      <link>https://forem.com/abdulai_yorliiddrisu_f5b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/abdulai_yorliiddrisu_f5b"/>
    <language>en</language>
    <item>
      <title>Why I Structured MemoryMesh Across 3 CDK Stacks — Every Decision Explained</title>
      <dc:creator>Abdulai Yorli Iddrisu</dc:creator>
      <pubDate>Wed, 11 Mar 2026 11:01:50 +0000</pubDate>
      <link>https://forem.com/abdulai_yorliiddrisu_f5b/how-google-maps-why-i-structured-memorymesh-across-3-cdk-stacks-every-decision-explained-118h</link>
      <guid>https://forem.com/abdulai_yorliiddrisu_f5b/how-google-maps-why-i-structured-memorymesh-across-3-cdk-stacks-every-decision-explained-118h</guid>
      <description>&lt;p&gt;When I started building MemoryMesh I had a choice to make early on. Throw everything into one CDK stack and move fast, or split it properly and build something I could actually maintain and explain. I went with three stacks. Here's why each decision was made the way it was.&lt;/p&gt;

&lt;p&gt;The Three Stacks&lt;br&gt;
MemoryMeshDynamoDB handles the data layer — two tables, memorymesh-context (PK: userId, SK: createdAt) and memorymesh-profile (PK: userId).&lt;br&gt;
MemoryMeshLambda handles compute — five Lambda functions on Node.js 20 plus the IAM role and Bedrock permissions.&lt;br&gt;
MemoryMeshApi handles the HTTP API — API Gateway with routes wired to each Lambda.&lt;/p&gt;
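&lt;p&gt;To make the key design concrete: with userId as the partition key and createdAt as the sort key, "recent context for one user" is a single Query. A minimal sketch of the query input with made-up values - the real handlers live in the repo:&lt;/p&gt;

```typescript
// Sketch only: the shape of a DynamoDB Query input against memorymesh-context.
// The partition key (userId) selects one user's data; the sort key (createdAt)
// gives a time order, so ScanIndexForward: false returns newest entries first.
const queryInput = {
  TableName: "memorymesh-context",
  KeyConditionExpression: "userId = :uid",
  ExpressionAttributeValues: { ":uid": "mm-example-user" }, // placeholder id
  ScanIndexForward: false, // newest first via createdAt
  Limit: 50,
};
```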

&lt;p&gt;Why Three Stacks Instead of One&lt;br&gt;
These layers have completely different deployment cycles. If I update a Lambda function I don't want to risk touching the database stack. If I change an API route I don't need to redeploy compute. Keeping them separate means each piece deploys independently without putting the others at risk.&lt;/p&gt;

&lt;p&gt;Why PAY_PER_REQUEST on DynamoDB&lt;br&gt;
This is a personal tool. Traffic is low volume and unpredictable. Provisioned capacity would mean paying for read and write units I'm mostly not using. PAY_PER_REQUEST means the bill reflects what I actually use, which at this scale is close to nothing.&lt;/p&gt;
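&lt;p&gt;The arithmetic is simple enough to sketch. The unit prices below are placeholders, not AWS's actual rates - the point is the shape of the two bills, not the numbers:&lt;/p&gt;

```typescript
// Illustrative only: placeholder prices, not real AWS rates.
// On-demand bills per request made; provisioned bills for capacity held.
function onDemandMonthly(requests: number, pricePerMillion: number): number {
  return (requests / 1_000_000) * pricePerMillion;
}
function provisionedMonthly(units: number, pricePerUnitMonth: number): number {
  return units * pricePerUnitMonth; // charged even at zero traffic
}
```

&lt;p&gt;At personal-tool volume, say ten thousand requests a month, the on-demand term rounds to fractions of a cent, while any provisioned capacity costs the same flat amount whether it's used or not.&lt;/p&gt;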

&lt;p&gt;Why HTTP API Gateway Over REST API&lt;br&gt;
REST API has more features. HTTP API is simpler, cheaper, and faster for Lambda proxy calls. For a tool making straightforward requests to Lambda functions, the extra features of REST API weren't needed; HTTP API was the right fit for this use case.&lt;/p&gt;

&lt;p&gt;The Most Interesting Decision: Two Different Access Paths&lt;br&gt;
The Chrome extension goes through API Gateway. The MCP server bypasses API Gateway entirely and talks to DynamoDB directly via the AWS SDK.&lt;br&gt;
The reason is trust. The MCP server runs as a local Node.js process on your machine with real AWS credentials in the config. It's a trusted local process. Hitting DynamoDB directly is faster and simpler.&lt;br&gt;
The Chrome extension runs inside a browser. It can't hold AWS credentials the same way. So it goes through API Gateway, which is the right entry point for an untrusted external client.&lt;br&gt;
Same data. Two different trust models. Two different access paths.&lt;/p&gt;

&lt;p&gt;CORS Scoped to Three Origins&lt;br&gt;
API Gateway CORS is configured for exactly three origins: claude.ai, chatgpt.com and gemini.google.com. No wildcard. The API only accepts requests from those specific domains.&lt;/p&gt;

&lt;p&gt;The full CDK code for all three stacks is in the repo if you want to see how it's structured: github.com/yorliabdulai/contextbridge&lt;br&gt;
And if you missed the full technical deep-dive from launch day: dev.to/abdulai_yorliiddrisu_f5b/i-built-a-portable-ai-memory-layer-with-mcp-aws-bedrock-and-a-chrome-extension-18de&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>machinelearning</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I Built a Portable AI Memory Layer with MCP, AWS Bedrock, and a Chrome Extension</title>
      <dc:creator>Abdulai Yorli Iddrisu</dc:creator>
      <pubDate>Sun, 08 Mar 2026 17:14:11 +0000</pubDate>
      <link>https://forem.com/abdulai_yorliiddrisu_f5b/i-built-a-portable-ai-memory-layer-with-mcp-aws-bedrock-and-a-chrome-extension-18de</link>
      <guid>https://forem.com/abdulai_yorliiddrisu_f5b/i-built-a-portable-ai-memory-layer-with-mcp-aws-bedrock-and-a-chrome-extension-18de</guid>
      <description>&lt;p&gt;AI tools have memory now. Claude remembers your projects. ChatGPT has built a profile of how you work. Open a new conversation and the tool already has context - you don't have to re-explain yourself from zero every time.&lt;br&gt;
The problem is that this memory is platform-locked.&lt;br&gt;
Switch from ChatGPT to Claude and you lose six months of built-up context. The new tool doesn't know your projects, your preferences, your ongoing work. Technically it might be the better model for what you need right now - but it performs worse because it's starting blind. So you go back to your old tool. Not because it's better. Because it knows you.&lt;br&gt;
That's the lock-in. Not pricing, not features - context. And it's the problem MemoryMesh solves.&lt;br&gt;
MemoryMesh is a portable context layer: a Chrome extension + MCP server + AWS serverless backend that captures your context from any AI tool and injects it into any other. Your context travels with you when you switch.&lt;br&gt;
This article walks through how it's built.&lt;br&gt;
GitHub: &lt;a href="https://github.com/yorliabdulai/contextbridge" rel="noopener noreferrer"&gt;github.com/yorliabdulai/contextbridge&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Architecture Overview&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pn4vlws61dtub8ednjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pn4vlws61dtub8ednjx.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;br&gt;
Part 1: The MCP Server&lt;br&gt;
MCP (Model Context Protocol) is Anthropic's open standard for giving Claude tools that run locally. The MemoryMesh MCP server exposes four tools to Claude Desktop via stdio transport - no HTTP, no browser required.&lt;br&gt;
`&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// packages/mcp-server/src/server.ts
server.setRequestHandler(ListToolsRequestSchema, async () =&amp;gt; ({
  tools: [
    {
      name: "save_context",
      description: "Save a context entry to MemoryMesh memory",
      inputSchema: {
        type: "object",
        properties: {
          content: { type: "string", description: "The context to save" },
          source: { type: "string", description: "Where this came from" }
        },
        required: ["content"]
      }
    },
    {
      name: "get_context",
      description: "Retrieve recent context entries from MemoryMesh",
      inputSchema: {
        type: "object",
        properties: {
          limit: { type: "number" }
        }
      }
    },
    {
      name: "search_memory",
      description: "Search stored context by keyword",
      inputSchema: {
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"]
      }
    },
    {
      name: "get_user_profile",
      description: "Get the current user profile",
      inputSchema: { type: "object", properties: {} }
    }
  ]
}));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The stdio entry point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// packages/mcp-server/src/index.ts
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { createServer } from "./server.js";
const server = createServer();
const transport = new StdioServerTransport();
await server.connect(transport);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Desktop config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "mcpServers": {
    "memorymesh": {
      "command": "node",
      "args": ["path/to/mcp-server/dist/index.js"],
      "env": {
        "AWS_REGION": "eu-west-2",
        "CONTEXT_TABLE": "memorymesh-context",
        "PROFILE_TABLE": "memorymesh-profile",
        "MEMORYMESH_USER_ID": "mm-your-uuid",
        "AWS_ACCESS_KEY_ID": "...",
        "AWS_SECRET_ACCESS_KEY": "..."
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key architectural decision: the MCP server bypasses API Gateway and talks directly to DynamoDB via the AWS SDK. The API Gateway exists for the Chrome extension, which runs in the browser and can't use the AWS SDK natively. The MCP server is a local Node.js process - direct SDK access is faster and simpler.&lt;/p&gt;
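&lt;p&gt;As a rough sketch of that direct path - the attribute names here are assumptions; the real write goes through PutCommand from @aws-sdk/lib-dynamodb in the repo:&lt;/p&gt;

```typescript
// Hypothetical sketch of the item the MCP server's save_context tool could
// write straight to DynamoDB, with no API Gateway hop in between.
function buildSaveItem(userId: string, content: string, source: string) {
  return {
    TableName: "memorymesh-context",
    Item: {
      userId,                              // partition key
      createdAt: new Date().toISOString(), // sort key: ISO strings sort by time
      content,
      source,
    },
  };
}
```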

&lt;p&gt;Part 2: The AWS Backend (CDK)&lt;br&gt;
Three CDK stacks, deployed in order.&lt;br&gt;
DynamoDB Stack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// packages/infrastructure/lib/dynamodb-stack.ts
this.contextTable = new dynamodb.Table(this, "ContextTable", {
  tableName: "memorymesh-context",
  partitionKey: { name: "userId", type: dynamodb.AttributeType.STRING },
  sortKey: { name: "createdAt", type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});
this.profileTable = new dynamodb.Table(this, "ProfileTable", {
  tableName: "memorymesh-profile",
  partitionKey: { name: "userId", type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;PAY_PER_REQUEST - traffic is low and bursty. No need to provision capacity.&lt;br&gt;
Lambda Stack&lt;br&gt;
All five functions share the same deployment package. The Lambda IAM role gets bedrock:InvokeModel explicitly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// packages/infrastructure/lib/lambda-stack.ts
lambdaRole.addToPolicy(new iam.PolicyStatement({
  actions: ["bedrock:InvokeModel"],
  resources: ["*"],
}));
props.contextTable.grantReadWriteData(lambdaRole);
props.profileTable.grantReadWriteData(lambdaRole);
const createFn = (name: string, handler: string) =&amp;gt;
  new lambda.Function(this, name, {
    functionName: `memorymesh-${name}`,
    runtime: lambda.Runtime.NODEJS_20_X,
    handler,
    code: lambda.Code.fromAsset("../mcp-server/lambda-package.zip"),
    role: lambdaRole,
    environment: {
      CONTEXT_TABLE: "memorymesh-context",
      PROFILE_TABLE: "memorymesh-profile",
    },
    timeout: Duration.seconds(30),
  });
this.saveFn      = createFn("save-context",     "lambda/saveContext.handler");
this.getFn       = createFn("get-context",       "lambda/getContext.handler");
this.searchFn    = createFn("search-memory",     "lambda/searchMemory.handler");
this.profileFn   = createFn("get-user-profile",  "lambda/getUserProfile.handler");
this.summarizeFn = createFn("summarize",         "lambda/summarize.handler");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;API Gateway Stack&lt;br&gt;
CORS origins are scoped to the three AI tool domains - no wildcard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// packages/infrastructure/lib/api-stack.ts
const api = new apigateway.HttpApi(this, "MemoryMeshApi", {
  corsPreflight: {
    allowOrigins: [
      "https://claude.ai",
      "https://chatgpt.com",
      "https://gemini.google.com"
    ],
    allowMethods: [CorsHttpMethod.GET, CorsHttpMethod.POST],
    allowHeaders: ["Content-Type"],
  },
});
api.addRoutes({ path: "/context",          methods: [HttpMethod.POST], integration: new HttpLambdaIntegration("Save",     props.saveFn) });
api.addRoutes({ path: "/context/{userId}", methods: [HttpMethod.GET],  integration: new HttpLambdaIntegration("Get",      props.getFn) });
api.addRoutes({ path: "/search/{userId}",  methods: [HttpMethod.GET],  integration: new HttpLambdaIntegration("Search",   props.searchFn) });
api.addRoutes({ path: "/profile/{userId}", methods: [HttpMethod.GET],  integration: new HttpLambdaIntegration("Profile",  props.profileFn) });
api.addRoutes({ path: "/summarize",        methods: [HttpMethod.POST], integration: new HttpLambdaIntegration("Summarize",props.summarizeFn) });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Part 3: Bedrock Summarisation&lt;br&gt;
Raw conversation text is never stored directly. Every save goes through the summarise Lambda first, which calls Amazon Bedrock and stores the structured output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// packages/mcp-server/src/lambda/summarize.ts
const MODEL_ID = "eu.anthropic.claude-haiku-4-5-20251001-v1:0";
export const handler = async (event: APIGatewayProxyEvent) =&amp;gt; {
  const { content } = JSON.parse(event.body!);
  const prompt = `You are a context summariser for an AI memory system.
Analyse the following conversation and return ONLY a valid JSON object with these fields:
- summary: a dense paragraph capturing the main topic, key decisions, and outcomes
- tags: an array of 5-10 semantic keywords for search
- projects: an array of project names or identifiers mentioned
Conversation:
${content}
Return only the JSON object. No preamble, no markdown.`;
  const command = new InvokeModelCommand({
    modelId: MODEL_ID,
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
    contentType: "application/json",
    accept: "application/json",
  });
  const response = await bedrock.send(command);
  const body = JSON.parse(new TextDecoder().decode(response.body));
  const structured = JSON.parse(body.content[0].text);
  return { statusCode: 200, body: JSON.stringify(structured) };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One thing that will catch you out: in eu-west-2, you must use the EU cross-region inference profile ID - eu.anthropic.claude-haiku-4-5-20251001-v1:0 - not the standard Haiku model ID. Standard model IDs return a ValidationException in that region. The EU prefix routes through Bedrock's cross-region inference system.&lt;br&gt;
What gets stored in DynamoDB is always the structured { summary, tags, projects } object - never a raw transcript. This is what makes the context injection useful rather than noisy. When you sync into a new tool, the AI gets dense, structured information about your work history - not a wall of raw dialogue.&lt;/p&gt;
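&lt;p&gt;Since the model is only asked, not guaranteed, to return bare JSON, a defensive parse before the DynamoDB write is cheap insurance. A hypothetical validator, not the repo's exact code:&lt;/p&gt;

```typescript
// Hypothetical guard: reject malformed Bedrock output before storing it.
function parseStructured(raw: string) {
  const parsed = JSON.parse(raw); // throws if the model returned non-JSON
  if (typeof parsed.summary !== "string") { throw new Error("missing summary"); }
  if (!Array.isArray(parsed.tags)) { throw new Error("missing tags"); }
  if (!Array.isArray(parsed.projects)) { throw new Error("missing projects"); }
  return { summary: parsed.summary, tags: parsed.tags, projects: parsed.projects };
}
```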

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fzd8lrqs4grxl1qf15r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fzd8lrqs4grxl1qf15r.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MemoryMesh in action&lt;/p&gt;

&lt;p&gt;Part 4: The Chrome Extension&lt;br&gt;
Content scripts are injected into Claude.ai, ChatGPT, and Gemini. Each injects a floating banner with two controls: Save Context and Sync to AI.&lt;br&gt;
Event Delegation&lt;br&gt;
The trickiest implementation detail is how AI tool pages re-render parts of the DOM as you interact with them. Naive event listeners attached directly to injected buttons get orphaned when the surrounding DOM updates. The fix is a single delegated listener on the banner container itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// packages/extension/src/content/claude.ts
function injectBanner(userId: string) {
  const banner = document.createElement("div");
  banner.id = "memorymesh-banner";
  banner.innerHTML = `
    &amp;lt;div class="mm-controls"&amp;gt;
      &amp;lt;button data-action="save"&amp;gt;⬡ Save Context&amp;lt;/button&amp;gt;
      &amp;lt;button data-action="sync"&amp;gt;↺ Sync to AI&amp;lt;/button&amp;gt;
    &amp;lt;/div&amp;gt;
  `;
  document.body.appendChild(banner);
  // Single delegated listener - survives DOM re-renders
  banner.addEventListener("click", async (e) =&amp;gt; {
    const btn = (e.target as HTMLElement).closest("[data-action]");
    if (!btn) return;
    const action = btn.getAttribute("data-action");
    if (action === "save") await saveContext(userId);
    if (action === "sync") await syncToAI(userId);
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Context Injection&lt;br&gt;
Each AI tool has a different DOM structure for its chat input. The injection targets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const SELECTORS = {
  claude:  '[data-testid="chat-input"] [contenteditable]',
  chatgpt: '#prompt-textarea',
  gemini:  '.ql-editor[contenteditable]',
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function syncToAI(userId: string) {
  const entries = await getContext(userId); // fetches up to 1000 entries
  const contextText = entries.map(e =&amp;gt; e.summary).join("\n\n---\n\n");
  const input = document.querySelector(SELECTORS[currentTool]);
  if (!input) return;
  input.textContent = contextText;
  input.dispatchEvent(new Event("input", { bubbles: true }));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
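&lt;p&gt;Fetching up to 1000 entries can span multiple DynamoDB pages, since a single Query returns at most 1 MB. A hypothetical sketch of the accumulation loop - fetchPage is a synchronous stand-in for the real (async) SDK call and returns { items, lastKey }:&lt;/p&gt;

```typescript
// Sketch only: page through results until the limit is hit or pages run out.
function fetchUpTo(limit: number, fetchPage: Function): unknown[] {
  const items: unknown[] = [];
  let lastKey: string | undefined = undefined;
  while (true) {
    const page = fetchPage(lastKey);
    for (const item of page.items) {
      if (items.length === limit) { break; }
      items.push(item);
    }
    if (items.length === limit) { break; }
    lastKey = page.lastKey;
    if (lastKey === undefined) { break; } // no more pages
  }
  return items;
}
```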



&lt;p&gt;That limit was an important fix: an earlier version defaulted to fetching only 5 entries, fine for basic use but completely insufficient after a bulk import. The default is now 1000, which covers any realistic history size with no noticeable difference in API response time, given DynamoDB's read performance.&lt;br&gt;
The History Importer&lt;br&gt;
The importer accepts ChatGPT and Claude data export ZIPs. Format detection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function detectAndParse(json: any[]): Promise&amp;lt;Conversation[]&amp;gt; {
  // Claude export: flat array where each conversation carries a chat_messages
  // array (sender field is "human" or "assistant")
  if (json[0]?.chat_messages !== undefined) {
    return parseClaude(json);
  }
  // ChatGPT export: mapping object with author.role and content.parts
  if (json[0]?.mapping !== undefined) {
    return parseChatGPT(json);
  }
  throw new Error("Unrecognised export format");
}
// 300ms throttle between API calls
async function processAll(conversations: Conversation[], userId: string) {
  for (const conv of conversations) {
    await summarizeAndSave(conv, userId);
    await delay(300);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 300ms throttle between calls is not optional. Without it, Bedrock starts returning throttling errors around the 10th–15th consecutive request. With it, 58 conversations import cleanly with zero errors.&lt;/p&gt;
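&lt;p&gt;A fixed 300ms delay is enough here. A common hardening step - hypothetical, not in the repo - is exponential backoff, so throttled calls retry with growing waits instead of failing outright:&lt;/p&gt;

```typescript
// Hypothetical alternative to the fixed throttle: an exponential backoff
// schedule, capped so a long retry chain never waits unbounded time.
function backoffDelays(attempts: number, baseMs: number, capMs: number): number[] {
  const delays: number[] = [];
  for (let i = 0; i !== attempts; i += 1) {
    delays.push(Math.min(baseMs * Math.pow(2, i), capMs));
  }
  return delays;
}
// backoffDelays(5, 300, 5000) yields [300, 600, 1200, 2400, 4800]
```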

&lt;p&gt;Deployment&lt;br&gt;
The Lambda handlers and MCP server share the same TypeScript source. One build produces dist/, used both by the MCP server locally and packaged as lambda-package.zip for AWS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build
cd packages/mcp-server &amp;amp;&amp;amp; npm run build
# Package
Copy-Item package.json dist\
cd dist &amp;amp;&amp;amp; npm install --production &amp;amp;&amp;amp; cd ..
Compress-Archive -Path ".\dist\*" -DestinationPath ".\lambda-package.zip" -Force
# Deploy all 5 functions
@("save-context","get-context","search-memory","get-user-profile","summarize") | ForEach-Object {
  aws lambda update-function-code --function-name "memorymesh-$_" `
    --region eu-west-2 --zip-file fileb://lambda-package.zip
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Does It Actually Work?&lt;br&gt;
After importing 58 Claude + ChatGPT conversations through the bulk importer and syncing into Gemini - a tool that had never seen any of that history - Gemini responded:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"It looks like we've been working through a dense sprint involving the LandLedger platform, quantum neural network optimizations, and various AWS infrastructure labs… I'm ready to pick up exactly where we left off."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's the point. Gemini at that moment was technically the better model for what I needed. MemoryMesh made it actually useful - not just capable.&lt;/p&gt;

&lt;p&gt;What's Next&lt;br&gt;
Local SQLite backend - for users who don't want AWS infrastructure&lt;br&gt;
Firefox port - Manifest V3 is largely compatible, mostly a manifest diff + testing&lt;br&gt;
Gemini export - no native export exists today; DOM scraper is the only option&lt;br&gt;
Selective sync - currently all entries inject; a picker UI would give more control&lt;/p&gt;

&lt;p&gt;Source Code&lt;br&gt;
&lt;a href="https://github.com/yorliabdulai/contextbridge" rel="noopener noreferrer"&gt;github.com/yorliabdulai/contextbridge&lt;/a&gt;&lt;br&gt;
Full CDK infrastructure, MCP server, Chrome extension, Lambda handlers, and setup guide. Contributions welcome - especially the Firefox port and local SQLite backend.&lt;/p&gt;

&lt;p&gt;For the non-technical take on why AI context lock-in matters, read the companion piece:&lt;/p&gt;

&lt;p&gt;Abdulai Yorli is a software developer based in Ghana, currently an IT Support Engineer at KPMG.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>machinelearning</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
