<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Yosuke Sakurai</title>
    <description>The latest articles on Forem by Yosuke Sakurai (@codeyourreality).</description>
    <link>https://forem.com/codeyourreality</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895840%2Fff3395d4-9311-4fd5-8d0b-eaf9eeea00ae.jpeg</url>
      <title>Forem: Yosuke Sakurai</title>
      <link>https://forem.com/codeyourreality</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/codeyourreality"/>
    <language>en</language>
    <item>
      <title>I wired Claude to my flashcard app via MCP. Here's what that actually looks like.</title>
      <dc:creator>Yosuke Sakurai</dc:creator>
      <pubDate>Sat, 25 Apr 2026 11:14:31 +0000</pubDate>
      <link>https://forem.com/codeyourreality/i-wired-claude-to-my-flashcard-app-via-mcp-heres-what-that-actually-looks-like-46h0</link>
      <guid>https://forem.com/codeyourreality/i-wired-claude-to-my-flashcard-app-via-mcp-heres-what-that-actually-looks-like-46h0</guid>
      <description>&lt;h1&gt;
  
  
  Automating Flashcards with Claude, Cursor, and Deckbase (via MCP)
&lt;/h1&gt;

&lt;p&gt;I spend a lot of time reading papers, docs, and codebases. I used to manually copy key concepts into a flashcard app afterward. Now I do not. Claude does it while I work.&lt;/p&gt;

&lt;p&gt;This post walks through how I connected Claude and Cursor to Deckbase using its MCP server, and what you can actually do once it is wired up.&lt;/p&gt;




&lt;h2&gt;
  
  
  What MCP gives you here
&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol (MCP) lets Claude call external tools mid-conversation. In this case, those tools are your actual flashcard library: create a deck, add cards, generate audio, export to Anki. All without leaving your editor or chat window.&lt;/p&gt;

&lt;p&gt;The Deckbase MCP server exposes about 30 tools over a hosted HTTP endpoint. You authenticate with a Bearer token, point your client at the URL, and Claude can read and write your decks directly.&lt;/p&gt;
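&lt;p&gt;Under the hood this is plain JSON-RPC over HTTP. Here is a rough sketch of the request a client sends to list the available tools. I am assuming the endpoint follows the standard MCP HTTP transport; the envelope shape comes from the MCP spec, not from Deckbase docs, and the network call is left commented out:&lt;/p&gt;

```javascript
// Build the JSON-RPC request an MCP client sends to discover tools.
// "tools/list" is the standard MCP method name; the URL and Bearer
// auth are the ones from this article.
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

const options = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_API_KEY", // same key as in the config below
  },
  body: JSON.stringify(listToolsRequest),
};

// fetch("https://www.deckbase.co/api/mcp", options) would return the
// tool list; commented out so the sketch runs without a network call.
console.log(options.body);
```

In practice Claude Desktop and Cursor do this handshake for you; the sketch is just to show there is no magic in the transport.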




&lt;h2&gt;
  
  
  Setup (5 minutes)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Get your API key
&lt;/h3&gt;

&lt;p&gt;Sign up at deckbase.co, go to &lt;strong&gt;Settings → API Keys&lt;/strong&gt;, and generate one.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Add to Claude Desktop
&lt;/h3&gt;

&lt;p&gt;Edit &lt;code&gt;~/Library/Application Support/Claude/claude_desktop_config.json&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;Add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"deckbase"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.deckbase.co/api/mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Authorization"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Bearer YOUR_API_KEY"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude Desktop. You should see Deckbase appear in the tools list.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Add to Cursor
&lt;/h3&gt;

&lt;p&gt;In &lt;strong&gt;Cursor → Settings → MCP&lt;/strong&gt;, add:&lt;/p&gt;

&lt;p&gt;Name: deckbase&lt;br&gt;
URL: &lt;a href="https://www.deckbase.co/api/mcp" rel="noopener noreferrer"&gt;https://www.deckbase.co/api/mcp&lt;/a&gt;&lt;br&gt;
Authorization: Bearer YOUR_API_KEY&lt;/p&gt;
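&lt;p&gt;If you would rather edit a file than click through settings, Cursor also reads &lt;code&gt;~/.cursor/mcp.json&lt;/code&gt;. The equivalent entry should look like this (same shape as the Claude Desktop config; double-check against your Cursor version):&lt;/p&gt;

```json
{
  "mcpServers": {
    "deckbase": {
      "url": "https://www.deckbase.co/api/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```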


&lt;h2&gt;
  
  
  What you can do
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Create a deck and fill it from a document
&lt;/h3&gt;

&lt;p&gt;Paste a block of text, such as a paper abstract, a README section, or a chapter, and ask Claude to card it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create a deck called "Transformer Architecture".
Generate 10 flashcards from this text.
One concept per card.
Front = concept.
Back = explanation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude calls &lt;code&gt;create_deck&lt;/code&gt;, then &lt;code&gt;create_cards&lt;/code&gt; with up to 50 cards in a single batch. Your deck is live in Deckbase before you finish reading the thread.&lt;/p&gt;
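&lt;p&gt;That 50-card ceiling is worth knowing if you generate bigger decks. A hypothetical helper (&lt;code&gt;chunkCards&lt;/code&gt; is my name, not a Deckbase API) showing how a client would split a larger set into valid batches:&lt;/p&gt;

```javascript
// Split a flat card list into batches that respect the 50-card limit
// per create_cards call. The limit is from the docs; the helper is mine.
const MAX_BATCH = 50;

function chunkCards(cards, size = MAX_BATCH) {
  if (cards.length === 0) return [];
  return [cards.slice(0, size)].concat(chunkCards(cards.slice(size), size));
}

// 120 generated cards become three create_cards calls: 50, 50, 20
const cards = Array.from({ length: 120 }, function (_, i) {
  return { front: "Concept " + i, back: "Explanation " + i };
});
const batches = chunkCards(cards);
console.log(batches.map(function (b) { return b.length; })); // [ 50, 50, 20 ]
```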




&lt;h3&gt;
  
  
  Add audio to any card
&lt;/h3&gt;

&lt;p&gt;If you are learning a language and want native pronunciation on the back of each card:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;List the voices available in Japanese, then add female audio to every card in my Japanese Vocab deck.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude calls &lt;code&gt;list_elevenlabs_voices&lt;/code&gt; with &lt;code&gt;{ language: "ja", gender: "female" }&lt;/code&gt;, picks a voice, then runs &lt;code&gt;generate_audio_for_card&lt;/code&gt; for each card.&lt;/p&gt;




&lt;h3&gt;
  
  
  Generate AI images for visual memory
&lt;/h3&gt;

&lt;p&gt;For anatomy, geography, or anything visual:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For each card in my Anatomy deck, generate an illustration matching the front text.
Use a clean medical diagram style.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude calls &lt;code&gt;list_ai_image_models&lt;/code&gt;, then &lt;code&gt;generate_image_for_card&lt;/code&gt; per card with a prompt derived from the card front.&lt;/p&gt;




&lt;h3&gt;
  
  
  Export to Anki
&lt;/h3&gt;

&lt;p&gt;If you still review in Anki and just want to use Deckbase for creation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Export my JavaScript Concepts deck as an Anki file.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude calls &lt;code&gt;export_deck&lt;/code&gt; with &lt;code&gt;export_type: "full"&lt;/code&gt; and returns a &lt;code&gt;.apkg&lt;/code&gt; file you can import directly.&lt;/p&gt;




&lt;h3&gt;
  
  
  Normalize a messy deck
&lt;/h3&gt;

&lt;p&gt;When your cards drift from the template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Check my Python Fundamentals deck for layout drift, then normalize all cards to the current template.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude first calls &lt;code&gt;validate_deck_layout&lt;/code&gt;, then &lt;code&gt;normalize_cards_to_template&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The workflow that actually changed how I work
&lt;/h2&gt;

&lt;p&gt;When I am reading a technical doc in Cursor, I now do this at the end of any section I want to remember:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the relevant text
&lt;/li&gt;
&lt;li&gt;Open the AI chat panel
&lt;/li&gt;
&lt;li&gt;Type: &lt;code&gt;Card this.&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Claude reads the selection, creates 3 to 5 cards in my current deck with clean front/back separation, and confirms what it added.&lt;/p&gt;

&lt;p&gt;The key difference is structure. One cue, one answer, no overstuffing. The cards come out usable without editing. That was not true when using generic prompts before.&lt;/p&gt;




&lt;h2&gt;
  
  
  What it does not do yet
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No automatic PDF parsing. You paste the text and Claude cards it. A folder watcher pipeline is possible via the API, just not built into MCP tools yet.
&lt;/li&gt;
&lt;li&gt;The hosted MCP requires a Deckbase account and API key. There is also a local stdio server (&lt;code&gt;node mcp-server/index.js&lt;/code&gt;), but it only exposes document tools, not deck and card write operations.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Worth trying if
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You read a lot of technical material and want a way to retain it
&lt;/li&gt;
&lt;li&gt;You are building an AI study workflow and want a real API instead of maintaining your own schema
&lt;/li&gt;
&lt;li&gt;You already live in Cursor and want your flashcard system one prompt away
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The MCP docs are at:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://deckbase.co/mcp" rel="noopener noreferrer"&gt;https://deckbase.co/mcp&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>mcp</category>
      <category>claude</category>
    </item>
    <item>
      <title>How I Built a 3D Vision Board in React Native Using Three.js</title>
      <dc:creator>Yosuke Sakurai</dc:creator>
      <pubDate>Fri, 24 Apr 2026 10:33:20 +0000</pubDate>
      <link>https://forem.com/codeyourreality/how-i-built-a-3d-vision-board-in-react-native-using-threejs-5a6o</link>
      <guid>https://forem.com/codeyourreality/how-i-built-a-3d-vision-board-in-react-native-using-threejs-5a6o</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I was building a manifestation app in React Native and needed a vision board feature. Not a static collage. An immersive 3D space users could actually step into.&lt;/p&gt;

&lt;p&gt;The challenge: Three.js is built for the web, not React Native. I needed a bridge that did not destroy performance or bundle size.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;I chose a WebView-based approach rather than a native Three.js port.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;React Native&lt;/strong&gt; handles the shell, navigation, auth, and Firebase sync&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebView&lt;/strong&gt; renders the 3D scene using Three.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostMessage API&lt;/strong&gt; handles communication between RN and the WebView&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Firebase Storage&lt;/strong&gt; serves optimized images for the vision board&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why not &lt;code&gt;react-three-fiber&lt;/code&gt;? At the time of building, R3F's React Native support was experimental and the bundle size exceeded 15MB. A WebView let me lazy-load the 3D scene only when needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 3D Scene
&lt;/h2&gt;

&lt;p&gt;The scene is a simple cylindrical panorama. Users upload images, which are mapped to floating planes in 3D space. An affirmation text overlay follows the camera.&lt;/p&gt;
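&lt;p&gt;The placement math is simpler than it sounds. A hypothetical helper (&lt;code&gt;ringPositions&lt;/code&gt; is my name, not the app's actual source) that spreads the planes evenly around the viewer:&lt;/p&gt;

```javascript
// Spread N image planes evenly around a cylinder of the given radius,
// centered on the camera. Hypothetical helper, not the app's code;
// in the scene each plane would then call mesh.lookAt(camera.position)
// so it faces inward.
function ringPositions(count, radius) {
  return Array.from({ length: count }, function (_, i) {
    const angle = (i / count) * Math.PI * 2;
    return { x: Math.cos(angle) * radius, y: 0, z: Math.sin(angle) * radius };
  });
}

// 12 planes on a radius-5 ring, matching the 12-image cap
const spots = ringPositions(12, 5);
console.log(spots.length); // 12
```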

&lt;p&gt;Here is the basic Three.js setup:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
javascript
  // Inside the WebView
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
  const renderer = new THREE.WebGLRenderer({ alpha: true, antialias: true });

  // Ambient lighting
  const ambientLight = new THREE.AmbientLight(0xffffff, 0.6);
  scene.add(ambientLight);

  // Floating image planes
  const geometry = new THREE.PlaneGeometry(3, 2);
  const texture = new THREE.TextureLoader().load(imageUrl);
  const material = new THREE.MeshBasicMaterial({ map: texture });
  const plane = new THREE.Mesh(geometry, material);
  plane.position.set(x, y, z);
  scene.add(plane);

  Performance Lessons

  1. Lazy load everything
  The WebView is mounted only when the user opens the vision board. The initial bundle does not include Three.js.

  2. Image optimization
  Users upload high-res photos. I compress them to 1024px width on the client before upload using react-native-image-resizer. This cut load times from 4 seconds to under 1 second.

  3. Limit the scene complexity
  I cap the board at 12 images. More than that and frame rate drops below 50fps on mid-range Android devices.

  4. Handle WebView memory
  On Android, WebViews are notorious for memory leaks. I explicitly destroy the WebView instance when the user navigates away:

  useEffect(() =&amp;gt; {
    return () =&amp;gt; {
      if (webViewRef.current) {
        webViewRef.current.reload(); // forces cleanup
      }
    };
  }, []);

  The Communication Bridge

  The React Native shell sends commands to the WebView via URL hashes, and the WebView sends events back via window.ReactNativeWebView.postMessage.

  // React Native sends a command
  const sendCommand = (command) =&amp;gt; {
    webViewRef.current?.injectJavaScript(`
      window.dispatchEvent(new MessageEvent('message', {
        data: ${JSON.stringify(command)}
      }));
      true;
    `);
  };

  // WebView sends back an event
  const handleMessage = (event) =&amp;gt; {
    const data = JSON.parse(event.nativeEvent.data);
    if (data.type === 'planeClicked') {
      // Handle image selection
    }
  };

  The Unlock-Screen Mechanic

  The real insight was not the 3D board. It was the affirmation delivery mechanism.

  Instead of push notifications (70% swipe-away rate), the app shows an affirmation every time the user unlocks their phone. This piggybacks on an existing habit rather than creating a new
  one.

  Technically, this uses React Native's AppState API. When the app transitions from background to active, we render a full-screen modal with the affirmation. The user must hold for 3 seconds
   while feeling the emotion before they can dismiss it.

  Results

  - Day 7 retention: 34%
  - Day 30 retention: 18%
  - Average daily unlock engagements: 23
  - Bundle size: 12MB (without the lazy-loaded WebView assets)

  What I Would Do Differently

  1. Use Expo Modules sooner. I ejected from Expo too early and spent weeks fixing native module issues.
  2. Test on mid-range Android first. I developed on an iPhone 15 Pro and was shocked by Android performance.
  3. Start with a simpler 3D scene. The first version had particle effects and fog. It looked great on my device and crashed on a Samsung A51.

  Source

  The app is free on iOS and Android. If you are building with React Native and Three.js, feel free to reach out with questions.

  Have you tried mixing WebViews with native React Native? I would love to hear about your approach.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
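&lt;p&gt;The unlock detection above boils down to a single transition check. A hypothetical extraction of that logic (&lt;code&gt;isForegroundTransition&lt;/code&gt; is my name); in the real app it runs inside an &lt;code&gt;AppState.addEventListener("change", ...)&lt;/code&gt; handler:&lt;/p&gt;

```javascript
// Decide whether an AppState change is a background-to-active transition,
// i.e. the moment to show the affirmation modal. Hypothetical helper;
// the real app wires this to React Native's AppState listener.
function isForegroundTransition(previousState, nextState) {
  if (previousState === "background") {
    return nextState === "active";
  }
  return false;
}

console.log(isForegroundTransition("background", "active"));   // true
console.log(isForegroundTransition("active", "background"));   // false
console.log(isForegroundTransition("inactive", "active"));     // false
```

On iOS you may also want to treat "inactive" to "active" as a foreground event; the sketch only handles the case the post describes.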

</description>
      <category>reactnative</category>
      <category>threejs</category>
      <category>javascript</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
