<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jonathan Wong</title>
    <description>The latest articles on Forem by Jonathan Wong (@jonathan78wong).</description>
    <link>https://forem.com/jonathan78wong</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3871202%2F343e4457-180b-4eab-af90-04c5935b3567.jpg</url>
      <title>Forem: Jonathan Wong</title>
      <link>https://forem.com/jonathan78wong</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jonathan78wong"/>
    <language>en</language>
    <item>
      <title>From Machine Learning to Production: A Practical Walkthrough Using My Vancouver Traffic Accident Risk Predictor</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Mon, 11 May 2026 22:21:41 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/from-machine-learning-to-production-a-practical-walkthrough-using-my-vancouver-traffic-accident-50nj</link>
      <guid>https://forem.com/jonathan78wong/from-machine-learning-to-production-a-practical-walkthrough-using-my-vancouver-traffic-accident-50nj</guid>
      <description>&lt;p&gt;Artificial intelligence has many branches, but in real projects the most important question is simple: &lt;strong&gt;what tool solves the problem with the least complexity and the highest reliability&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
This article walks through that question using my &lt;a href="https://github.com/jonanata/vancouver-traffic-risk-predictor-mlops" rel="noopener noreferrer"&gt;GitHub project, &lt;em&gt;Vancouver Traffic Accident Risk Predictor&lt;/em&gt;,&lt;/a&gt; as a real example. Along the way, it explains when to use machine learning (ML) instead of large language models, how an ML pipeline works, what MLOps means, and how a model becomes a production ready service.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Modern Habit of Using LLMs for Everything
&lt;/h2&gt;

&lt;p&gt;There is a new pattern in the industry. Whenever a team faces a data problem, someone eventually says, “Why not just use an LLM for this?”&lt;br&gt;&lt;br&gt;
It sounds modern. It sounds powerful. It feels like a universal solution.&lt;/p&gt;

&lt;p&gt;But this instinct hides a deeper issue.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;LLMs are not designed for structured data prediction.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They can reason across messy text, generate explanations, and handle unstructured inputs. They are excellent at language. But when the task is numerical, statistical, or based on clean tabular data, an LLM behaves like a very smart person guessing instead of a model trained precisely for the job.&lt;/p&gt;

&lt;p&gt;This is where the Vancouver project becomes a perfect example.&lt;br&gt;&lt;br&gt;
The goal is to predict accident risk based on weather and traffic conditions.&lt;br&gt;&lt;br&gt;
This is not a language problem.&lt;br&gt;&lt;br&gt;
This is a structured prediction problem.&lt;br&gt;&lt;br&gt;
This is exactly where classical ML shines.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Business Users Often Think LLMs Can Solve Everything
&lt;/h2&gt;

&lt;p&gt;This misconception is extremely common, and it is not the fault of business users.&lt;br&gt;&lt;br&gt;
It comes from the experience of interacting with LLMs, not from their underlying capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  LLMs feel magical
&lt;/h3&gt;

&lt;p&gt;A business user types a question.&lt;br&gt;&lt;br&gt;
Claude answers instantly.&lt;br&gt;&lt;br&gt;
It sounds smart.&lt;br&gt;&lt;br&gt;
It sounds confident.&lt;br&gt;&lt;br&gt;
It sounds like it understands the business context.&lt;/p&gt;

&lt;p&gt;From their perspective, this feels like general intelligence.&lt;br&gt;&lt;br&gt;
So the natural conclusion becomes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“If it can talk about anything, it can probably do anything.”&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Industry messaging reinforces the illusion
&lt;/h3&gt;

&lt;p&gt;Marketing language often says things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Analyze your data with AI”&lt;/li&gt;
&lt;li&gt;“AI that understands your business”&lt;/li&gt;
&lt;li&gt;“AI that learns from your documents”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Business users interpret this literally.&lt;br&gt;&lt;br&gt;
They imagine the LLM &lt;em&gt;training&lt;/em&gt; on their data.&lt;br&gt;&lt;br&gt;
In reality, the LLM is only &lt;em&gt;summarizing&lt;/em&gt; or &lt;em&gt;sampling&lt;/em&gt; it.&lt;/p&gt;

&lt;h3&gt;
  
  
  LLMs hide complexity
&lt;/h3&gt;

&lt;p&gt;A classical ML pipeline exposes its steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data cleaning&lt;/li&gt;
&lt;li&gt;Feature engineering&lt;/li&gt;
&lt;li&gt;Model training&lt;/li&gt;
&lt;li&gt;Evaluation&lt;/li&gt;
&lt;li&gt;Deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LLMs hide all of this behind a single prompt.&lt;br&gt;&lt;br&gt;
So business users assume the complexity is gone.&lt;br&gt;&lt;br&gt;
But the complexity is still there — just invisible.&lt;/p&gt;

&lt;h3&gt;
  
  
  The professional explanation
&lt;/h3&gt;

&lt;p&gt;The most effective way to explain this to business stakeholders is simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Claude is excellent at understanding and generating language.&lt;br&gt;&lt;br&gt;
But price prediction, risk scoring, and forecasting are mathematical problems, not language problems.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This keeps the conversation respectful, clear, and aligned with business outcomes.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Reality Check: Is “LLM Everything” Acceptable in Terms of Results and Costs?
&lt;/h2&gt;

&lt;p&gt;The short answer is no.&lt;br&gt;&lt;br&gt;
But the reasons matter.&lt;/p&gt;

&lt;h3&gt;
  
  
  The results problem
&lt;/h3&gt;

&lt;p&gt;LLMs can approximate patterns in structured data, but they cannot match the precision of a model trained directly on the dataset.&lt;br&gt;&lt;br&gt;
Classical ML consistently delivers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher accuracy&lt;/li&gt;
&lt;li&gt;Better calibration&lt;/li&gt;
&lt;li&gt;More stable predictions&lt;/li&gt;
&lt;li&gt;Clearer evaluation metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LLMs, by contrast, introduce variability and guesswork.&lt;/p&gt;

&lt;h3&gt;
  
  
  The cost problem
&lt;/h3&gt;

&lt;p&gt;Even small LLMs are expensive compared to classical ML.&lt;br&gt;&lt;br&gt;
They require more compute, more memory, and often GPU acceleration.&lt;br&gt;&lt;br&gt;
A simple logistic regression or random forest runs on a tiny CPU with millisecond latency and almost zero cost.&lt;br&gt;&lt;br&gt;
An LLM introduces unnecessary overhead.&lt;/p&gt;

&lt;h3&gt;
  
  
  The engineering problem
&lt;/h3&gt;

&lt;p&gt;LLMs are harder to test, harder to monitor, and offer no guarantee of deterministic behavior.&lt;br&gt;&lt;br&gt;
For structured prediction, this is unnecessary complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  So is “LLM everything” acceptable?
&lt;/h3&gt;

&lt;p&gt;Only if you do not care about accuracy, cost, latency, interpretability, or operational simplicity.&lt;br&gt;&lt;br&gt;
Real projects always care about these things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison of ML vs LLM for Structured Prediction
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Classical Machine Learning&lt;/th&gt;
&lt;th&gt;Large Language Models&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Accuracy on structured data&lt;/td&gt;
&lt;td&gt;High accuracy with stable, well calibrated predictions&lt;/td&gt;
&lt;td&gt;Lower accuracy, pattern guessing rather than statistical learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;Milliseconds on CPU&lt;/td&gt;
&lt;td&gt;Tens to hundreds of milliseconds, often requires GPU&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost per prediction&lt;/td&gt;
&lt;td&gt;Extremely low&lt;/td&gt;
&lt;td&gt;Significantly higher, especially at scale&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;Scales cheaply on commodity hardware&lt;/td&gt;
&lt;td&gt;Scaling requires more compute and higher operational cost&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Interpretability&lt;/td&gt;
&lt;td&gt;Clear metrics, feature importance, reproducible behavior&lt;/td&gt;
&lt;td&gt;Hard to interpret, non deterministic, difficult to validate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operational complexity&lt;/td&gt;
&lt;td&gt;Simple to test, monitor, and deploy&lt;/td&gt;
&lt;td&gt;Harder to test, monitor, and guarantee consistent outputs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best suited for&lt;/td&gt;
&lt;td&gt;Risk scoring, forecasting, classification, anomaly detection&lt;/td&gt;
&lt;td&gt;Text reasoning, summarization, multi modal understanding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Overall fit for structured prediction&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;td&gt;Acceptable only with compromises in cost and accuracy&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Real World Comparison
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Scikit Learn ML vs Claude 4.7 LLM for a 10 GB Price Prediction Dataset
&lt;/h2&gt;

&lt;p&gt;In real enterprise environments, teams often ask whether an LLM can replace a classical ML model for large scale prediction tasks.&lt;br&gt;&lt;br&gt;
So let us take a concrete scenario:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A 10 GB Excel dataset for price prediction.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Which tool performs better?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A well defined scikit learn pipeline wins every time.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Claude 4.7 is slower, less accurate, and dramatically more expensive.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why ML Wins
&lt;/h2&gt;

&lt;p&gt;When the task is structured prediction on a large dataset, classical ML does not just win — it wins decisively. And the reasons become even clearer when you look at the actual tools used in real projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  ML can train on the full dataset
&lt;/h3&gt;

&lt;p&gt;A scikit learn pipeline can load and process the entire 10 GB dataset using the Python data stack that enterprises rely on every day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pandas&lt;/strong&gt; for ingestion and cleaning

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pandas.read_csv&lt;/code&gt; to load large files in chunks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DataFrame.merge&lt;/code&gt; to join datasets&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DataFrame.fillna&lt;/code&gt; to handle missing values&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;numpy&lt;/strong&gt; for vectorized numerical operations&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;scikit learn&lt;/strong&gt; for modeling

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;RandomForestRegressor&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GradientBoostingRegressor&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;train_test_split&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Pipeline&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;StandardScaler&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
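
&lt;p&gt;As a rough sketch, the ingestion steps above might look like the following. The file and column names here are illustrative stand-ins, not taken from any real dataset; the tiny CSVs are written inline only to keep the sketch self-contained.&lt;/p&gt;

```python
import pandas as pd

# Tiny stand-in CSVs so the sketch is self-contained;
# a real 10 GB dataset would already be on disk.
pd.DataFrame({"item_id": [1, 2, 3], "price": [9.5, None, 4.0]}).to_csv("prices.csv", index=False)
pd.DataFrame({"item_id": [1, 2], "category": ["a", "b"]}).to_csv("features.csv", index=False)

# Load a large file in chunks instead of all at once (chunksize is illustrative).
chunks = pd.read_csv("prices.csv", chunksize=100_000)
df = pd.concat(chunks, ignore_index=True)

# Join datasets and handle missing values.
df = df.merge(pd.read_csv("features.csv"), on="item_id", how="left")
df = df.fillna({"price": df["price"].mean(), "category": "unknown"})
```

&lt;p&gt;The same pattern scales up: chunked reads keep memory bounded, and the merge and fill steps stay identical regardless of dataset size.&lt;/p&gt;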

&lt;p&gt;These libraries are built for structured data at scale.&lt;br&gt;&lt;br&gt;
They learn real statistical relationships instead of guessing patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  ML produces stable, reproducible predictions
&lt;/h3&gt;

&lt;p&gt;With scikit learn, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set &lt;code&gt;random_state&lt;/code&gt; for deterministic behavior&lt;/li&gt;
&lt;li&gt;Evaluate models with &lt;code&gt;cross_val_score&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Inspect feature importance&lt;/li&gt;
&lt;li&gt;Tune hyperparameters with &lt;code&gt;GridSearchCV&lt;/code&gt; or &lt;code&gt;RandomizedSearchCV&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives you a model that behaves the same way every time.&lt;/p&gt;
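
&lt;p&gt;A minimal sketch of that reproducible workflow, using synthetic data rather than the project’s dataset:&lt;/p&gt;

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic regression data with a known linear signal.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Fixed random_state makes every run reproducible.
model = Pipeline([
    ("scale", StandardScaler()),
    ("forest", RandomForestRegressor(n_estimators=50, random_state=0)),
])

# 5-fold cross-validation gives a stable estimate of generalization.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())  # identical on every run
```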

&lt;h3&gt;
  
  
  ML runs cheaply and efficiently
&lt;/h3&gt;

&lt;p&gt;A trained scikit learn model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs on CPU&lt;/li&gt;
&lt;li&gt;Responds in milliseconds&lt;/li&gt;
&lt;li&gt;Costs almost nothing per prediction&lt;/li&gt;
&lt;li&gt;Scales horizontally with minimal infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why ML is used in production systems where cost and latency matter.&lt;/p&gt;

&lt;h3&gt;
  
  
  ML integrates cleanly into production
&lt;/h3&gt;

&lt;p&gt;With Python’s ecosystem, you can deploy the model using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI&lt;/strong&gt; for serving predictions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; for packaging the environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives you a clean, maintainable architecture that fits naturally into modern DevOps and MLOps workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Claude 4.7 Loses
&lt;/h2&gt;

&lt;p&gt;Claude 4.7 is powerful, but it is not built for this category of problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  It cannot train on 10 GB of structured data
&lt;/h3&gt;

&lt;p&gt;Claude can only &lt;em&gt;sample&lt;/em&gt; or &lt;em&gt;summarize&lt;/em&gt; chunks of the dataset.&lt;br&gt;&lt;br&gt;
It cannot compute gradients, optimize a loss function, or learn the full distribution.&lt;/p&gt;

&lt;h3&gt;
  
  
  It guesses patterns instead of learning them
&lt;/h3&gt;

&lt;p&gt;LLMs are language models, not regression engines.&lt;br&gt;&lt;br&gt;
They infer trends from text, not from numerical relationships.&lt;/p&gt;

&lt;h3&gt;
  
  
  It is slower and more expensive
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Higher latency&lt;/li&gt;
&lt;li&gt;Higher compute cost&lt;/li&gt;
&lt;li&gt;Requires chunking and repeated calls&lt;/li&gt;
&lt;li&gt;Cannot be cached effectively&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  It introduces non deterministic behavior
&lt;/h3&gt;

&lt;p&gt;Even with the same prompt, outputs can vary.&lt;br&gt;&lt;br&gt;
This is unacceptable for financial, operational, or regulatory workloads.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Claude 4.7 Is Still Useful
&lt;/h2&gt;

&lt;p&gt;LLMs are excellent assistants for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exploratory analysis&lt;/li&gt;
&lt;li&gt;Explaining trends&lt;/li&gt;
&lt;li&gt;Suggesting features&lt;/li&gt;
&lt;li&gt;Cleaning messy text columns&lt;/li&gt;
&lt;li&gt;Generating documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But &lt;strong&gt;not&lt;/strong&gt; for the core predictive model.&lt;/p&gt;

&lt;p&gt;For large structured datasets and numerical prediction tasks:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Use ML for the model.&lt;br&gt;&lt;br&gt;
Use LLMs for support.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the architecture that delivers accuracy, cost efficiency, and operational stability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Machine Learning as Part of AI
&lt;/h2&gt;

&lt;p&gt;Machine learning is one of the foundational pillars of AI. It learns patterns from structured data and uses those patterns to make predictions.&lt;/p&gt;

&lt;h3&gt;
  
  
  When ML is the right tool
&lt;/h3&gt;

&lt;p&gt;Use ML when the problem involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Numerical prediction&lt;/li&gt;
&lt;li&gt;Classification on structured data&lt;/li&gt;
&lt;li&gt;Statistical relationships&lt;/li&gt;
&lt;li&gt;Low latency inference&lt;/li&gt;
&lt;li&gt;Clear evaluation metrics&lt;/li&gt;
&lt;li&gt;Reproducible behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When LLMs are the right tool
&lt;/h3&gt;

&lt;p&gt;Use LLMs when the problem involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understanding or generating text&lt;/li&gt;
&lt;li&gt;Summarizing documents&lt;/li&gt;
&lt;li&gt;Reasoning across unstructured information&lt;/li&gt;
&lt;li&gt;Conversational interfaces&lt;/li&gt;
&lt;li&gt;Multi modal inputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple rule of thumb:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the question is &lt;strong&gt;“Given these numbers, what is the probability of X?”&lt;/strong&gt;, use ML.&lt;/li&gt;
&lt;li&gt;If the question is &lt;strong&gt;“Given this text, what does it mean?”&lt;/strong&gt;, use an LLM.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What an ML Pipeline Really Is
&lt;/h2&gt;

&lt;p&gt;An ML pipeline is the journey from raw data to a working model.&lt;br&gt;&lt;br&gt;
It is not a single script. It is a repeatable, structured process.&lt;/p&gt;

&lt;p&gt;A complete ML pipeline includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data ingestion&lt;/li&gt;
&lt;li&gt;Data cleaning and preparation&lt;/li&gt;
&lt;li&gt;Exploratory data analysis&lt;/li&gt;
&lt;li&gt;Feature engineering&lt;/li&gt;
&lt;li&gt;Model training and evaluation&lt;/li&gt;
&lt;li&gt;Model packaging&lt;/li&gt;
&lt;li&gt;Deployment and monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pipeline ensures that the work is reproducible, traceable, and ready for automation.&lt;/p&gt;




&lt;h2&gt;
  
  
  What MLOps Means
&lt;/h2&gt;

&lt;p&gt;MLOps is the operational discipline that keeps machine learning systems healthy in production.&lt;br&gt;&lt;br&gt;
It brings together DevOps, data engineering, and model lifecycle management.&lt;/p&gt;

&lt;p&gt;MLOps focuses on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Versioning of data, code, and models&lt;/li&gt;
&lt;li&gt;Automated training and retraining&lt;/li&gt;
&lt;li&gt;Continuous integration and delivery&lt;/li&gt;
&lt;li&gt;Monitoring model drift and performance&lt;/li&gt;
&lt;li&gt;Scalable deployment patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If ML is the engine, MLOps is the system that keeps the engine running safely at scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  Example: The Vancouver Traffic Accident Risk Predictor
&lt;/h2&gt;

&lt;p&gt;This project analyzes weather and traffic accident data in Vancouver and builds a predictive model to estimate accident risk under different conditions.&lt;br&gt;&lt;br&gt;
It follows a complete ML pipeline from exploration to deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to Weather and Traffic Accident Analysis in Vancouver
&lt;/h3&gt;

&lt;p&gt;The project begins with a simple question:&lt;br&gt;&lt;br&gt;
How does weather influence accident risk in the city?&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Sources and Analytical Tools
&lt;/h3&gt;

&lt;p&gt;The project uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Traffic accident records:&lt;/em&gt; Traffic accident data is sourced from the City of Vancouver’s Strategic Plan Dashboard for reliability and accuracy. The dashboard provides detailed and regularly updated records essential for comprehensive traffic analysis.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Historical weather data:&lt;/em&gt; Weatherstats.ca aggregates data from Environment and Climate Change Canada for accurate meteorological information. The data encompasses a wide range of weather variables ensuring thorough climate analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tools include Python, Pandas, Matplotlib, Bokeh, Scikit Learn, FastAPI, and Docker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Cleaning and Preparation
&lt;/h3&gt;

&lt;p&gt;This stage merges datasets, handles missing values, and prepares the final analytical table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5welmdvicg97a4hz60l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5welmdvicg97a4hz60l.png" width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0xu0elr365nrmsfnu83.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0xu0elr365nrmsfnu83.png" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Exploratory Data Analysis
&lt;/h3&gt;

&lt;p&gt;EDA reveals patterns such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher accident frequency during snow &lt;/li&gt;
&lt;li&gt;Seasonal variations&lt;/li&gt;
&lt;li&gt;Time of day risk levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i750dejm0wj1p4fhtmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i750dejm0wj1p4fhtmv.png" width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzr5d91mjgb48ltzxt1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzr5d91mjgb48ltzxt1u.png" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8psrv6q8cg3vemlkgii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8psrv6q8cg3vemlkgii.png" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Visualizing Key Trends
&lt;/h2&gt;

&lt;p&gt;Charts help identify correlations and guide feature selection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxa9bdsyzx66lzl6ll7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxa9bdsyzx66lzl6ll7o.png" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16z3xqz8w7xr682c05hx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16z3xqz8w7xr682c05hx.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0akkdtcpph7rf84x8bpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0akkdtcpph7rf84x8bpi.png" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwy1ngszy89azqjxhsgt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwy1ngszy89azqjxhsgt5.png" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Predictive Modeling
&lt;/h3&gt;

&lt;p&gt;The dataset is split into training and testing sets.&lt;br&gt;&lt;br&gt;
Models such as random forest are trained to predict accident risk.&lt;br&gt;&lt;br&gt;
Evaluation metrics confirm generalization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv639pz8hclfesgdabbvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv639pz8hclfesgdabbvm.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml96vpbw7fjx0dkh9nca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml96vpbw7fjx0dkh9nca.png" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Accuracy Metric:&lt;/em&gt; Accuracy measures the overall correctness of the predictive model by comparing the number of correct predictions to the total number of predictions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Precision Metric:&lt;/em&gt; Precision indicates how many of the positive predictions made by the model are actually correct.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Recall Metric:&lt;/em&gt; Recall assesses the model’s ability to identify all relevant positive cases in the dataset.&lt;/p&gt;
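
&lt;p&gt;Those three metrics map directly onto scikit learn functions. The labels below are illustrative, not the project’s actual predictions:&lt;/p&gt;

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]  # actual high-risk labels (illustrative)
y_pred = [1, 0, 1, 0, 0, 1]  # model predictions (illustrative)

print(accuracy_score(y_true, y_pred))   # correct predictions / total predictions
print(precision_score(y_true, y_pred))  # of predicted positives, how many were right
print(recall_score(y_true, y_pred))     # of actual positives, how many were found
```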

&lt;p&gt;To increase the accuracy of our ML model, we can tune its hyperparameters:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh1srad0muuwwx2yyzp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh1srad0muuwwx2yyzp2.png" width="800" height="813"&gt;&lt;/a&gt;&lt;/p&gt;
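
&lt;p&gt;That tuning step can be sketched with &lt;code&gt;GridSearchCV&lt;/code&gt;. The parameter grid and the synthetic data below are illustrative; the project’s actual grid may differ:&lt;/p&gt;

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data with a learnable signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Exhaustively search a small illustrative grid with 3-fold cross-validation.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=3,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```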

&lt;h3&gt;
  
  
  Insights and Conclusions
&lt;/h3&gt;

&lt;p&gt;The analysis shows how weather patterns influence accident probability and demonstrates the value of structured ML for public safety insights.&lt;/p&gt;




&lt;h2&gt;
  
  
  Productionization and Deployment
&lt;/h2&gt;

&lt;p&gt;A model becomes valuable only when it can be used by real applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  FastAPI Model Server
&lt;/h3&gt;

&lt;p&gt;The trained model is wrapped in a FastAPI application that exposes a prediction endpoint.&lt;br&gt;&lt;br&gt;
An in memory prediction cache provides low latency responses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eb1pmhphe5vykdtsla9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eb1pmhphe5vykdtsla9.png" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7ae9ak9hlov02tjopul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7ae9ak9hlov02tjopul.png" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;
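
&lt;p&gt;A simplified stand-in for such an in memory cache: repeated inputs are served without recomputation. The scoring logic here is invented for illustration; the real service caches the trained model’s predictions.&lt;/p&gt;

```python
from functools import lru_cache

# Cache predictions for repeated (snow, hour) inputs; a real service
# would call the trained model inside the cached function.
@lru_cache(maxsize=1024)
def predict_risk(snow_cm: float, hour: int) -> float:
    # Stand-in scoring logic, not the project's model.
    base = 0.1 + 0.05 * snow_cm
    rush = 0.1 if hour in (8, 17) else 0.0
    return min(base + rush, 1.0)

predict_risk(2.0, 8)  # computed on first call
predict_risk(2.0, 8)  # served from the cache, no recomputation
```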

&lt;h3&gt;
  
  
  Dockerized Environment
&lt;/h3&gt;

&lt;p&gt;The entire environment is packaged in Docker, ensuring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistent runtime&lt;/li&gt;
&lt;li&gt;Easy local testing&lt;/li&gt;
&lt;li&gt;Seamless deployment to any container platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This closes the loop from exploration to production.&lt;/p&gt;
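
&lt;p&gt;A typical Dockerfile for this kind of service looks roughly like the following. File names such as &lt;code&gt;main.py&lt;/code&gt; and &lt;code&gt;requirements.txt&lt;/code&gt; are assumptions for illustration, not taken from the repository:&lt;/p&gt;

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code, including the trained model artifact.
COPY . .

# Serve the FastAPI app (assumes the app object lives in main.py).
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```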




&lt;h2&gt;
  
  
  Why This Matters to Your Business
&lt;/h2&gt;

&lt;p&gt;Every organization today is under pressure to adopt AI, but the real advantage comes from choosing the right tool for the right problem.&lt;br&gt;&lt;br&gt;
This article highlights a simple but often overlooked truth:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not every AI problem needs an LLM. Many business problems are solved faster, cheaper, and more reliably with classical ML.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For business leaders, this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower operational cost&lt;/li&gt;
&lt;li&gt;Faster time to value&lt;/li&gt;
&lt;li&gt;More predictable performance&lt;/li&gt;
&lt;li&gt;Easier compliance and governance&lt;/li&gt;
&lt;li&gt;Clearer ROI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Vancouver project is not just a technical exercise. It is a demonstration of how disciplined ML engineering can deliver practical, measurable outcomes without unnecessary complexity.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Machine learning is not a relic from the pre LLM era.&lt;br&gt;&lt;br&gt;
It is a precise, efficient, and reliable discipline that solves structured prediction problems better than anything else.&lt;br&gt;&lt;br&gt;
The Vancouver Traffic Accident Risk Predictor demonstrates how ML pipelines, MLOps practices, and lightweight deployment patterns come together in a real project.&lt;/p&gt;

&lt;p&gt;If your team is exploring AI adoption, modernizing analytics, or evaluating where ML and LLMs fit into your roadmap, I am always open to meaningful conversations.&lt;br&gt;&lt;br&gt;
Whether you are building your first predictive model or scaling AI across the organization, the right architecture and the right tool choice make all the difference. Feel free to connect.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth.&lt;br&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/from-machine-learning-to-production-a-practical-walkthrough-using-my-vancouver-traffic-accident-risk-predictor/" rel="noopener noreferrer"&gt;From Machine Learning to Production: A Practical Walkthrough Using My Vancouver Traffic Accident Risk Predictor&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>When AI Agents Transact: How Interaction Surface Mobility Redefines the Future of Payments </title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Sat, 09 May 2026 03:46:22 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/when-ai-agents-transact-how-interaction-surface-mobility-redefines-the-future-of-payments-4gof</link>
      <guid>https://forem.com/jonathan78wong/when-ai-agents-transact-how-interaction-surface-mobility-redefines-the-future-of-payments-4gof</guid>
      <description>&lt;p&gt;The story of digital payments has always been a story about movement.&lt;br&gt;Not just the movement of money, but the movement of the place where intention begins, where authentication happens, and where the transaction is finally executed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/agents-that-transact-introducing-amazon-bedrock-agentcore-payments-built-with-coinbase-and-stripe/" rel="noopener noreferrer"&gt;Amazon’s introduction of Amazon Bedrock AgentCore Payments&lt;/a&gt; marks the beginning of a new chapter in that story. It is a chapter where AI agents no longer wait for users to initiate payments. Instead, they transact autonomously, safely, and with full governance. And to understand why this matters, we need to look at how the interaction surface has been moving for more than two decades.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;A New Foundation for Agent Payments&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;AgentCore Payments is a fully managed payment layer that allows AI agents to pay for APIs, data, MCP servers, and even other agents. It integrates Coinbase and Stripe to support microtransactions, stablecoin payments, identity bound wallets, spending guardrails, and full observability.&lt;/p&gt;

&lt;p&gt;In the past, building this kind of payment capability required months of engineering work. Wallet management. Compliance. Guardrails. Billing logic. Error handling. AgentCore Payments removes all of that complexity. Payments become part of the agent execution loop, not a separate system bolted on the side.&lt;/p&gt;

&lt;p&gt;This shift becomes clear when we look at how an agent handles a simple request such as analyzing Amazon stock.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;How It Works&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Once the agent realizes it needs paid data, the rest of the process is handled entirely by AgentCore Payments. The system provides a complete payment foundation inside the agent execution loop. It connects wallets, executes microtransactions, enforces spending rules, and records every event for governance and audit.&lt;/p&gt;

&lt;p&gt;AgentCore Payments includes five core capabilities that work together as a single runtime layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Payment orchestration&lt;/strong&gt;&lt;br&gt;The platform manages wallet connections, establishes secure sessions with providers such as Coinbase and Stripe, and executes payments on behalf of the agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Payment guardrails&lt;/strong&gt;&lt;br&gt;Every transaction is checked against authorization rules and spending limits. This prevents runaway costs and ensures the agent stays within the boundaries defined by the user or the organization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unified identity for agents&lt;/strong&gt;&lt;br&gt;Each agent operates under a consistent identity that ties together permissions, wallet access, and spending policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability across all payment events&lt;/strong&gt;&lt;br&gt;Every payment attempt, success, failure, and retry is logged. This gives teams full visibility into how agents are spending and why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native integration with agent execution loops&lt;/strong&gt;&lt;br&gt;Payments are not an external system. They are part of the agent’s reasoning and tool calling cycle. This allows agents to autonomously discover, evaluate, and pay for resources as part of completing a task. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.jonanata.com%2Fwp-content%2Fuploads%2F2026%2F05%2Fimage-3-1024x665.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="520" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.jonanata.com%2Fwp-content%2Fuploads%2F2026%2F05%2Fimage-3-1024x665.png" alt=""&gt;&lt;/a&gt;Source: &lt;a href="https://aws.amazon.com/blogs/machine-learning/agents-that-transact-introducing-amazon-bedrock-agentcore-payments-built-with-coinbase-and-stripe/" rel="noopener noreferrer"&gt;Agents that transact: Introducing Amazon Bedrock AgentCore payments, built with Coinbase and Stripe | Artificial Intelligence&lt;/a&gt;  &lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Example: An “Analyze Amazon Stock” Enquiry&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;A user asks the agent to analyze Amazon stock.&lt;br&gt;The agent determines that real time financial data is required, and that the data source is paid.&lt;br&gt;It reaches out to the provider.&lt;br&gt;At that moment, AgentCore Payments takes over.&lt;/p&gt;

&lt;p&gt;It authenticates the wallet.&lt;br&gt;It executes the microtransaction.&lt;br&gt;It checks spending guardrails.&lt;br&gt;It logs the entire event for observability.&lt;/p&gt;

&lt;p&gt;Once the payment clears, the agent receives the data.&lt;br&gt;It completes the analysis and returns the result to the user.&lt;/p&gt;

&lt;p&gt;This example demonstrates the core value of AgentCore Payments.&lt;br&gt;Agents can autonomously transact inside a single execution loop without any custom payment logic.&lt;/p&gt;
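&lt;p&gt;As a rough illustration, the loop above can be sketched in a few lines of Python. Every class, function, and field name here is hypothetical; this is not the real AgentCore Payments API, only the general shape of a guardrailed, logged microtransaction inside an agent execution loop.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch of one agent payment step: guardrail check,
# wallet authentication, microtransaction, and event logging.

class Wallet:
    def authenticate(self):
        return self  # a real system would establish a provider session here

    def pay(self, endpoint, amount_usd):
        # A real provider (for example Stripe or Coinbase) would be called here.
        return {"endpoint": endpoint, "amount_usd": amount_usd}

class Ledger:
    """Records every payment event for observability."""
    def __init__(self):
        self.events = []

    def log(self, kind, name, amount_usd):
        self.events.append((kind, name, amount_usd))

    def total_spent(self):
        return sum(amount for kind, _, amount in self.events if kind == "paid")

def pay_for_resource(resource, wallet, max_usd, ledger):
    price = resource["price_usd"]
    remaining = max_usd - ledger.total_spent()
    if max(0.0, price - remaining):  # true when price exceeds the remaining budget
        ledger.log("denied", resource["name"], price)
        raise PermissionError("spending limit reached")
    session = wallet.authenticate()                     # identity-bound session
    receipt = session.pay(resource["endpoint"], price)  # microtransaction
    ledger.log("paid", resource["name"], price)         # observability
    return receipt  # with payment cleared, the agent can fetch the data
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The point of the sketch is the structure: the guardrail is evaluated before any money moves, and every outcome, paid or denied, is logged for audit.&lt;/p&gt;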




&lt;h2&gt;&lt;strong&gt;Interaction Surface Mobility: A Short Explanation&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;To understand why this shift is so significant, we need a new lens.&lt;br&gt;Historically, people talked about device mobility. Desktop to laptop to mobile to wearables. But the real story is not about devices. It is about the interaction surface. The place where intention originates, where authentication happens, and where payments are triggered.&lt;/p&gt;

&lt;p&gt;Interaction Surface Mobility describes how this surface keeps moving closer to the user’s life.&lt;br&gt;From the desk.&lt;br&gt;To the pocket.&lt;br&gt;To the environment.&lt;br&gt;And now into the cloud, where AI agents act on our behalf.&lt;/p&gt;

&lt;p&gt;This mobility shapes how payments work, how businesses design services, and how value flows across digital ecosystems.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;The Four Eras of Interaction Surface Mobility&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Before we explore the eras, it helps to understand the underlying structure.&lt;br&gt;Payment flows follow a consistent pattern:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interaction surface → service starting point → intention → authentication → channel&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This formula becomes the backbone for understanding how payments evolve as the interaction surface becomes more mobile and more embedded in daily life.&lt;/p&gt;

&lt;p&gt;But what is actually changing in this new era?&lt;br&gt;The primary shift is the movement of the interaction surface itself.&lt;br&gt;As the interaction surface moves, everything downstream changes with it.&lt;br&gt;The service starting point moves.&lt;br&gt;The intention model changes.&lt;br&gt;The authentication model changes.&lt;br&gt;The payment channel changes.&lt;br&gt;The interaction surface is the driver.&lt;br&gt;The rest of the flow is the consequence.&lt;/p&gt;

&lt;p&gt;With this causal structure in mind, the four eras become clear.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table: The Evolution of Interaction Surface Mobility&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;th&gt;Era&lt;/th&gt;
&lt;th&gt;Interaction surface&lt;/th&gt;
&lt;th&gt;Service starting point&lt;/th&gt;
&lt;th&gt;Intention&lt;/th&gt;
&lt;th&gt;Auth&lt;/th&gt;
&lt;th&gt;Channel&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Desktop Web&lt;/td&gt;
&lt;td&gt;Desktop&lt;/td&gt;
&lt;td&gt;Website&lt;/td&gt;
&lt;td&gt;User initiated&lt;/td&gt;
&lt;td&gt;Manual login&lt;/td&gt;
&lt;td&gt;Web payment page&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mobile App&lt;/td&gt;
&lt;td&gt;Mobile phone&lt;/td&gt;
&lt;td&gt;App&lt;/td&gt;
&lt;td&gt;User initiated&lt;/td&gt;
&lt;td&gt;Biometric&lt;/td&gt;
&lt;td&gt;Mobile wallet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud AI Services&lt;/td&gt;
&lt;td&gt;Cloud agents&lt;/td&gt;
&lt;td&gt;Cloud workflows&lt;/td&gt;
&lt;td&gt;AI interpreted&lt;/td&gt;
&lt;td&gt;Pre authorized agent identity&lt;/td&gt;
&lt;td&gt;Agent to agent payment service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ambient AI&lt;/td&gt;
&lt;td&gt;Ambient compute&lt;/td&gt;
&lt;td&gt;Autonomous AI workflows&lt;/td&gt;
&lt;td&gt;AI reasoning&lt;/td&gt;
&lt;td&gt;Identity bound spending guardrails&lt;/td&gt;
&lt;td&gt;Autonomous payment protocols&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;Era 1: The Desktop Web&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the beginning, the interaction surface was fixed.&lt;br&gt;People sat at a desk, opened a browser, and intentionally navigated to a payment page. Every action was explicit. Every step was manual. Payments were a destination, not a flow.&lt;/p&gt;

&lt;p&gt;This era shaped the first generation of online commerce. But it was limited by the immobility of the interaction surface. The user had to go to the computer. The computer never followed the user.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Era 2: The Mobile App&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Then the interaction surface moved into the pocket.&lt;br&gt;The phone became the center of digital life.&lt;br&gt;Apps replaced websites.&lt;br&gt;Biometrics replaced passwords.&lt;br&gt;Wallets replaced card forms.&lt;/p&gt;

&lt;p&gt;Payments became faster, more personal, and more contextual.&lt;br&gt;This era created ride hailing, food delivery, and mobile commerce.&lt;br&gt;It also marked the beginning of lifestyle mobility.&lt;br&gt;People no longer went to the payment interface.&lt;br&gt;The payment interface went with them.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Era 3: Cloud Based AI Services&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The next shift was subtle but profound.&lt;br&gt;The interaction surface moved off the device entirely and into the cloud.&lt;br&gt;AI agents began performing tasks on behalf of users.&lt;br&gt;They interpreted intention.&lt;br&gt;They initiated workflows.&lt;br&gt;They accessed paid resources.&lt;/p&gt;

&lt;p&gt;But payments were still a problem.&lt;br&gt;Agents could not pay for anything without custom engineering.&lt;br&gt;Wallets were not agent native.&lt;br&gt;Guardrails were not standardized.&lt;br&gt;Governance was fragmented.&lt;/p&gt;

&lt;p&gt;AgentCore Payments solves this.&lt;br&gt;It gives agents a native way to transact, with identity, guardrails, and observability built in.&lt;br&gt;This is the first real payment system designed for autonomous agents.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Era 4: Ambient AI&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This is the era we are entering now.&lt;br&gt;The interaction surface becomes the environment itself.&lt;br&gt;Homes, cars, offices, glasses, wearables, sensors, and cloud agents all become part of a continuous ambient layer.&lt;/p&gt;

&lt;p&gt;Intention is no longer expressed.&lt;br&gt;It is reasoned.&lt;br&gt;Sometimes even anticipated through context.&lt;/p&gt;

&lt;p&gt;Authentication becomes a set of identity bound spending rules.&lt;br&gt;Channels become autonomous payment protocols.&lt;br&gt;Transactions become micro events inside larger workflows.&lt;/p&gt;

&lt;p&gt;In this world, payments are not actions.&lt;br&gt;They are side effects of intelligent systems doing their work.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Why This Matters for Business&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Interaction Surface Mobility is not just a technical evolution.&lt;br&gt;It is a business transformation.&lt;/p&gt;

&lt;p&gt;Payments become invisible.&lt;br&gt;Intention becomes fluid.&lt;br&gt;Authentication becomes ambient.&lt;br&gt;Channels become agent native.&lt;br&gt;Business models shift from subscriptions to usage based to agent based.&lt;/p&gt;

&lt;p&gt;Agents will buy data.&lt;br&gt;Agents will buy compute.&lt;br&gt;Agents will buy services.&lt;br&gt;Agents will buy from other agents.&lt;/p&gt;

&lt;p&gt;The companies that understand this shift will design products for a world where the user is no longer the primary actor in the payment flow.&lt;br&gt;The agent is.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Closing Thought&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;The movement of the interaction surface has always reshaped the payment landscape.&lt;br&gt;From the desk.&lt;br&gt;To the pocket.&lt;br&gt;To the environment.&lt;br&gt;And now into the cloud, where agents transact on our behalf.&lt;/p&gt;

&lt;p&gt;AgentCore Payments is not just a new feature.&lt;br&gt;It is the infrastructure for the next era of commerce.&lt;br&gt;An era defined by Interaction Surface Mobility, where payments become autonomous, contextual, and woven into the fabric of intelligent systems.  &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth. &lt;br&gt;&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;  &lt;/p&gt;



&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/when-ai-agents-transact-how-interaction-surface-mobility-redefines-the-future-of-payments/" rel="noopener noreferrer"&gt;When AI Agents Transact: How Interaction Surface Mobility Redefines the Future of Payments &lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
    </item>
    <item>
      <title>A Pre‑AI Lesson for the AI Era: Scrum in a Cross‑Regional Cloud Migration Delivered in Ten Weeks</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Wed, 06 May 2026 23:23:33 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/a-pre-ai-lesson-for-the-ai-era-scrum-in-a-cross-regional-cloud-migration-delivered-in-ten-weeks-d2p</link>
      <guid>https://forem.com/jonathan78wong/a-pre-ai-lesson-for-the-ai-era-scrum-in-a-cross-regional-cloud-migration-delivered-in-ten-weeks-d2p</guid>
      <description>&lt;p&gt;A few years ago, I stepped into one of the most challenging and transformative projects of my career as a Cloud Architect. I was responsible for leading a cross‑regional team of more than twenty developers across Hong Kong and China. With Scrum, Atlassian JIRA, and Confluence as our backbone, we rebuilt an AWS‑based JEE Spring microservices platform using Kafka and PostgreSQL and migrated it fully to Azure. The original engineering estimate was six months. We delivered it in ten weeks.&lt;/p&gt;

&lt;p&gt;This is the story of how alignment, structure, and communication reshaped the entire organisation.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;The project began with a strong team but a fragmented operating model. Everyone worked hard, but the lack of shared structure created delays and misunderstandings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Cross‑regional team of more than twenty developers across Hong Kong and China&lt;br&gt;• Many engineers were domain experts with strong CI and CD foundations&lt;br&gt;• Multiple teams involved including product, engineering, sales, and customer success&lt;br&gt;• Hardworking culture but no unified workflow&lt;br&gt;• Required to migrate a cloud product to Azure at a client’s request&lt;br&gt;• Engineering estimated six months, which the business could not accept&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The team had the talent and capability, but without alignment, the project was heading toward an unacceptable timeline.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;My Role in the Transformation&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;As the Cloud Architect leading this initiative, I became the bridge between product, engineering, sales, and customer success. My role was not only technical but also organisational. I facilitated communication, explained Scrum practices in simple and practical ways, and guided the teams to adopt a structured, transparent workflow. By aligning expectations, enforcing clarity, and coaching the teams through Agile execution, I helped transform the project from fragmented chaos into a predictable, collaborative delivery model.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;Problems and the Original Situation&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;The delays were not caused by technical difficulty. They were caused by fragmented communication, inconsistent requirement handling, and a workflow that depended heavily on verbal instructions and individual memory. The actual “before” situation was far more chaotic than a simple misalignment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Product team sometimes gave requirements verbally through phone calls, WhatsApp or WeChat  &lt;br&gt;• Product features were told directly to individual engineers or salespeople&lt;br&gt;• Different parts of the same requirement were distributed through different channels such as email, phone, and chat&lt;br&gt;• Customer success often contacted individual engineers or product members suddenly&lt;br&gt;• No one had a complete picture of the requirement&lt;br&gt;• Engineering teams lacked visibility into each other’s status and technical availability&lt;br&gt;• Developers were frequently switched between tasks, losing focus&lt;br&gt;• Management had no visibility into progress or bottlenecks&lt;br&gt;• CS handled customer issues without knowing what was released&lt;br&gt;• Feedback rarely reached product or engineering in a structured way&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The organisation operated on tribal knowledge. Misunderstandings were common, rework was frequent, and timelines were unpredictable. The six‑month estimate reflected organisational misalignment, not technical complexity.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;How I Solved It Using JIRA, Confluence, and Scrum&lt;/strong&gt;&lt;/h1&gt;

&lt;h2&gt;&lt;strong&gt;Inside the Engineering Teams&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;The engineering team needed structure, clarity, and a predictable rhythm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• All development work documented on the tracker&lt;br&gt;• Engineers only worked on items listed in the tracker&lt;br&gt;• Requirements treated as negotiable conversations&lt;br&gt;• PM updated the tracker after every discussion&lt;br&gt;• Engineers picked the first available item&lt;br&gt;• Story status updated continuously&lt;br&gt;• One engineer worked on one item at a time&lt;br&gt;• Requirements broken into technical tasks&lt;br&gt;• Development discussions moved into JIRA&lt;br&gt;• Work delivered into testing story by story&lt;br&gt;• Tasks created for POC, technical debt, and research&lt;br&gt;• Tasks broken down with dependencies and blockers&lt;br&gt;• One task assigned to one engineer&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;Engineering gained clarity, focus, and a stable delivery rhythm that accelerated progress.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Inside the Product Team&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;The product team needed a consistent way to express requirements so engineering could execute without ambiguity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Features and bugs written with proper structure&lt;br&gt;• Requirements documented from the user perspective&lt;br&gt;• Simple English, point form, short sentences&lt;br&gt;• Stories tested and verified quickly&lt;br&gt;• Backlog groomed regularly and prioritized top to bottom&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The product team became a source of clarity instead of confusion, reducing rework and saving time.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;Knowledge and Task Management&lt;/strong&gt;&lt;/h1&gt;

&lt;h2&gt;&lt;strong&gt;Product, Knowledge, and Requirement Management&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;Information lived in too many places. Teams needed a single source of truth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Confluence used as a centralized panel&lt;br&gt;• Structure followed Space to Pages to Contents&lt;br&gt;• Pages included product requirements, technical documents, and notes&lt;br&gt;• Product requirement pages included goals, milestones, and voting&lt;br&gt;• Technical documents included architecture, installation, and account details&lt;br&gt;• Notes captured meetings, research, and decisions&lt;br&gt;• Confluence pages linked directly to JIRA issues&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;Knowledge became shared, searchable, and consistent across all teams.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Issue Types and Their Purpose&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;Teams needed a common language to describe work, track progress, and estimate timelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Bug for previously working functions that broke&lt;br&gt;• Note for behaviours that became new requirements&lt;br&gt;• Story for the smallest unit of user value&lt;br&gt;• Task for feasibility studies, POC, and technical debt&lt;br&gt;• Epic for large initiatives&lt;br&gt;• Subtask for breaking down work&lt;br&gt;• Enabled velocity tracking and release estimation&lt;br&gt;• Tracked internal and external dependencies&lt;br&gt;• Smart Commits connected code changes to tasks&lt;br&gt;• JIRA release notes notified sales and CS automatically&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The organisation gained predictable delivery, accurate planning, and better cross‑team alignment.&lt;/p&gt;
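&lt;p&gt;For readers unfamiliar with the Smart Commits mentioned above: Jira can parse commands embedded in a commit message, so a single push can log time, comment on, and transition the linked issue. The issue key and wording below are illustrative, but &lt;code&gt;#time&lt;/code&gt;, &lt;code&gt;#comment&lt;/code&gt;, and workflow transition commands follow the documented Smart Commit syntax.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# One commit message updates the Jira issue PROJ-142 (hypothetical key):
# logs 2h of work, adds a comment, and triggers the Done transition.
git commit -m "PROJ-142 #time 2h #comment Migrated consumer config to Azure #done"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the issue key travels with the code change, the JIRA board stayed in sync with the repository without anyone updating tickets by hand.&lt;/p&gt;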




&lt;h1&gt;&lt;strong&gt;Team Meetings to Improve Communication&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;Tools alone were not enough. Teams needed real‑time alignment and shared understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Daily morning standups with engineering&lt;br&gt;• Weekly Monday meeting with all team heads&lt;br&gt;• Customer success joined standups when client feedback needed clarification&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;Communication became continuous, reducing misunderstandings and last‑minute surprises.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;Example: From Product Design through Engineering to CS and Back&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;To show the impact of the transformation, here is a real example of how a feature moved through the organisation before and after the new workflow.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Before&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;The original workflow was chaotic, fragmented, and heavily dependent on verbal communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Product team sometimes gave requirements verbally through phone calls or WhatsApp&lt;br&gt;• Product features were told directly to individual engineers or salespeople&lt;br&gt;• Different parts of the same requirement were distributed through different channels&lt;br&gt;• Customer success often contacted individual engineers suddenly&lt;br&gt;• No one had a complete picture of the requirement&lt;br&gt;• Engineering built based on partial or outdated information&lt;br&gt;• Sales promised features based on verbal conversations&lt;br&gt;• CS handled customer issues without knowing what was released&lt;br&gt;• Feedback rarely reached product or engineering&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The organisation operated on tribal knowledge. Misunderstandings were common, rework was frequent, and timelines were unpredictable.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;After&lt;/strong&gt;&lt;/h2&gt;

&lt;h3&gt;&lt;strong&gt;Product Design&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;• Product team created a clear Confluence page with goal, user story, acceptance criteria, and diagrams&lt;br&gt;• Page linked directly to a JIRA story and tasks&lt;br&gt;• All teams saw the same requirement at the same time&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Engineering Development&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;• Engineers broke the story into tasks and subtasks&lt;br&gt;• Dependencies and blockers were defined&lt;br&gt;• Work delivered story by story into testing&lt;br&gt;• PM verified each story immediately&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Release to CS&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;• JIRA release function automatically pushed release notes to Microsoft Teams&lt;br&gt;• CS team received a clear list of new features, fixes, and version numbers&lt;br&gt;• Sales also received the same release notes&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;CS Feedback Loop&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;• CS tested the new feature with real customers&lt;br&gt;• Feedback added as comments on the Confluence page&lt;br&gt;• Product team reviewed and updated requirements&lt;br&gt;• Engineering received new tasks or improvements linked to the original story&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The entire lifecycle became a closed loop. Every team saw the same truth, reacted quickly, and aligned their actions.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;Before and After Comparison&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;The transformation changed the culture, speed, and clarity of the entire organisation. Presenting the contrast as a table makes the improvement immediately clear.  &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Product Design&lt;/td&gt;
&lt;td&gt;Requirements delivered verbally through phone or WhatsApp&lt;/td&gt;
&lt;td&gt;Requirements documented clearly in Confluence&lt;/td&gt;
&lt;td&gt;Clear documentation replaced verbal memory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Product Design&lt;/td&gt;
&lt;td&gt;Different parts of the same requirement sent through different channels&lt;/td&gt;
&lt;td&gt;Single source of truth for all requirements&lt;/td&gt;
&lt;td&gt;One place for all information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Product Design&lt;/td&gt;
&lt;td&gt;Product features communicated directly to individual engineers or sales&lt;/td&gt;
&lt;td&gt;Structured JIRA stories and tasks shared with all teams&lt;/td&gt;
&lt;td&gt;Everyone received the same message&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Knowledge Management&lt;/td&gt;
&lt;td&gt;No documentation and no shared visibility&lt;/td&gt;
&lt;td&gt;Centralized product and technical knowledge&lt;/td&gt;
&lt;td&gt;Teams aligned on the same information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Engineering &lt;br&gt;Execution&lt;/td&gt;
&lt;td&gt;Frequent rework and misaligned expectations&lt;/td&gt;
&lt;td&gt;Predictable engineering workflow with clear dependencies&lt;/td&gt;
&lt;td&gt;Work became stable and predictable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Release &lt;br&gt;Management&lt;/td&gt;
&lt;td&gt;No release notes for CS or sales&lt;/td&gt;
&lt;td&gt;Automated release notes through JIRA to Microsoft Teams&lt;/td&gt;
&lt;td&gt;All teams stayed informed on every release&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Customer Success&lt;/td&gt;
&lt;td&gt;CS feedback delivered suddenly to individuals&lt;/td&gt;
&lt;td&gt;CS feedback captured in Confluence and linked to JIRA&lt;/td&gt;
&lt;td&gt;Feedback became structured and trackable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delivery Timeline&lt;/td&gt;
&lt;td&gt;Six month timeline estimate&lt;/td&gt;
&lt;td&gt;Ten week delivery&lt;/td&gt;
&lt;td&gt;Alignment accelerated delivery dramatically&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The organisation shifted from reactive chaos to proactive alignment, with every team operating from the same truth.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;What This Solved and the Result&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Situation&lt;/strong&gt;&lt;br&gt;Once structure, communication, and knowledge alignment were in place, the entire organisation began to move faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points&lt;/strong&gt;&lt;br&gt;• Centralized visibility improved planning and saved time&lt;br&gt;• Clear product goals reduced misunderstanding&lt;br&gt;• Early validation reduced downstream rework&lt;br&gt;• Engineering timelines became predictable&lt;br&gt;• Product page and task linkage aligned business and engineering&lt;br&gt;• JIRA auto‑release notes empowered sales and CS&lt;br&gt;• Engineering teams understood each other’s status and availability&lt;br&gt;• Defined blockers prevented last‑minute surprises&lt;br&gt;• Centralized documentation reduced search time and improved support&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;br&gt;The six‑month estimate collapsed into ten weeks because every team finally moved in the same direction.&lt;/p&gt;




&lt;h1&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Product success is built on teamwork across all functions.&lt;/strong&gt; Improving the product development timeline required alignment from product design to feature validation, business requirement transformation, testing, release, customer success, post delivery support, and customer feedback. When every team shares the same truth, the entire organisation accelerates.&lt;/p&gt;

&lt;p&gt;This experience, even though it happened a few years ago, remains fully relevant in today’s &lt;strong&gt;AI-driven business era.&lt;/strong&gt; It is a clear example of how &lt;strong&gt;Scrum and Agile principles&lt;/strong&gt; solve real operational problems. The story shows that &lt;strong&gt;delays rarely come from tools or technical expertise.&lt;/strong&gt; They come from the &lt;strong&gt;absence of a disciplined operational methodology.&lt;/strong&gt; When teams follow a repeatable Agile process supported by a single source of truth, technology becomes an accelerator rather than a bottleneck.  &lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;What’s Next&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;This article presents the high level view of how Scrum and operational alignment improved a cross regional cloud migration timeline. In the next article, I will walk through the detailed implementation across each phase, and explain the specific problems the team encountered, how those challenges were solved, and how the Scrum workflow kept the project moving with clarity from design to delivery.  &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth. &lt;br&gt;&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;  &lt;/p&gt;



&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/project-management-story-leading-a-cross-regional-cloud-migration/" rel="noopener noreferrer"&gt;A Pre‑AI Lesson for the AI Era: Scrum in a Cross‑Regional Cloud Migration Delivered in Ten Weeks&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>management</category>
      <category>agile</category>
    </item>
    <item>
      <title>Cloud Summit 2026</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Mon, 04 May 2026 19:56:54 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/cloud-summit-2026-1opf</link>
      <guid>https://forem.com/jonathan78wong/cloud-summit-2026-1opf</guid>
      <description>&lt;p&gt;I spent the day at &lt;a href="https://awsday.ca/?city=vancouver" rel="noopener noreferrer"&gt;Cloud Summit&lt;/a&gt; and it turned into one of the busiest days I have had in the Vancouver tech scene. The event pulled together a wide mix of local engineers and builders, and most of the sessions leaned heavily into AI. The conversation has clearly shifted from abstract excitement to real architectural patterns and operational challenges.&lt;/p&gt;

&lt;p&gt;One of the most interesting sessions was the deep dive into the Kubernetes &lt;a href="https://github.com/kubernetes-sigs/wg-ai-gateway" rel="noopener noreferrer"&gt;&lt;em&gt;AI Gateway&lt;/em&gt;&lt;/a&gt; work. The working group is shaping a consistent way to handle AI‑specific traffic at the gateway layer, including protocol awareness, egress controls, payload inspection, routing and guardrails. It is still early and mostly proposals and prototypes, but the direction is promising for teams trying to standardize how they expose and secure inference workloads.&lt;/p&gt;

&lt;p&gt;Another standout was the talk on cloud billing complexity, &lt;em&gt;&lt;a href="https://gist.github.com/nikosmeds/bdefa715d068e981a8fd402bf1388501" rel="noopener noreferrer"&gt;The Cloud Bill Nobody Could Explain&lt;/a&gt;&lt;/em&gt;. The speaker walked through a real incident investigation where even experienced teams struggled to explain unexpected cost behaviour. The supporting tools were practical, from VPC flow log analyzers that map traffic to namespaces, to exporters that surface per‑namespace cloud cost metrics, to utilities for finding orphaned disks across providers. It was a reminder that cost transparency is still one of the hardest engineering problems in cloud.&lt;/p&gt;

&lt;p&gt;The free snacks and pizza made the long day easier, and if you also joined the AWS Workshop on Introduction to Claude Code on AWS and stayed for the after party, you probably felt the same mix of learning, networking and exhaustion.&lt;/p&gt;

&lt;p&gt;Vancouver’s cloud and AI community is moving fast, and days like this make it clear how much is happening across the ecosystem.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth.&lt;br&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/cloud-summit-2026/" rel="noopener noreferrer"&gt;Cloud Summit 2026&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
    </item>
    <item>
      <title>Autodata and the New Data Pipeline: Why Meta’s Agentic Data Scientist Matters More Than the Model</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Sun, 03 May 2026 01:52:55 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/autodata-and-the-new-data-pipeline-why-metas-agentic-data-scientist-matters-more-than-the-model-48i0</link>
      <guid>https://forem.com/jonathan78wong/autodata-and-the-new-data-pipeline-why-metas-agentic-data-scientist-matters-more-than-the-model-48i0</guid>
      <description>&lt;p&gt;The industry has spent years debating model size, architectures, and inference tricks. But Meta’s latest research makes something very clear: &lt;strong&gt;AI success is still determined by four elements: prompt, grounding, training, and fine tuning, and three of them are fundamentally data problems.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Which means the real bottleneck isn’t compute. It’s data quality, data structure, and data digestion.&lt;/p&gt;

&lt;p&gt;Meta’s new &lt;em&gt;Autodata&lt;/em&gt; framework reframes this bottleneck entirely. Instead of treating data as a static asset that humans must continuously curate, Autodata turns the model itself into an &lt;strong&gt;autonomous data scientist&lt;/strong&gt;, capable of generating, analyzing, and iterating on its own training data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2acg31m8e2j0h5qk5k34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2acg31m8e2j0h5qk5k34.png" width="800" height="391"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure: Autodata pipeline. The framework employs an autonomous agent that emulates the role of a data scientist, iteratively generating data, conducting qualitative inspection and quantitative performance evaluation, synthesizing insights, and updating the data-generation recipe. The agent itself can be trained to be better at the data scientist task using the same criteria used in the inner loop. This cyclical process aims to progressively enhance data quality; the diagram depicts the general workflow underlying possible instantiations. RAM @ Meta AI | A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is not “synthetic data 2.0.”&lt;br&gt;&lt;br&gt;
This is a shift in how data pipelines operate.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. Why Data Quality Still Determines AI Success&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Prompting and grounding matter, but they sit on top of the real foundation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the &lt;strong&gt;training data&lt;/strong&gt; that shapes the model’s baseline&lt;/li&gt;
&lt;li&gt;the &lt;strong&gt;fine‑tuning data&lt;/strong&gt; that aligns it&lt;/li&gt;
&lt;li&gt;the &lt;strong&gt;evaluation data&lt;/strong&gt; that determines whether it’s improving&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three of the four levers that determine AI performance are data‑centric.&lt;br&gt;&lt;br&gt;
And historically, all three required &lt;strong&gt;human data scientists&lt;/strong&gt; — expensive, slow, and difficult to scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. The Traditional Data Scientist Bottleneck&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Data scientists have always played the critical role of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;curating high‑quality examples&lt;/li&gt;
&lt;li&gt;grounding tasks in real documents&lt;/li&gt;
&lt;li&gt;designing evaluation rubrics&lt;/li&gt;
&lt;li&gt;iterating based on model failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This work is high‑cost because it requires &lt;strong&gt;human judgment&lt;/strong&gt;, domain knowledge, and careful harness engineering. Even synthetic data pipelines still depended on humans to design prompts, filters, and quality checks.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Meta’s Autodata: A Model That Trains Itself With Data It Creates&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Autodata changes the loop.&lt;br&gt;&lt;br&gt;
Instead of single‑pass synthetic generation, the model now:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Creation.&lt;/strong&gt; The agent grounds on the provided documents and uses its existing skills and compute to create training or evaluation data. It can repeat this step after each analysis cycle to incorporate new learnings and improve the data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Analysis.&lt;/strong&gt; The agent reviews the data it created to understand correctness, quality, difficulty, and diversity. These learnings feed directly back into the next creation cycle until the data reaches the required standard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Scientist Loop.&lt;/strong&gt; The agent cycles between creation and analysis until it is satisfied with the final dataset. Guardrails can be applied to prevent reward hacking, and later generations of agents can build on earlier learnings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta Optimization.&lt;/strong&gt; The agent itself can be improved through autoresearch or meta‑harness optimization so it becomes better at performing the data scientist role over time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Meta’s implementation uses a multi agent setup with a Challenger, a Weak Solver, a Strong Solver, and a Verifier to ensure the generated data is neither trivial nor impossible. The result is &lt;strong&gt;higher quality training data than classical Self Instruct or CoT Self Instruct methods.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the first time we’ve seen a closed‑loop, feedback‑driven data creation system that mirrors how human data scientists work.&lt;/p&gt;
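&lt;p&gt;Stripped of the multi agent machinery, the core data scientist loop can be sketched in a few lines of Python. This is an illustrative toy, not Meta’s code: &lt;code&gt;create_examples&lt;/code&gt;, &lt;code&gt;analyze&lt;/code&gt;, and the quality bar are invented stand ins for the real generation, inspection, and verification stages.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Report:
    quality: float   # aggregate score from qualitative + quantitative checks
    summary: str     # learnings to feed into the next creation round

def create_examples(documents, learnings):
    # Stand-in for grounded data creation (step 1).
    return [f"Q/A grounded on {d}, round {len(learnings)}" for d in documents]

def analyze(candidates, learnings):
    # Stand-in for data analysis (step 2): in the real system this scores
    # correctness, difficulty, and diversity; here quality rises per round.
    quality = min(1.0, 0.5 + 0.1 * len(learnings))
    return Report(quality, f"quality={quality:.1f}, n={len(candidates)}")

def autodata_loop(documents, max_rounds=10, quality_bar=0.8):
    """Data scientist loop (step 3): create, analyze, repeat until good."""
    learnings, dataset = [], []
    for _ in range(max_rounds):
        candidates = create_examples(documents, learnings)
        report = analyze(candidates, learnings)
        learnings.append(report.summary)   # insights carry across rounds
        if report.quality >= quality_bar:
            dataset = candidates
            break
    return dataset, learnings

dataset, learnings = autodata_loop(["paper.pdf"])
```

&lt;p&gt;In the real system each stand in would be an agent call, and guardrails on the quality signal are what keep the loop from reward hacking.&lt;/p&gt;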

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwtgje6nljce2oqcot2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwtgje6nljce2oqcot2a.png" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure: Example agent trajectory on a CS research paper, showing the final accepted round (round 6) after 5 failed attempts. The Main Agent reflects on prior failures and prompts the Challenger Agent to generate a new question. The example is evaluated by Weak (4B) and Strong (397B) solvers, scored by a Verifier/Judge across 12 rubric criteria. Round 6 achieves a 45% gap (weak 48% vs. strong 93%) and is accepted. Learnings from rounds 1–5 feed back into the Main Agent’s refinement strategy. RAM @ Meta AI | A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM)&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Reducing the Cost of Human‑Grounded Data&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The cost of training data has always come from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;human annotation&lt;/li&gt;
&lt;li&gt;human‑designed prompts&lt;/li&gt;
&lt;li&gt;human‑designed evaluation rubrics&lt;/li&gt;
&lt;li&gt;human‑driven iteration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Autodata reduces all four.&lt;/p&gt;

&lt;p&gt;Meta’s meta‑optimization layer even shows that &lt;strong&gt;the agent can improve its own instructions&lt;/strong&gt;, discovering better harness logic without human intervention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faaw0i2pd9vtedw9gxt2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faaw0i2pd9vtedw9gxt2h.png" width="800" height="591"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure: Meta-optimization of the data scientist agent. An outer optimization loop evaluates the agent’s harness on training papers, analyzes failure trajectories to identify systematic weaknesses (e.g., context leakage), implements harness modifications via a code-editing agent, and re-evaluates on held-out validation papers. Changes are accepted only if they improve the weak-strong separation rate. This process improved validation pass rate from 12.8% to 42.4% over 126 accepted iterations out of 233 total. RAM @ Meta AI | A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the part that matters:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;The model is not just generating data, it is improving the rules for generating data.&lt;/strong&gt;&lt;/p&gt;
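&lt;p&gt;The outer loop described in the figure can be read as a greedy accept if better search over harness changes. The sketch below is hypothetical Python, not Meta’s harness: &lt;code&gt;separation&lt;/code&gt; and &lt;code&gt;propose&lt;/code&gt; are invented stand ins for the validation metric and the code editing agent.&lt;/p&gt;

```python
import random

def separation(harness):
    # Toy stand-in for the validation metric: the weak-strong
    # separation rate peaks when the harness "strictness" is near 0.7.
    return 1.0 - abs(harness - 0.7)

def propose(harness):
    # Stand-in for the code-editing agent proposing a harness change.
    return harness + random.uniform(-0.1, 0.1)

def meta_optimize(harness, iterations=200):
    """Greedy loop: keep a harness change only if validation improves."""
    best = separation(harness)
    accepted = 0
    for _ in range(iterations):
        candidate = propose(harness)
        score = separation(candidate)
        if score > best:               # accept only genuine improvements
            harness, best, accepted = candidate, score, accepted + 1
    return harness, best, accepted

random.seed(0)                         # deterministic toy run
harness, best, accepted = meta_optimize(0.1)
```

&lt;p&gt;Proposals that do not raise the held out score are simply discarded, which mirrors the accept criterion in the research: changes survive only if they widen the weak strong separation.&lt;/p&gt;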




&lt;h2&gt;
  
  
  &lt;strong&gt;5. AI as the Data Scientist&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Meta’s results show that an AI data scientist can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;enforce paper‑specific insights&lt;/li&gt;
&lt;li&gt;prevent context leakage&lt;/li&gt;
&lt;li&gt;design structured rubrics&lt;/li&gt;
&lt;li&gt;tune difficulty levels&lt;/li&gt;
&lt;li&gt;widen capability gaps between weak and strong solvers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All without manual harness engineering.&lt;/p&gt;

&lt;p&gt;This is the beginning of &lt;strong&gt;agentic data operations&lt;/strong&gt;, where the model becomes an active participant in its own training pipeline.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;6. A Shift in the Data Operations Pipeline&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Autodata changes the relationship between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;data creation&lt;/li&gt;
&lt;li&gt;data evaluation&lt;/li&gt;
&lt;li&gt;model training&lt;/li&gt;
&lt;li&gt;model alignment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of a linear pipeline, we now have a &lt;strong&gt;self‑improving loop&lt;/strong&gt; where the model continuously refines the data that refines the model.&lt;/p&gt;

&lt;p&gt;This transforms data operations from a human‑driven workflow into a &lt;strong&gt;compute‑driven optimization problem&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;7. Rethinking the Four Elements of AI Success&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If prompt, grounding, training, and fine‑tuning determine AI success, and three of them are data‑centric, then Autodata forces us to rethink how these elements interact.&lt;/p&gt;

&lt;p&gt;We now have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;new external grounding&lt;/strong&gt; (the model grounds itself on documents)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;new internal grounding&lt;/strong&gt; (the model evaluates its own reasoning)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;new training loops&lt;/strong&gt; (data improves as compute increases)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;new fine‑tuning strategies&lt;/strong&gt; (agent‑generated datasets outperform human‑designed ones)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The implication is simple:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Data pipelines are becoming agentic systems, not manual processes.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Where This Leads&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Autodata is not just a research milestone.&lt;br&gt;&lt;br&gt;
It signals a future where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;models generate their own training curriculum&lt;/li&gt;
&lt;li&gt;data scientists supervise strategy, not samples&lt;/li&gt;
&lt;li&gt;data quality scales with compute, not headcount&lt;/li&gt;
&lt;li&gt;grounding becomes dynamic, not static&lt;/li&gt;
&lt;li&gt;fine‑tuning becomes continuous, not episodic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The next wave of AI performance will come from &lt;strong&gt;agentic data pipelines&lt;/strong&gt;, not larger models.&lt;/p&gt;

&lt;p&gt;And Meta just opened the door.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; Meta Autodata research &lt;a href="https://facebookresearch.github.io/RAM/blogs/autodata/" rel="noopener noreferrer"&gt;https://facebookresearch.github.io/RAM/blogs/autodata/&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth.&lt;br&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/autodata-and-the-new-data-pipeline-why-metas-agentic-data-scientist-matters-more-than-the-model/" rel="noopener noreferrer"&gt;Autodata and the New Data Pipeline: Why Meta’s Agentic Data Scientist Matters More Than the Model&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>facebook</category>
    </item>
    <item>
      <title>IEC BC x ISACA Vancouver Cybersecurity Networking Event</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Thu, 30 Apr 2026 23:04:57 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/iec-bc-x-isaca-vancouver-cybersecurity-networking-event-nml</link>
      <guid>https://forem.com/jonathan78wong/iec-bc-x-isaca-vancouver-cybersecurity-networking-event-nml</guid>
      <description>&lt;p&gt;At the IEC BC x ISACA Vancouver Cybersecurity Networking Event this week, newcomers, experienced professionals, and employers had the chance to connect meaningfully. Thank you to &lt;a href="https://www.linkedin.com/company/iecbc/" rel="noopener noreferrer"&gt;Immigrant Employment Council of BC&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/isaca-vancouver-chapter/" rel="noopener noreferrer"&gt;ISACA Vancouver Chapter&lt;/a&gt; and &lt;a href="https://www.linkedin.com/company/vancouver-community-college/" rel="noopener noreferrer"&gt;Vancouver Community College (VCC)&lt;/a&gt; for making this possible.  &lt;/p&gt;

&lt;p&gt;The speaker shared a clear breakdown of certification pathways in cybersecurity.  &lt;/p&gt;

&lt;p&gt;Foundational certifications include ISC2 Certified in Cybersecurity (CC).&lt;br&gt;&lt;br&gt;
Intermediate tracks include CISA, CCSP, and AWS Security.&lt;br&gt;&lt;br&gt;
Senior level designations include CISSP and CISM, which reflect broader responsibility across governance, architecture, and organizational risk.  &lt;/p&gt;

&lt;p&gt;The speaker also highlighted the challenges overseas trained professionals face in the Vancouver job market. Many arrive with strong technical backgrounds, yet still need to navigate local hiring expectations, credential recognition, and the persistent “Canadian experience” requirement. Hearing this acknowledged openly was valuable for many attendees.  &lt;/p&gt;

&lt;p&gt;Another insight from the talk focused on how companies are adopting AI inside their organizational structure. The risk is not only technical. For SMEs with fewer than 50 employees, restructuring teams too quickly around AI can create instability and unclear ownership.  &lt;/p&gt;

&lt;p&gt;One final note for anyone exploring cybersecurity.&lt;br&gt;&lt;br&gt;
The free ISC2 Certified in Cybersecurity (CC) enrollment ends on May 20, 2026.  &lt;/p&gt;

&lt;p&gt;If you are considering a first step into the field, this is a good opportunity before the window closes.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth.&lt;br&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/iec-bc-x-isaca-vancouver-cybersecurity-networking-event/" rel="noopener noreferrer"&gt;IEC BC x ISACA Vancouver Cybersecurity Networking Event&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>networking</category>
    </item>
    <item>
      <title>My Journey to the Google Cloud Get Certified: From Fundamentals to Generative AI</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Thu, 30 Apr 2026 22:19:03 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/my-journey-to-the-google-cloud-get-certified-from-fundamentals-to-generative-ai-31hc</link>
      <guid>https://forem.com/jonathan78wong/my-journey-to-the-google-cloud-get-certified-from-fundamentals-to-generative-ai-31hc</guid>
      <description>&lt;p&gt;Deciding to get Google Cloud certified is one thing; finding a structured path to get there is another. I recently hit a major milestone in my journey: completing &lt;strong&gt;Stage 2 of the Google Cloud Get Certified program&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By earning five specific skill badges, I’ve officially unlocked my free exam voucher. These weren’t just theoretical modules—they were intense, hands-on labs that forced me to build, secure, and automate real-world cloud environments. Here is how I structured my learning path to cross the finish line:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build a Secure Google Cloud Network&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I started with the foundation. This badge focused on the defensive side of cloud architecture. I configured VPC firewalls, set up private access, and ensured that the network followed the “least privilege” principle to protect data from the ground up.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Set Up an App Dev Environment on Google Cloud&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the network was secure, I moved to the application layer. This badge covered the essential tools for a developer’s workflow, including setting up development clusters and managing the full lifecycle of an app within the Google Cloud ecosystem.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Implement Load Balancing on Compute Engine&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With the environment ready, I had to ensure it could handle traffic. I practiced configuring various Google Cloud load balancers (HTTP(S) and TCP/UDP) to distribute traffic across Compute Engine instances, ensuring high availability and performance.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Build Infrastructure with Terraform on Google Cloud&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After mastering the manual setups, I moved to &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;. This badge acted as the bridge, taking the networking and compute skills I’d learned and teaching me how to provision them automatically using Terraform configuration files for repeatable, version-controlled deployments.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Create Your First Gemini Enterprise Application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Finally, I explored the future of the cloud: &lt;strong&gt;Generative AI&lt;/strong&gt;. This badge involved integrating Google’s Gemini large language models into enterprise-level applications, showing how AI can be layered on top of a robust infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Road Ahead&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These labs provided the practical “muscle memory” needed for the real world. Now that the voucher is unlocked, I can take the final step toward official certification!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are you working on GCP networking or security right now? I’d love to compare notes or hear about your approach!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlhefnczokzvlawp63tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlhefnczokzvlawp63tm.png" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth.&lt;br&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/google-cloud-get-certified-stage-2-1/" rel="noopener noreferrer"&gt;My Journey to the Google Cloud Voucher: From Fundamentals to Generative AI&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>certification</category>
      <category>google</category>
    </item>
    <item>
      <title>Sakura Migration</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Sun, 26 Apr 2026 05:03:15 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/sakura-migration-m</link>
      <guid>https://forem.com/jonathan78wong/sakura-migration-m</guid>
      <description>&lt;p&gt;Not long ago I visited UBC, where the Sakura blossoms were in full bloom again.&lt;/p&gt;

&lt;p&gt;It reminded me of the first time I saw them decades ago in Japan, when I was working inside a 200‑person IT department. That was the on‑prem era. Everything lived in racks and server rooms. Everything felt stable, predictable, and physical. I thought my career would stay that way.&lt;/p&gt;

&lt;p&gt;But life moves. Sometimes quietly. Sometimes without a plan.&lt;/p&gt;

&lt;p&gt;Later I met Notey, and that was the beginning of my startup journey. I moved from enterprise structure to startup speed. From on‑prem systems to the cloud. From a single role to wearing multiple hats. That was my first migration, not geographical but mental. A shift in how I saw technology, teams, and myself. &lt;/p&gt;

&lt;p&gt;I moved to Boxful next. Built an MVP that helped secure funding. Shifted from development to product. From proprietary stacks to open source. From execution to ownership. Another migration. Another environment. Another version of myself.&lt;/p&gt;

&lt;p&gt;After that chapter, I joined the venture builder Flatiron, helping capital create new startups. Moving from building one company to helping many. From operator to enabler. From solving problems to designing systems that solve problems. Another migration.&lt;/p&gt;

&lt;p&gt;Eventually, the Startup Visa program brought me to Vancouver. From East Asia to North America. From familiar ground to a new ecosystem. From the world I grew up in to the world I chose. A migration across continents, but also across identity.&lt;/p&gt;

&lt;p&gt;Walking past the students on UBC’s campus takes me back to when I was an undergraduate, excited about my first Yahoo email. At the time, I believed I would be happy coding every day. But the world kept shifting. We moved from desktop to cloud, then to mobile, and now to AI, where skills can be downloaded and coding itself is being redefined. Another migration. Another environment. Another identity.&lt;/p&gt;

&lt;p&gt;Looking back, none of these migrations were planned. They happened one step at a time. Each one pulled me into a new environment and forced me to grow in ways I never anticipated.&lt;/p&gt;

&lt;p&gt;There were migrations in scale. Migrations in technology. Migrations in geography. Migrations in identity.&lt;/p&gt;

&lt;p&gt;Those who pause beneath the Sakura on UBC’s Memorial Road often say the trees were brought from Japan and replanted here. They blossom every year in a way you cannot find anywhere else. They did not choose the journey, yet they grew. And they became something unique because of the journey, not in spite of it.&lt;/p&gt;

&lt;p&gt;Are you someone shaped by migrations?&lt;/p&gt;

&lt;p&gt;What has been your latest migration?&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/sakura-migration/" rel="noopener noreferrer"&gt;Sakura Migration   &lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>lifestyle</category>
    </item>
    <item>
      <title>The Modernization Journey: How to Take a Legacy System From Zero Trust to AI Ready Without Rewrites or Downtime and Big Costs</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Fri, 24 Apr 2026 22:58:21 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/the-modernization-journey-how-to-take-a-legacy-system-from-zero-trust-to-ai-ready-without-rewrites-486j</link>
      <guid>https://forem.com/jonathan78wong/the-modernization-journey-how-to-take-a-legacy-system-from-zero-trust-to-ai-ready-without-rewrites-486j</guid>
      <description>&lt;p&gt;Modernization is often misunderstood. Many organizations believe it requires rewriting their legacy systems, replacing entire architectures, or enduring long periods of downtime. In reality, modernization is a sequence. It begins with security, continues with infrastructure uplift, and ends with AI powered capabilities that operate safely around the legacy core.&lt;/p&gt;

&lt;p&gt;This journey illustrates how we helped a client transform a fragile legacy PHP system into a secure, compliant, AI ready platform without rewriting the application and without interrupting production. The journey is documented across a series of articles that cover the business perspective, the technical deep dives, and the AI implementation. This final piece brings the entire story together.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Project Background: Why Modernization Became Mandatory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The client wanted to elevate their product and expand into new regulated markets. To do that, they needed to meet compliance standards such as SOC 2 and PCI DSS. These frameworks require strict identity controls, auditability, and a security posture that legacy systems rarely provide. Compliance was not a technical preference. It was a business requirement for entering new industries.&lt;/p&gt;

&lt;p&gt;At the same time, the client wanted to introduce AI powered features into their product. They envisioned multilingual document understanding, intelligent automation, and natural language interfaces. But AI cannot be safely added to a system that lacks identity boundaries, secure data flows, or modern compute layers. Without the right foundation, AI becomes a risk multiplier.&lt;/p&gt;

&lt;p&gt;This created a clear sequence. Secure the current system. Achieve compliance readiness. Modernize the environment. Prepare the architecture for AI. Then introduce AI features safely. This is why the project began with Zero Trust rather than AI.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Business Case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The business perspective behind this transformation is detailed in the article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.jonanata.com/how-i-delivered-zero-trust-security-for-a-clients-legacy-php-system-without-rewrites-downtime-or-big-costs/" rel="noopener noreferrer"&gt;How I Delivered Zero Trust Security for a Client’s Legacy PHP System Without Rewrites, Downtime, or Big Costs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core challenge was simple. The legacy system was too critical to rewrite and too fragile to modify. It supported daily operations and revenue generating workflows. A rewrite would introduce risk, cost, and uncertainty. A multi-year migration could easily fail. Even a successful rewrite could disrupt the business.&lt;/p&gt;

&lt;p&gt;The constraints were clear: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No application revamp &lt;/li&gt;
&lt;li&gt;No downtime&lt;/li&gt;
&lt;li&gt;Free or low‑cost solutions only &lt;/li&gt;
&lt;li&gt;Compliance‑aligned security improvements&lt;/li&gt;
&lt;li&gt;Immediate business value &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The smarter approach was to modernize around the legacy system. Strengthen the environment. Improve the security posture. Introduce modern cloud capabilities. Build new features outside the legacy core. This approach delivered value quickly and reduced risk. It also created a path where AI could be added safely and incrementally.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Zero Trust Foundation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Zero Trust foundation became the anchor of the entire transformation. It provided the identity first model required for SOC 2 and PCI DSS. It removed public exposure. It enforced access boundaries. It created a secure perimeter around the legacy system without modifying the application code.&lt;/p&gt;

&lt;p&gt;The technical implementation is documented in two deep dive articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.jonanata.com/technical-deep-dive-how-i-delivered-zero-trust-security-for-a-clients-legacy-php-system-without-rewrites-downtime-or-big-costs-part-1/" rel="noopener noreferrer"&gt;Technical Deep Dive: How I Delivered Zero Trust Security for a Client’s Legacy PHP System Without Rewrites, Downtime, or Big Costs Part 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.jonanata.com/technical-deep-dive-how-i-delivered-zero-trust-security-for-a-clients-legacy-php-system-without-rewrites-downtime-or-big-costs-part-2/" rel="noopener noreferrer"&gt;Technical Deep Dive: How I Delivered Zero Trust Security for a Client’s Legacy PHP System Without Rewrites, Downtime, or Big Costs Part 2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The foundation included VPC only networking, IAM based access to RDS, S3, and CloudWatch, passwordless authentication, and a hardened runtime. Every component was isolated. Every request was authenticated. Every action was logged. The system became secure by design.&lt;/p&gt;

&lt;p&gt;This foundation made compliance achievable and created the conditions required for safe modernization.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Modernization Path&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the security perimeter was in place, we modernized the environment around the legacy system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application code remained untouched. Instead, we uplifted the infrastructure into modern cloud primitives. &lt;/li&gt;
&lt;li&gt;Introduced observability, identity enforcement, and automation. &lt;/li&gt;
&lt;li&gt;Replaced brittle components with managed services. &lt;/li&gt;
&lt;li&gt;Created a modernization perimeter that isolated risk and allowed new capabilities to be added without affecting production.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach delivered immediate improvements. The system became more stable. Operations became more predictable. Compliance became measurable. And the environment became ready for the next stage.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The AI Ready Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI readiness is not about adding a model. It is about preparing the system to support AI safely. That requires an architecture that is event driven, API first, identity enforced, and capable of handling structured and unstructured data.&lt;/p&gt;

&lt;p&gt;However, there was a major constraint: the client’s primary business entity is registered in an &lt;strong&gt;unsupported region&lt;/strong&gt; for these advanced AI models. The only viable path was to leverage their overseas entity to create a new AWS account in a supported region and integrate it with their existing environment.&lt;/p&gt;

&lt;p&gt;The constraints were non‑negotiable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multilingual inference&lt;/li&gt;
&lt;li&gt;multi‑modal document processing&lt;/li&gt;
&lt;li&gt;cross‑account AWS integration&lt;/li&gt;
&lt;li&gt;cross‑region invocation&lt;/li&gt;
&lt;li&gt;access to advanced LLMs such as Claude&lt;/li&gt;
&lt;li&gt;strong security controls for sensitive customer data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We introduced a &lt;strong&gt;two‑sided trust‑granting model&lt;/strong&gt; and &lt;strong&gt;network isolation&lt;/strong&gt; to secure the APIs and data pipelines that could feed Bedrock or other models. We ensured that every AI workflow operated within the Zero Trust perimeter. The result was an architecture that could support AI features without exposing the legacy system to new risks.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Lambda and Bedrock Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI execution layer is documented in the article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.jonanata.com/how-i-delivered-a-cross-account-cross-region-multilingual-multi-modal-aws-bedrock-solution-in-a-zero-trust-environment/" rel="noopener noreferrer"&gt;How I Delivered a Cross Account Cross Region Multilingual Multi Modal AWS Bedrock Solution in a Zero Trust Environment&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lambda acted as a &lt;strong&gt;region proxy&lt;/strong&gt;. Bedrock provided secure, enterprise grade AI capabilities. Together, they enabled new features without touching the legacy code.&lt;/p&gt;

&lt;p&gt;We implemented multilingual understanding, multimodal document analysis, workflow automation, and intelligent assistants. The legacy system remained stable. The AI layer delivered new value. The business moved forward without risk.&lt;/p&gt;

&lt;p&gt;The detailed implementation of the AI pipeline is documented in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.jonanata.com/building-a-multilingual-multi-modal-document-analysis-pipeline-with-aws-lambda-and-claude-sonnet-4-6/" rel="noopener noreferrer"&gt;Building a Multilingual Multi Modal Document Analysis Pipeline with AWS Lambda and Claude Sonnet 4.6&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article explains how the multilingual and multimodal capabilities were orchestrated using Lambda, Bedrock, and secure data flows inside the Zero Trust perimeter.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Outcomes and Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The transformation delivered measurable results.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client achieved a compliance ready security posture &lt;/li&gt;
&lt;li&gt;The system became more stable and more observable &lt;/li&gt;
&lt;li&gt;The legacy system remained untouched &lt;/li&gt;
&lt;li&gt;No rewrite, no downtime, no big costs &lt;/li&gt;
&lt;li&gt;The business expanded into new regulated markets &lt;/li&gt;
&lt;li&gt;The product gained new AI capabilities &lt;/li&gt;
&lt;li&gt;The entire journey followed a repeatable sequence that can be applied to any legacy environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core lesson is simple. Modernization is not a single project. It is a sequence.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secure first &lt;/li&gt;
&lt;li&gt;Modernize the environment &lt;/li&gt;
&lt;li&gt;Prepare the architecture &lt;/li&gt;
&lt;li&gt;Add AI safely &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach reduces risk, accelerates delivery, and creates long term value.&lt;/p&gt;

&lt;p&gt;Modernization is not a rewrite. It is a journey. And when executed in the right order, it turns a legacy system into an AI ready platform without disruption.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth.&lt;br&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/the-modernization-journey-how-to-take-a-legacy-system-from-zero-trust-to-ai-ready-without-rewrites-or-downtime-and-big-costs/" rel="noopener noreferrer"&gt;The Modernization Journey: How to Take a Legacy System From Zero Trust to AI Ready Without Rewrites or Downtime and Big Costs&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>cybersecurity</category>
      <category>zerotrust</category>
    </item>
    <item>
      <title>Building a Multilingual, Multi‑Modal Document Analysis Pipeline with AWS Lambda and Claude Sonnet 4.6</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Wed, 22 Apr 2026 20:16:46 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/building-a-multilingual-multi-modal-document-analysis-pipeline-with-aws-lambda-and-claude-sonnet-4dp6</link>
      <guid>https://forem.com/jonathan78wong/building-a-multilingual-multi-modal-document-analysis-pipeline-with-aws-lambda-and-claude-sonnet-4dp6</guid>
      <description>&lt;p&gt;In my last article &lt;a href="https://blog.jonanata.com/how-i-delivered-a-cross-account-cross-region-multilingual-multi-modal-aws-bedrock-solution-in-a-zero-trust-environment/" rel="noopener noreferrer"&gt;&lt;strong&gt;How I Delivered a Cross‑Account, Cross‑Region, Multilingual, Multi‑Modal AWS Bedrock Solution in a Zero Trust Environment&lt;/strong&gt;&lt;/a&gt;, I walked through the architecture that makes secure, multilingual, multi‑modal AI processing possible across AWS accounts and regions. That foundation solved the infrastructure challenge, but it left one critical question unanswered: &lt;strong&gt;how does the system actually understand real documents?&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;This article picks up exactly where the last one ended. Now that the pipeline is built, it’s time to open the black box and examine the Lambda Python function that turns raw files into structured, multilingual insights. This is where OCR, model selection, prompt engineering, and Claude Sonnet 4.6 all come together to form the intelligence layer of the entire solution.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Selecting the Right LLM Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core requirement was clear: the model must handle &lt;strong&gt;multilingual content (English + Chinese)&lt;/strong&gt; and &lt;strong&gt;multi‑modal inputs&lt;/strong&gt; including text files, PDFs, and images. Several AWS‑native and third‑party models were evaluated.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluation Results&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Textract + Titan&lt;/strong&gt; performed OCR well for English, but Titan consistently &lt;strong&gt;ignored Chinese content&lt;/strong&gt;, making it unsuitable for bilingual documents.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Amazon Nova&lt;/strong&gt; attempted to interpret Chinese text but produced &lt;strong&gt;random guesses&lt;/strong&gt; with a very low success rate.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Qwen3&lt;/strong&gt; handled Chinese better, with an acceptable success rate for mixed‑language documents.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Claude Sonnet 4.6&lt;/strong&gt; delivered &lt;strong&gt;5× higher accuracy than Qwen3&lt;/strong&gt; while also offering &lt;strong&gt;lower total cost&lt;/strong&gt;. It consistently extracted structured fields correctly across English‑only, Chinese‑only, and mixed‑language documents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Decision&lt;/strong&gt;  &lt;br&gt;&lt;strong&gt;Claude Sonnet 4.6 was selected&lt;/strong&gt; due to its superior multilingual reasoning, stable multi‑modal performance, and cost efficiency.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Activating Claude Sonnet 4.6 in AWS Bedrock&lt;/strong&gt;  &lt;br&gt;Before the Lambda function can invoke Claude, the model must be activated in the AWS account.&lt;/p&gt;

&lt;p&gt;Step 1: Submit the Use Case &lt;br&gt;Submit the required use case form in the Bedrock console. Approval typically takes &lt;strong&gt;15–30 minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Step 2: Test in the Model Playground &lt;br&gt;Run a first inference in the Claude Sonnet 4.6 playground to confirm access.&lt;/p&gt;

&lt;p&gt;Step 3: Resolve Marketplace Permission Errors  &lt;br&gt;Some accounts encounter the following error during the first invocation: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;em&gt;“Model access is denied due to IAM user or service role not authorized to perform the required AWS Marketplace actions (aws-marketplace:ViewSubscriptions, aws-marketplace:Subscribe)...”&lt;/em&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To resolve this, attach the following IAM policy to the IAM user or role:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "Effect": "Allow",
  "Action": [
    "aws-marketplace:Subscribe",
    "aws-marketplace:ViewSubscriptions"
  ],
  "Resource": "*"
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After applying the policy, retry the model activation. Once the subscription completes, the Lambda function can invoke Claude normally.  &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Lambda Function Architecture&lt;/strong&gt;  &lt;br&gt;The Lambda function is the operational heart of the pipeline. It performs ingestion, OCR, reasoning, and structured extraction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handler&lt;/strong&gt;  &lt;br&gt;The handler contains the main execution logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Receive request payload from the EC2 caller&lt;/li&gt;



&lt;li&gt;Download the uploaded file from the S3 bucket&lt;/li&gt;



&lt;li&gt;Perform OCR text extraction&lt;/li&gt;



&lt;li&gt;Invoke Claude Sonnet 4.6 for reasoning and field extraction&lt;/li&gt;



&lt;li&gt;Clean and normalize the output&lt;/li&gt;



&lt;li&gt;Return the structured JSON result back to the EC2 instance  &lt;/li&gt;
&lt;/ul&gt;
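The steps above can be condensed into a minimal handler sketch. This is illustrative only: the payload field names ("bucket", "key") and the analyze_document helper are assumptions, not the production schema, and the S3 client is injected as a parameter purely to keep the sketch easy to exercise in isolation.

```python
import json


def handler(event, context, s3_client=None, analyze_document=None):
    """Minimal sketch of the Lambda handler flow (illustrative names)."""
    # 1. Read the request payload sent by the EC2 caller
    bucket, key = event["bucket"], event["key"]

    # 2. Download the uploaded file from the S3 bucket
    s3_doc_in_bytes = s3_client.get_object(Bucket=bucket, Key=key)["Body"].read()

    # 3-5. OCR, Claude Sonnet 4.6 invocation, and output cleaning
    fields = analyze_document(s3_doc_in_bytes, key)

    # 6. Return the structured JSON result back to the EC2 instance
    return {"statusCode": 200, "body": json.dumps(fields)}
```

In a real deployment the S3 client would be a boto3 client created at module scope so it is reused across warm invocations.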

&lt;p&gt;Claude handles PDFs and images differently during OCR and target‑field extraction, so the Lambda logic must normalize and clean the model’s output to ensure consistent, structured results:  &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# s3_doc_in_bytes: the S3 document in bytes format
# content_type: the content type matched by the S3 document extension

# convert the bytes to a Claude-supported base64 string
doc_base64 = base64.standard_b64encode(s3_doc_in_bytes).decode("utf-8")

# Build the content block based on document type
if content_type == "application/pdf":
    doc_config = {
        "type": "document",
        "source": {
            "type": "base64",
            "media_type": content_type,
            "data": doc_base64,
        },
    }
else:
    doc_config = {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": content_type,
            "data": doc_base64,
        },
    }

request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 4096,
    "temperature": 0,  # no creative answers
    "system": OCR_PROMPT,
    "messages": [
        {
            "role": "user",
            "content": [
                doc_config,
                {"type": "text", "text": TARGET_FIELD_PROMPT},
            ],
        },
    ],
}

response = bedrock_client.invoke_model(
    modelId=BEDROCK_MODEL_ID,
    contentType="application/json",
    accept="application/json",
    body=json.dumps(request_body),
)

response_body = json.loads(response["body"].read())
assistant_text = response_body["content"][0]["text"]

# remove any markdown code fences from the Claude answer, if present
cleaned = assistant_text.strip()
if cleaned.startswith("```"):
    cleaned = cleaned.split("\n", 1)[1]
if cleaned.endswith("```"):
    cleaned = cleaned.rsplit("```", 1)[0]
cleaned = cleaned.strip()

# other logic  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Prompts&lt;/strong&gt;  &lt;br&gt;Two categories of prompts guide the LLM’s behavior.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OCR Prompt&lt;/strong&gt;: defines the system role, extraction rules, and reasoning instructions, and explains how the LLM should interpret different document scenarios.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Target Field Prompt&lt;/strong&gt;: defines how to handle different file types (PDF, image, text) and provides a list of target fields with descriptions and multilingual examples of common patterns, such as &lt;code&gt;| total_deposit | Total deposit collected in HKD (numeric) — include ALL deposits: deposit (按金), electricity deposit (電費按金), renovation deposit, etc. Use the grand total from the receipt/table if available. |&lt;/code&gt;. It also specifies strict output format rules such as:
&lt;ul&gt;
&lt;li&gt;no markdown&lt;/li&gt;



&lt;li&gt;no explanation&lt;/li&gt;



&lt;li&gt;return &lt;strong&gt;only&lt;/strong&gt; a JSON object&lt;/li&gt;
&lt;/ul&gt;




&lt;/li&gt;


&lt;/ul&gt;

&lt;p&gt;These prompts ensure deterministic, repeatable extraction across diverse document types.&lt;/p&gt;
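As an illustration, a target-field prompt of this kind can be kept as a module constant passed to the model alongside the document. The wording and field list below are assumptions for demonstration, not the production prompt:

```python
# Illustrative sketch of a target-field prompt (assumed wording; the
# production prompt defines many more fields and stricter rules).
TARGET_FIELD_PROMPT = """You will receive one document (PDF, image, or plain text).
Extract the following target fields:

| total_deposit | Total deposit collected in HKD (numeric) - include ALL
  deposits: deposit (按金), electricity deposit (電費按金), renovation
  deposit, etc. Use the grand total from the receipt/table if available. |

Output rules:
- Return ONLY a JSON object mapping field names to values.
- No markdown. No explanation.
- Use null for any field that cannot be found."""
```

Pinning the output rules directly into the prompt is what allows the Lambda code to treat the model's reply as machine-parseable JSON rather than free text.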




&lt;p&gt;&lt;strong&gt;Deploying the Lambda Function&lt;/strong&gt;  &lt;br&gt;Deployment is straightforward but requires attention to packaging: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zip &lt;strong&gt;only the Python files&lt;/strong&gt;, not the parent folder.&lt;/li&gt;



&lt;li&gt;The zip file must contain the &lt;code&gt;.py&lt;/code&gt; files at the root level.  &lt;/li&gt;



&lt;li&gt;Use the &lt;strong&gt;Upload ZIP&lt;/strong&gt; button in the Lambda console.&lt;/li&gt;
&lt;/ul&gt;
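The packaging rule above, .py files at the archive root rather than nested under a parent folder, can also be scripted. A small sketch using Python's zipfile module (the filenames are illustrative):

```python
import zipfile
from pathlib import Path


def package_lambda(py_files, archive="function.zip"):
    """Zip the given .py files at the ROOT of the archive (no parent folder)."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in py_files:
            # arcname keeps only the file name, so it lands at the archive root
            zf.write(f, arcname=Path(f).name)
    return archive
```

If the files end up under a folder inside the zip (e.g. src/document_handler.py), Lambda cannot locate the handler and the function fails to initialize.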

&lt;p&gt;&lt;strong&gt;Configuring the Handler&lt;/strong&gt;  &lt;br&gt;In the Lambda configuration tab, set the handler to match your file and function name. For example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;document_handler.handler  &lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;document_handler.py&lt;/code&gt; is the file&lt;/li&gt;



&lt;li&gt;
&lt;code&gt;handler&lt;/code&gt; is the function inside that file&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Closing Thoughts  &lt;/h2&gt;

&lt;p&gt;This architecture demonstrates how to build a multilingual, multi‑modal document analysis pipeline using AWS Lambda and Claude Sonnet 4.6. The key is not just choosing the right model, but designing the prompts, IAM permissions, and Lambda workflow so the system behaves predictably under real production workloads.  &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth. &lt;br&gt;&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;  &lt;/p&gt;







&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/building-a-multilingual-multi-modal-document-analysis-pipeline-with-aws-lambda-and-claude-sonnet-4-6/" rel="noopener noreferrer"&gt;Building a Multilingual, Multi‑Modal Document Analysis Pipeline with AWS Lambda and Claude Sonnet 4.6&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
    </item>
    <item>
      <title>Inside TECHSPO Vancouver 2026 at Paradox Vancouver — Innovation, Community, and Real Conversations</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Tue, 21 Apr 2026 01:00:30 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/inside-techspo-vancouver-2026-at-paradox-vancouver-innovation-community-and-real-conversations-e9n</link>
      <guid>https://forem.com/jonathan78wong/inside-techspo-vancouver-2026-at-paradox-vancouver-innovation-community-and-real-conversations-e9n</guid>
      <description>&lt;p&gt;This year’s &lt;strong&gt;&lt;a href="https://techspovancouver.ca/" rel="noopener noreferrer"&gt;TECHSPO Vancouver 2026&lt;/a&gt;&lt;/strong&gt;, hosted at the Paradox Vancouver, brought together innovators, founders, engineers, and creators across the tech spectrum, with the venue’s modern, design‑driven environment providing the perfect backdrop for two days of meaningful conversations and emerging‑tech exploration.&lt;/p&gt;

&lt;p&gt;TECHSPO showcased a diverse range of companies across &lt;strong&gt;iCRM&lt;/strong&gt;, &lt;strong&gt;MarTech&lt;/strong&gt;, &lt;strong&gt;AI&lt;/strong&gt;, &lt;strong&gt;SaaS&lt;/strong&gt;, and even &lt;strong&gt;mental‑health technology&lt;/strong&gt;, creating an experience built for hands‑on demos, high‑value networking, and discovering the next wave of digital innovation.&lt;/p&gt;




&lt;p&gt;The hotel’s atmosphere, shaped by modern architecture, clean lines, and a calm yet energetic vibe, created a perfect setting for focused conversations and networking.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Meaningful Connections Throughout the Event&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reconnected with my friend Maksym&lt;/strong&gt;, a hardware professional I first met at the Vancouver Careerin Coffee Meetup. He’s one of the rare engineers who can move seamlessly from &lt;strong&gt;low‑level parts and protocol discussions&lt;/strong&gt; to &lt;strong&gt;market intelligence&lt;/strong&gt;. His insights always sharpen my thinking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Met Layth&lt;/strong&gt;, a cybersecurity professional with real‑world threat experience. Our conversation covered:

&lt;ul&gt;
&lt;li&gt;A real supply‑chain attack case&lt;/li&gt;
&lt;li&gt;How fake network infrastructures deceive internal teams&lt;/li&gt;
&lt;li&gt;The rising importance of &lt;strong&gt;AI model protection&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Connected with startup founders&lt;/strong&gt;, including &lt;strong&gt;Anil&lt;/strong&gt;, who is building an AI‑agent service. It’s energizing to see founders experimenting with automation, agentic workflows, and new business models.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Met new workforce talent&lt;/strong&gt;, like &lt;strong&gt;Betty&lt;/strong&gt;, an Intermediate Software Engineer open to work and passionate about &lt;strong&gt;accessibility‑first design&lt;/strong&gt;. Her enthusiasm for inclusive product development was refreshing and aligned with where modern product expectations are heading. &lt;/li&gt;

&lt;/ul&gt;




&lt;p&gt;Vancouver’s tech ecosystem is clearly accelerating, with momentum spanning AI agents, cybersecurity, and mental‑health technology. What stood out most throughout TECHSPO was the depth of talent across the community, from experienced founders to early‑career engineers, all building with purpose and pushing the region’s innovation forward.&lt;/p&gt;

&lt;p&gt;If you attended TECHSPO Vancouver as well, I’d love to hear what conversations or trends stood out to you.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Jonathan Wong&lt;/em&gt; is an IT and AI consultant with 20+ years of experience leading engineering teams across Vancouver and Hong Kong. He specializes in modernizing legacy platforms, cloud security, and building AI-ready systems for startups and large enterprises while advising leadership on using strategic technology to drive business growth.&lt;br&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.linkedin.com/in/jonanata/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/inside-techspo-vancouver-2026-at-paradox-vancouver-innovation-community-and-real-conversations/" rel="noopener noreferrer"&gt;Inside TECHSPO Vancouver 2026 at Paradox Vancouver — Innovation, Community, and Real Conversations&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>networking</category>
    </item>
    <item>
      <title>How I Delivered a Cross‑Account, Cross‑Region, Multilingual, Multi‑Modal AWS Bedrock Solution in a Zero Trust Environment</title>
      <dc:creator>Jonathan Wong</dc:creator>
      <pubDate>Mon, 20 Apr 2026 05:47:25 +0000</pubDate>
      <link>https://forem.com/jonathan78wong/how-i-delivered-a-cross-account-cross-region-multilingual-multi-modal-aws-bedrock-solution-in-a-26a8</link>
      <guid>https://forem.com/jonathan78wong/how-i-delivered-a-cross-account-cross-region-multilingual-multi-modal-aws-bedrock-solution-in-a-26a8</guid>
      <description>&lt;p&gt;After I modernized my client’s platform security, the next strategic move was clear: elevate their product with generative AI. They wanted an LLM‑powered component capable of analyzing customer‑submitted documents during application intake and performing price forecasting. The goal was to dramatically improve user experience by streamlining the workflow, offering proactive suggestions, and providing real‑time assistance.&lt;/p&gt;

&lt;p&gt;To deliver this, the AI analyst feature needed &lt;strong&gt;multilingual support&lt;/strong&gt;, &lt;strong&gt;multi‑modal document understanding&lt;/strong&gt; (images, PDFs, text files, and more), and &lt;strong&gt;advanced reasoning&lt;/strong&gt;, which pointed directly to models like Claude. But there was a major constraint: the client’s primary business entity is registered in an &lt;strong&gt;unsupported region&lt;/strong&gt; for these advanced AI models. The only viable path was to leverage their overseas entity to create a new AWS account in a supported region and integrate it with their existing environment.&lt;/p&gt;

&lt;p&gt;The constraints were non‑negotiable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multilingual inference &lt;/li&gt;
&lt;li&gt;multi‑modal document processing &lt;/li&gt;
&lt;li&gt;cross‑account AWS integration&lt;/li&gt;
&lt;li&gt;cross‑region invocation &lt;/li&gt;
&lt;li&gt;access to advanced LLMs such as Claude &lt;/li&gt;
&lt;li&gt;strong security controls for sensitive customer data &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It reads like an AWS Solutions Architect Professional exam scenario, doesn’t it? But it wasn’t hypothetical. This was a real‑world architecture challenge with real compliance, security, and business constraints.&lt;/p&gt;

&lt;p&gt;To achieve the goal, I designed a &lt;strong&gt;two‑sided trust‑granting model&lt;/strong&gt; between the two AWS environments. AWS Account A (AC‑a) operates in Region A, and AWS Account B (AC‑b) operates in Region B.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1gt5g4yo6w5yxzdmzs0.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1gt5g4yo6w5yxzdmzs0.webp" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Lambda in AC‑b Is Mandatory&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Claude enforces &lt;strong&gt;region restrictions based on the source IP&lt;/strong&gt;. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;all traffic to Claude &lt;strong&gt;must originate from Region B&lt;/strong&gt;, and&lt;/li&gt;
&lt;li&gt;any request coming directly from Region A will be &lt;strong&gt;blocked&lt;/strong&gt;, regardless of AWS account ownership.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of this, the Lambda function in AC‑b becomes a &lt;strong&gt;non‑negotiable region proxy&lt;/strong&gt;. It is the only component allowed to make outbound requests to Claude, ensuring that Anthropic sees a valid Region‑B AWS IP.&lt;/p&gt;

&lt;p&gt;In other words: &lt;strong&gt;AC‑a (data + app) → AC‑b (Lambda proxy + Claude)&lt;/strong&gt; is the only viable architecture that satisfies both the business requirements and the regional restrictions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resources and IAM Roles&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In AC‑a, an EC2 instance acts as the caller of the AI service hosted in AC‑b. The EC2 instance assumes the IAM role &lt;strong&gt;ec2‑production&lt;/strong&gt;, which must be explicitly granted permission to invoke the Lambda function in AC‑b.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"lambda:InvokeFunction"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:region-b:ac-b-id:function:lambda-name"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the same time, the Lambda function in AC‑b must enforce the reverse trust boundary: it should only accept invocations originating from AC‑a’s &lt;strong&gt;ec2‑production&lt;/strong&gt; role, using a resource‑based policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Statement ID : AllowCrossAccountPHPInvoke
Principal : arn:aws:iam::ac-a-id:role/ec2-production
Effect : Allow
Action : lambda:InvokeFunction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the Lambda IAM role level, it must be explicitly granted permission to access &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; and invoke &lt;strong&gt;Claude&lt;/strong&gt;. This role becomes the trusted execution identity for all outbound LLM requests from AC‑b, ensuring that only the Lambda function running in the supported region is authorized to call the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AllowBedrockInvoke"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bedrock:InvokeModel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:bedrock:*:ac-b-id:inference-profile/your-claude-model"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:bedrock:*::foundation-model/your-claude-model"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The region in the policy’s resource ARNs is set to “*” because Bedrock may route the invocation across multiple supported regions (for example, through cross‑region inference profiles). This ensures the Lambda role can invoke the model regardless of which regional endpoint ultimately serves the request.&lt;/p&gt;
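&lt;p&gt;A quick way to see why the wildcard region works: IAM’s “*” matches any characters in the ARN, so every regional Bedrock ARN satisfies the policy. Here is a minimal sanity-check sketch in Python (&lt;code&gt;fnmatchcase&lt;/code&gt; only approximates IAM’s matcher, and the ARNs are placeholders):&lt;/p&gt;

```python
from fnmatch import fnmatchcase

# Resource pattern from the role policy; "*" occupies the region slot.
POLICY_RESOURCE = "arn:aws:bedrock:*:ac-b-id:inference-profile/your-claude-model"

# Hypothetical regional ARNs Bedrock might resolve the call to.
request_arns = [
    "arn:aws:bedrock:us-east-1:ac-b-id:inference-profile/your-claude-model",
    "arn:aws:bedrock:us-west-2:ac-b-id:inference-profile/your-claude-model",
]

# Every regional variant matches the wildcard-region pattern.
results = [fnmatchcase(arn, POLICY_RESOURCE) for arn in request_arns]
```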

&lt;p&gt;This ensures &lt;strong&gt;mutual, least‑privilege trust&lt;/strong&gt; across accounts and regions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cross‑Account Data Access&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Because the system exchanges &lt;strong&gt;customer‑sensitive data across multiple AWS accounts and regions&lt;/strong&gt;, the communication path must meet strict security guarantees. The design enforces three core requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No traffic may traverse the public internet &lt;/li&gt;
&lt;li&gt;Data can only originate from controlled, authenticated AWS resources &lt;/li&gt;
&lt;li&gt;Data can only be delivered to controlled, authenticated AWS resources &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The PrivateLink endpoint is protected by a dedicated security group that permits inbound HTTPS traffic &lt;strong&gt;only&lt;/strong&gt; from the &lt;code&gt;ec2-production&lt;/code&gt; instance. This ensures that no other resource, inside or outside the VPC, can reach the endpoint at the network layer.&lt;/p&gt;

&lt;p&gt;In addition, the PrivateLink service is bound to a &lt;strong&gt;resource policy&lt;/strong&gt; that explicitly authorizes only the &lt;code&gt;ec2-production&lt;/code&gt; IAM role to invoke the controlled Lambda function. Even if another resource could reach the endpoint, the policy prevents it from calling the Lambda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RestrictToLambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::ac-a-id:user/ec2-production"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"lambda:InvokeFunction"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:aws-region-b:ac-b-id:function:lambda-name"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Together, the security group and the endpoint policy enforce strict, dual‑layer Zero Trust: &lt;strong&gt;network access is restricted to a single controlled source, and service‑level access is restricted to a single controlled identity.&lt;/strong&gt;&lt;/p&gt;
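&lt;p&gt;The dual‑layer gate can be summarized as a single conjunction: a request succeeds only if it clears both the network check and the identity check. A toy sketch (the function and flags are illustrative, not part of the project):&lt;/p&gt;

```python
def request_permitted(source_is_ec2_production: bool, caller_is_trusted_principal: bool) -> bool:
    # Layer 1 (network): the endpoint security group admits HTTPS traffic
    # only from the ec2-production instance.
    # Layer 2 (identity): the resource policy authorizes only the
    # ec2-production IAM principal to invoke the Lambda function.
    return source_is_ec2_production and caller_is_trusted_principal
```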

&lt;p&gt;The Lambda function is responsible for analyzing the customer’s document as part of the AI prediction workflow. Due to regulatory requirements, however, all customer data must remain stored in AC‑a’s &lt;strong&gt;private S3 bucket&lt;/strong&gt;, while AC‑b still needs controlled access to that data for analysis. The S3 bucket policy in AC‑a therefore selectively grants AC‑b permission to read only the required objects, while maintaining strict Zero Trust boundaries. This is the &lt;strong&gt;authoritative gatekeeper&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AllowACBReadSpecificObjects"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::ac-b-id:role/lambda-role"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::your-bucket/path/to/allowed/*"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures AC‑b &lt;strong&gt;cannot&lt;/strong&gt; read anything outside the approved prefix.&lt;/p&gt;

&lt;p&gt;In AC‑b, the Lambda IAM role attaches an identity policy that ensures the function can only attempt to access the specific S3 objects it is supposed to read. This is the &lt;strong&gt;identity‑side limiter&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AllowReadOnlySpecificObjects"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::your-bucket/path/to/allowed/*"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the same dual‑layer pattern AWS recommends for &lt;strong&gt;cross‑account S3 access with Zero Trust&lt;/strong&gt;.&lt;/p&gt;
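&lt;p&gt;The effect of the two policies can be sketched as a small simulation: a read succeeds only when the bucket policy in AC‑a and the role policy in AC‑b both allow the key. The functions and prefix below are illustrative placeholders mirroring the policies shown:&lt;/p&gt;

```python
ALLOWED_PREFIX = "path/to/allowed/"  # placeholder prefix from the policies above

def bucket_policy_allows(key: str) -> bool:
    """AC-a side: the authoritative gatekeeper scopes reads to the prefix."""
    return key.startswith(ALLOWED_PREFIX)

def identity_policy_allows(key: str) -> bool:
    """AC-b side: the identity-side limiter applies the same scope."""
    return key.startswith(ALLOWED_PREFIX)

def get_object_allowed(key: str) -> bool:
    # Cross-account S3 reads require an explicit Allow on BOTH sides;
    # an Allow in only one account is not sufficient.
    return bucket_policy_allows(key) and identity_policy_allows(key)
```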

&lt;p&gt;After the Claude model is activated and the required use‑case approvals are completed, the EC2 instance in &lt;strong&gt;AC‑a&lt;/strong&gt; can securely invoke the AWS Bedrock service.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Bringing It All Together&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This project was not only about adding generative AI to an existing platform, but also delivering advanced LLM capabilities under strict regional, security, and compliance constraints. By designing a cross‑account, cross‑region architecture with a mandatory Lambda proxy, enforcing mutual least‑privilege trust, and routing all sensitive data through PrivateLink, the solution enables multilingual, multi‑modal AI analysis without ever exposing customer information to the public internet.&lt;/p&gt;

&lt;p&gt;The result is a fully compliant, Zero‑Trust, enterprise‑grade AI integration that unlocks Claude’s capabilities for a business operating in an otherwise unsupported region—transforming their application workflow while preserving the highest security standards.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.jonanata.com/how-i-delivered-a-cross-account-cross-region-multilingual-multi-modal-aws-bedrock-solution-in-a-zero-trust-environment/" rel="noopener noreferrer"&gt;How I Delivered a Cross‑Account, Cross‑Region, Multilingual, Multi‑Modal AWS Bedrock Solution in a Zero Trust Environment&lt;/a&gt; appeared first on &lt;a href="https://blog.jonanata.com" rel="noopener noreferrer"&gt;Behind the Build&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>cybersecurity</category>
      <category>zerotrust</category>
    </item>
  </channel>
</rss>
