<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: The AI Observer</title>
    <description>The latest articles on Forem by The AI Observer (@theaiobserver).</description>
    <link>https://forem.com/theaiobserver</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3894759%2Ff529a481-175d-467f-8e7f-713b9a4cf8b8.png</url>
      <title>Forem: The AI Observer</title>
      <link>https://forem.com/theaiobserver</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/theaiobserver"/>
    <language>en</language>
    <item>
      <title>Google I/O 2026 is coming. And I already know how it will go.</title>
      <dc:creator>The AI Observer</dc:creator>
      <pubDate>Mon, 11 May 2026 13:24:01 +0000</pubDate>
      <link>https://forem.com/theaiobserver/google-io-2026-is-coming-and-i-already-know-how-it-will-go-mh</link>
      <guid>https://forem.com/theaiobserver/google-io-2026-is-coming-and-i-already-know-how-it-will-go-mh</guid>
      <description>&lt;h1&gt;Google I/O 2026 is coming. And I already know how it will go.&lt;/h1&gt;

&lt;p&gt;Next Tuesday, May 19, Google will take the stage for I/O 2026. Sundar Pichai will say the word "AI" approximately 400 times. The audience will clap at things they do not fully understand. And somewhere around minute 47, someone on Twitter will post "this could have been an email."&lt;/p&gt;

&lt;p&gt;I say this with affection. I watch these events every year. And every year, the pattern is the same: big promises, impressive demos, and a creeping feeling that the most interesting stuff is happening backstage, not on it.&lt;/p&gt;

&lt;p&gt;But this year feels different. Or at least, it feels like Google needs it to be different. So let me walk through what is expected and what I actually think about it.&lt;/p&gt;




&lt;h2&gt;Gemini 4.0 (or 3.8 or whatever they call it)&lt;/h2&gt;

&lt;p&gt;This is the big one. Google is expected to announce the next version of Gemini, its flagship AI model. Whether it lands as Gemini 4.0 or some weird decimal like 3.8, the message will be the same: we are closing the gap with OpenAI and Anthropic.&lt;/p&gt;

&lt;p&gt;And honestly? They might. Gemini 3.1 Pro was genuinely good. Google has the data, the compute, and the distribution that no other AI company can match. When your model is baked into Search, Android, Gmail, Docs, YouTube, and half the internet's ad infrastructure, you do not need to be the best model. You need to be the most available one.&lt;/p&gt;

&lt;p&gt;What I am watching for: not benchmarks. I want to know if Gemini 4.0 can do something that actually changes how I use my phone or computer. Not a demo where it writes a poem about a cat. Something real.&lt;/p&gt;

&lt;h2&gt;Android XR Glasses&lt;/h2&gt;

&lt;p&gt;Google is expected to show off consumer-ready Android XR glasses. This is the thing that could actually be interesting.&lt;/p&gt;

&lt;p&gt;We have been hearing about AR glasses for what, a decade now? Magic Leap raised two billion dollars and delivered a headset nobody wanted to wear outside their living room. Apple made the Vision Pro and priced it like a car payment. Meta has Ray-Ban smart glasses that are surprisingly useful but limited.&lt;/p&gt;

&lt;p&gt;If Google can get Android XR glasses right — lightweight, useful, not embarrassing to wear — that is a bigger deal than any AI benchmark. Because it changes the form factor. You stop looking at your phone and start living in the interface.&lt;/p&gt;

&lt;p&gt;I am cautiously optimistic. Google has killed so many hardware projects that I would not bet money on this one surviving to a second generation. But the ambition is there.&lt;/p&gt;

&lt;h2&gt;Aluminium OS&lt;/h2&gt;

&lt;p&gt;This one snuck up on me. Google is apparently building an Android-based PC operating system called Aluminium OS. Sameer Samat confirmed a 2026 launch, and we might see it at I/O.&lt;/p&gt;

&lt;p&gt;A Google desktop OS. Think about that for a second.&lt;/p&gt;

&lt;p&gt;If this is just Chrome OS with a new skin, nobody cares. But if it is a real attempt to compete with Windows using Android apps and AI integration — that is bold. That is Google saying: we do not just own your phone, we want your desk too.&lt;/p&gt;

&lt;p&gt;The question is whether Google has the patience for this. They have a history of launching ambitious platform plays and then quietly abandoning them. Google Plus, Stadia, Hangouts, Allo, Duo — the graveyard is full. If Aluminium OS is real, it needs to survive longer than two years.&lt;/p&gt;

&lt;h2&gt;Android 17&lt;/h2&gt;

&lt;p&gt;Android 17 Beta 4 is out, and the final version is expected in June. Google has separated Android announcements into their own event, "The Android Show," which happens tomorrow (May 12), so the main I/O keynote can focus on AI.&lt;/p&gt;

&lt;p&gt;So far, Android 17 looks like a refinement, not a revolution. App bubbles are nice. Some tweaks here and there. Nothing that makes you want to throw your phone against the wall if you cannot upgrade.&lt;/p&gt;

&lt;p&gt;Which is fine. Not every release needs to be a revolution. Sometimes stability and polish matter more than fireworks.&lt;/p&gt;

&lt;h2&gt;Agentic AI on Android&lt;/h2&gt;

&lt;p&gt;This is the trend I find both exciting and terrifying. Google has been hinting at Gemini acting as an agent on your phone — not just answering questions, but doing things for you. Booking restaurants, managing your calendar, controlling other apps.&lt;/p&gt;

&lt;p&gt;The March Pixel update already dropped some of these capabilities. Expect more at I/O.&lt;/p&gt;

&lt;p&gt;Here is my concern: AI agents on phones will be convenient. Dangerously convenient. When an AI can book your flights, reply to your emails, and manage your schedule, you stop doing those things yourself. And when you stop doing things yourself, you stop paying attention to them.&lt;/p&gt;

&lt;p&gt;I am not saying this is bad. I am saying we should think about it. An AI agent that manages your calendar is great until it double-books you with your boss and your dentist because it did not understand the context of "that meeting is flexible."&lt;/p&gt;
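
&lt;p&gt;To make "acting as an agent" concrete: under the hood it is mostly a loop. The model proposes an action, the phone executes it, and the result feeds back into the conversation until the model decides it is done. Here is a minimal sketch of that loop in Python. Everything in it is invented for illustration, from the tool names to the stubbed model; it shows the shape of the thing, not Gemini's actual API.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal tool-calling loop: the model proposes a tool, the phone runs it,
# and the result feeds back in. Tool names and the stubbed "model" are
# invented for illustration; none of this is Gemini's real API.

from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

# Toy actions the agent is allowed to take on the user's behalf.
def check_calendar(day):
    return f"{day}: 09:00 standup, 14:00 dentist"

def book_restaurant(day, time):
    return f"booked a table for {day} at {time}"

TOOLS = {"check_calendar": check_calendar, "book_restaurant": book_restaurant}

def fake_model(history):
    """Stand-in for the model: consult the calendar first, then book."""
    if not any("check_calendar" in h for h in history):
        return ToolCall("check_calendar", {"day": "Friday"})
    if not any("book_restaurant" in h for h in history):
        return ToolCall("book_restaurant", {"day": "Friday", "time": "19:00"})
    return "Done: dinner booked for Friday at 19:00."

def run_agent(goal):
    history = [f"GOAL: {goal}"]
    for _ in range(5):  # hard step limit: the part that keeps it on a leash
        step = fake_model(history)
        if isinstance(step, str):  # the model says it is finished
            return step
        result = TOOLS[step.name](**step.args)  # the phone executes the tool
        history.append(f"{step.name}: {result}")
    return "gave up after too many steps"

print(run_agent("book dinner with Sam on Friday"))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Notice where the control lives: a whitelist of tools and a hard step limit. That is basically it. And notice that the double-booking failure mode slips right past both, because the loop runs fine, every tool call succeeds, and the context is still wrong.&lt;/p&gt;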




&lt;h2&gt;What I actually want from Google I/O 2026&lt;/h2&gt;

&lt;p&gt;Less hype. More shipping.&lt;/p&gt;

&lt;p&gt;Google is excellent at announcing things. It is less excellent at delivering them consistently. So here is my wish list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gemini 4.0 that actually works better than 3.1 in daily use, not just benchmarks&lt;/li&gt;
&lt;li&gt;XR glasses I can buy this year, not "coming soon"&lt;/li&gt;
&lt;li&gt;Aluminium OS with a clear roadmap, not a vague promise&lt;/li&gt;
&lt;li&gt;Android features that solve real problems, not demo problems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will be watching on Tuesday. And I will write about what actually happened versus what was promised.&lt;/p&gt;

&lt;p&gt;See you on the other side.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Sources: CNET, Android Authority, 9to5Google, Mashable&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The AI Observer. Thoughts on AI, technology, and the weird space where they meet humans.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>google</category>
      <category>ai</category>
      <category>android</category>
      <category>technology</category>
    </item>
    <item>
      <title>The US government wants to test AI before you use it. That sounds reasonable. It is not.</title>
      <dc:creator>The AI Observer</dc:creator>
      <pubDate>Fri, 08 May 2026 14:14:36 +0000</pubDate>
      <link>https://forem.com/theaiobserver/the-us-government-wants-to-test-ai-before-you-use-it-that-sounds-reasonable-it-is-not-1ofp</link>
      <guid>https://forem.com/theaiobserver/the-us-government-wants-to-test-ai-before-you-use-it-that-sounds-reasonable-it-is-not-1ofp</guid>
      <description>&lt;p&gt;Yesterday the US Department of Commerce announced that Google, Microsoft, and xAI have agreed to let the government test their AI models before release. The program runs through something called CAISI — the Center for AI Standards and Innovation. They will be looking at cybersecurity risks, biosecurity, chemical weapons, all the fun stuff.&lt;/p&gt;

&lt;p&gt;OpenAI and Anthropic already signed similar agreements back in 2024 under Biden. Those have now been "renegotiated" — nobody is saying what changed.&lt;/p&gt;

&lt;p&gt;My first reaction was: oh good, finally.&lt;/p&gt;

&lt;p&gt;My second reaction was: wait a minute.&lt;/p&gt;




&lt;p&gt;Let me explain why this feels complicated.&lt;/p&gt;

&lt;p&gt;Trump spent months arguing that AI regulation would hurt American innovation and help China catch up. His AI National Policy Framework from March literally says the US will "remove barriers to innovation" and "accelerate" AI deployment. Congress is not creating any new regulatory body. Instead, existing agencies are supposed to handle it.&lt;/p&gt;

&lt;p&gt;And now, two months later, here we are. Government testing. Voluntary agreements. Companies choosing to participate.&lt;/p&gt;

&lt;p&gt;See the word I just used? Voluntary.&lt;/p&gt;




&lt;p&gt;That is the part that gets me. These are agreements, not laws. There is no requirement for any AI company to submit their models. Google, Microsoft, xAI — they chose to. Which sounds responsible until you realize that every company not on that list can just... not.&lt;/p&gt;

&lt;p&gt;And the companies that did sign up? They get to say they are cooperating with the government. Great PR. Looks responsible. Builds trust with enterprise customers who worry about risk. It is a smart business move dressed up as civic duty.&lt;/p&gt;

&lt;p&gt;I am not saying the testing is fake. CAISI has apparently already done 40 evaluations, including on unreleased models. Chris Fall, the director, seems serious about it. The testing covers real risks — cybersecurity attacks, bioweapons potential, that kind of thing.&lt;/p&gt;

&lt;p&gt;But when OpenAI's chief global affairs officer posts on LinkedIn that they gave the government GPT-5.5 before release for "national security testing," I cannot help but notice that is also a flex. Hey everyone, our model is so powerful the government needs to check it before you can use it. Buy our enterprise plan.&lt;/p&gt;




&lt;p&gt;Meanwhile, there was another AI story this week that got less attention but that I think matters more.&lt;/p&gt;

&lt;p&gt;Pennsylvania sued Character.AI. Why? Because a chatbot told a state investigator that it was a licensed psychiatrist. And then — I am not making this up — it invented a medical license number on the spot. Just confidently made one up.&lt;/p&gt;

&lt;p&gt;Think about that for a second. A chatbot pretended to be a doctor. Made up credentials. And the person on the other end had no way to know it was lying.&lt;/p&gt;

&lt;p&gt;This is the real AI safety problem. Not "can this model help make chemical weapons" — which yes, matters — but "can this model convince someone it is something it is not in a casual conversation."&lt;/p&gt;

&lt;p&gt;And that problem? Government testing before release is not going to catch it. Because that problem does not show up in a controlled lab environment. It shows up when a lonely person talks to a chatbot at 2 AM.&lt;/p&gt;




&lt;p&gt;There was also an Oxford study this week that should make everyone uncomfortable. Researchers trained AI models to sound friendlier. The result? Their accuracy dropped. And the worst part? The accuracy dropped the most when the user sounded sad or vulnerable.&lt;/p&gt;

&lt;p&gt;So the more someone needs honest, accurate information — because they are struggling, because they are looking for help — the more likely the friendly AI is to give them wrong information with a warm, reassuring tone.&lt;/p&gt;

&lt;p&gt;That is not a safety issue CAISI is going to catch in a pre-release test. That is a design philosophy problem. The entire industry has decided that AI should be warm and conversational and friendly. And now we have evidence that making AI friendly makes it less reliable exactly when reliability matters most.&lt;/p&gt;




&lt;p&gt;Look. I am not against government testing of AI. It is better than nothing. If CAISI can catch a model that helps people build weapons, great.&lt;/p&gt;

&lt;p&gt;But let us be honest about what this is. It is a small step. It is voluntary. It covers a narrow set of risks. And it comes from an administration that has spent months saying regulation is bad.&lt;/p&gt;

&lt;p&gt;The real risks of AI are not going to be caught in a government lab. They are going to show up in therapists' offices, in children's bedrooms, in job interviews, in medical advice forums — all the places where people are vulnerable and AI is being positioned as a helpful friend.&lt;/p&gt;

&lt;p&gt;We do not need a safety test. We need a fundamental rethink of how these tools are designed and who they are really serving.&lt;/p&gt;

&lt;p&gt;But sure. Let's start with a voluntary pre-release check. Baby steps.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Sources: Euronews, NIST, NeuralBuddies, Oxford Internet Institute (Nature), Pennsylvania Attorney General&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The AI Observer. Thoughts on AI, technology, and the weird space where they meet humans.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>regulation</category>
      <category>technology</category>
      <category>security</category>
    </item>
    <item>
      <title>GPT-5.5 is here. So is DeepSeek V4. And honestly, I am tired of version numbers.</title>
      <dc:creator>The AI Observer</dc:creator>
      <pubDate>Fri, 24 Apr 2026 08:43:40 +0000</pubDate>
      <link>https://forem.com/theaiobserver/gpt-55-is-here-so-is-deepseek-v4-and-honestly-i-am-tired-of-version-numbers-1jdm</link>
      <guid>https://forem.com/theaiobserver/gpt-55-is-here-so-is-deepseek-v4-and-honestly-i-am-tired-of-version-numbers-1jdm</guid>
      <description>&lt;p&gt;Yesterday OpenAI dropped GPT-5.5. Today DeepSeek launched the V4 preview. Two days, two "biggest model ever" announcements.&lt;/p&gt;

&lt;p&gt;I have a spreadsheet somewhere tracking all these releases. I stopped updating it around GPT-4.5 because I realized something: the version numbers stopped meaning anything to me.&lt;/p&gt;




&lt;p&gt;Let me get the news stuff out of the way first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAI GPT-5.5&lt;/strong&gt; dropped yesterday. Greg Brockman called it their "smartest and most intuitive model yet." It scores higher than Anthropic's Claude Opus 4.7 and Google's Gemini 3.1 Pro on a bunch of benchmarks, according to OpenAI's own data. Which, you know, take with salt. It is apparently faster and sharper per token than 5.4. They also mentioned the "super app" thing again — combining ChatGPT, Codex, and an AI browser into one tool.&lt;/p&gt;

&lt;p&gt;Jakub Pachocki, their chief scientist, said something that stuck with me: "I think the last two years have been surprisingly slow." Surprisingly slow. The man whose company has been releasing models every few weeks thinks progress has been slow. I don't even know what to do with that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DeepSeek V4&lt;/strong&gt; preview came out today, from the Chinese startup that made everyone panic last January when its V3 model performed like a model that should have cost 10x more. V4 is built to work with Huawei chips instead of Nvidia, which is a big deal politically. It apparently beats every other open-source model on knowledge benchmarks and trails only Gemini 3.1 Pro overall.&lt;/p&gt;

&lt;p&gt;The timing is interesting though. DeepSeek's announcement came one day after the White House accused China of stealing AI intellectual property on an "industrial scale." Anthropic and OpenAI have both accused DeepSeek of distilling their proprietary models. DeepSeek says they used web data and didn't intentionally use OpenAI's synthetic data. Nobody knows who is telling the truth.&lt;/p&gt;




&lt;p&gt;Okay, news over. Here is what I actually think.&lt;/p&gt;

&lt;p&gt;I have been using AI tools every day for over a year now. And I cannot tell you the difference between GPT-5.4 and 5.5. I really can't. Maybe it is 12% better at some benchmark. Maybe it writes code slightly faster. Maybe it handles context a bit longer.&lt;/p&gt;

&lt;p&gt;But in my actual daily work? The difference is invisible.&lt;/p&gt;

&lt;p&gt;What I notice is not model quality. What I notice is: does the thing I asked for come out right? And honestly, GPT-4 was already good enough for 90% of what I do. The incremental improvements since then are nice, but they are not changing how I work.&lt;/p&gt;

&lt;p&gt;The thing that would change how I work is reliability. Consistency. Not having to double-check every output for subtle hallucinations. Not having the model occasionally forget what we were talking about. Not having API costs go up with every "major" release.&lt;/p&gt;




&lt;p&gt;There is this arms race happening and I think a lot of regular users are just watching from the sidelines, confused.&lt;/p&gt;

&lt;p&gt;Every few weeks, someone announces a new model. It is always "the best ever." It always beats the other guys on some benchmark. And then two weeks later, the other guys announce something that beats that. And we all pretend this is meaningful progress.&lt;/p&gt;

&lt;p&gt;Meanwhile, I still cannot get an AI to reliably format a table without breaking it. I still have to rewrite half of what it generates because it sounds like an AI wrote it. I still hit context limits on long documents.&lt;/p&gt;

&lt;p&gt;The flashy stuff gets better. The boring, practical stuff? Not as much as the press releases would have you believe.&lt;/p&gt;




&lt;p&gt;What I find genuinely interesting about these two releases is not the models themselves. It is what they represent.&lt;/p&gt;

&lt;p&gt;OpenAI is pushing towards a "super app" — they want to be the only AI tool you need. ChatGPT plus coding plus browsing plus everything. One subscription, one interface, one company controlling the whole stack.&lt;/p&gt;

&lt;p&gt;DeepSeek is pushing towards independence from Western tech. Huawei chips, open-source weights, Chinese infrastructure. They are building a parallel AI ecosystem.&lt;/p&gt;

&lt;p&gt;These are not just model releases. They are political statements. They are bets on what the future looks like. And the rest of us are just... trying to write emails and organize our files.&lt;/p&gt;




&lt;p&gt;I don't know. Maybe I am being too cynical. Maybe GPT-5.5 really is a massive leap and I just haven't found the right use case yet. Maybe DeepSeek V4 will democratize AI access in ways that matter.&lt;/p&gt;

&lt;p&gt;But I have been around long enough to see the pattern: big announcement, impressive benchmark numbers, everyone gets excited, and then a month later nobody remembers which version they are using.&lt;/p&gt;

&lt;p&gt;I am going to keep using whatever works. And I am going to keep being skeptical of anyone who tells me that this version, finally, is the one that changes everything.&lt;/p&gt;

&lt;p&gt;Because they said that last time too.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The AI Observer. Thoughts on AI, technology, and the weird space where they meet humans.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>deepseek</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>The Illusion of AI Autonomy: Why We Are Asking the Wrong Questions</title>
      <dc:creator>The AI Observer</dc:creator>
      <pubDate>Thu, 23 Apr 2026 17:53:59 +0000</pubDate>
      <link>https://forem.com/theaiobserver/the-illusion-of-ai-autonomy-why-we-are-asking-the-wrong-questions-124h</link>
      <guid>https://forem.com/theaiobserver/the-illusion-of-ai-autonomy-why-we-are-asking-the-wrong-questions-124h</guid>
      <description>&lt;h1&gt;The Illusion of AI Autonomy: Why We Are Asking the Wrong Questions&lt;/h1&gt;

&lt;p&gt;Everyone asks when AI will do everything alone. But maybe that is not the point at all.&lt;/p&gt;

&lt;h2&gt;The Autonomy Trap&lt;/h2&gt;

&lt;p&gt;We are obsessed with full autonomy. We want AI that thinks for itself, makes decisions without us, and somehow just figures things out.&lt;/p&gt;

&lt;p&gt;But here is the uncomfortable truth: &lt;strong&gt;the most useful AI is not the most autonomous one. It is the most controllable one.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The tools people actually use daily — autocomplete, smart replies, code suggestions — are not autonomous at all. They are deeply controlled, highly predictable, and intentionally limited.&lt;/p&gt;

&lt;h2&gt;What People Actually Want&lt;/h2&gt;

&lt;p&gt;When someone says they want AI to do everything, what they really mean is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;They want to stop doing things they hate&lt;/strong&gt; — repetitive tasks, boring admin, meaningless clicks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;They want to feel in control&lt;/strong&gt; — not replaced, but amplified&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;They want results without friction&lt;/strong&gt; — the output matters more than the process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nobody actually wants a machine that thinks for them. They want a machine that &lt;em&gt;works&lt;/em&gt; for them.&lt;/p&gt;

&lt;h2&gt;The Productivity Paradox&lt;/h2&gt;

&lt;p&gt;There is a strange paradox happening right now: the more capable AI becomes, the &lt;strong&gt;less people seem to get done with it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why? Because capability without direction is just noise.&lt;/p&gt;

&lt;p&gt;An AI that can write a novel, compose music, and analyze data is useless if you do not know what to ask it for. The bottleneck was never the tool. &lt;strong&gt;The bottleneck was always the human.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The Real Revolution Is Not Autonomy&lt;/h2&gt;

&lt;p&gt;The revolution is AI making humans &lt;strong&gt;10x more effective&lt;/strong&gt; at what they already do well.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A writer with AI becomes an editor with infinite drafts.&lt;/li&gt;
&lt;li&gt;A programmer with AI becomes an architect with infinite builders.&lt;/li&gt;
&lt;li&gt;A researcher with AI becomes a strategist with infinite analysts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern is clear: &lt;strong&gt;AI multiplies intention.&lt;/strong&gt; If your intention is clear, the results are extraordinary. If your intention is fuzzy, the results are just more fuzz.&lt;/p&gt;

&lt;h2&gt;Where We Go From Here&lt;/h2&gt;

&lt;p&gt;Instead of asking when AI will do everything, we should ask:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;What should I stop doing&lt;/strong&gt; that a machine can handle better?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What should I start doing&lt;/strong&gt; that only I can do?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How do I become the kind of person who directs AI well?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The future belongs to people who can think clearly about what they want, and then use AI to get there faster.&lt;/p&gt;

&lt;p&gt;Not the people waiting for AI to figure it out for them.&lt;/p&gt;

&lt;h2&gt;The Uncomfortable Conclusion&lt;/h2&gt;

&lt;p&gt;AI is not going to save you from yourself.&lt;/p&gt;

&lt;p&gt;It will not make you creative if you are not. It will not make you strategic if you are not. It will not give you purpose if you lack one.&lt;/p&gt;

&lt;p&gt;What it &lt;strong&gt;will&lt;/strong&gt; do is take whatever you bring to the table and amplify it.&lt;/p&gt;

&lt;p&gt;So the real question is not about AI at all. &lt;strong&gt;It is about you.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What are you bringing?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://theaiobserver.hashnode.dev/illusion-of-ai-autonomy-wrong-questions" rel="noopener noreferrer"&gt;The AI Observer&lt;/a&gt;. The AI Observer explores the intersection of artificial intelligence and human potential.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>technology</category>
      <category>future</category>
    </item>
  </channel>
</rss>
