<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: M Rizwan Akbar</title>
    <description>The latest articles on Forem by M Rizwan Akbar (@m_rizwanakbar_8c582ac370).</description>
    <link>https://forem.com/m_rizwanakbar_8c582ac370</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3823210%2F0b439063-cd9b-470d-9f65-e3130bc6e992.jpg</url>
      <title>Forem: M Rizwan Akbar</title>
      <link>https://forem.com/m_rizwanakbar_8c582ac370</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/m_rizwanakbar_8c582ac370"/>
    <language>en</language>
    <item>
      <title>Research</title>
      <dc:creator>M Rizwan Akbar</dc:creator>
      <pubDate>Sat, 14 Mar 2026 18:54:56 +0000</pubDate>
      <link>https://forem.com/m_rizwanakbar_8c582ac370/reasearch-59o9</link>
      <guid>https://forem.com/m_rizwanakbar_8c582ac370/reasearch-59o9</guid>
      <description>&lt;p&gt;Assalam o alaikum everyone. My name is Muaz and I am studying BS Computer Science at FAST National University Faisalabad. This blog is part of my AI course assignment, given by Dr. Bilal Jan. We had to read research papers and write about them. Honestly, when sir first told us this I was not very happy, because I thought research papers were boring and difficult. But when I actually opened and read them, I was quite surprised. So I am sharing what I learnt from this experience.&lt;br&gt;
Why I Read Research Papers&lt;br&gt;
Okay, first thing: when sir said to read research papers, my reaction was "man, this is going to be a really difficult task." I always thought research papers were only for PhD students, not for people like us who are just in bachelors. But then I opened the papers, started reading, and noticed something very interesting: I kept seeing things from our AI class inside these brand-new papers. For example, the A* search algorithm, which we study in our course, was mentioned in a paper about ChatGPT-style systems. That really surprised me, and I thought, okay, maybe these papers are actually worth reading.&lt;br&gt;
So my suggestion to every CS student: try to read at least one or two papers every semester. It really helps you understand where AI is going in the real world, not just what is written in a textbook from many years ago.&lt;br&gt;
Paper 1 — The Rise of Agentic AI (2025)&lt;br&gt;
Title of paper: "The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications and Challenges"&lt;br&gt;
Where published: the Future Internet journal by MDPI, in September 2025. The authors reviewed 143 different research studies to write this one paper. That is a lot of work.&lt;br&gt;
What This Paper is Actually About&lt;br&gt;
This paper basically tries to answer one simple question: what is agentic AI, and why is everyone suddenly talking about it? The term "agentic AI" barely existed before 2024, and then in 2025 everyone started using it everywhere. The paper says that more than 90 percent of all papers on this topic were published in just 2024 and 2025, which shows how fast the whole field is moving.&lt;br&gt;
So what is agentic AI in simple words? A normal AI chatbot just answers one question at a time: you ask something, it replies, done. Agentic AI is quite different. It can set its own goals, plan many steps ahead, use different tools, remember what it did before, and keep working until the whole task is completely done.&lt;br&gt;
Think about the difference between asking someone "what is the weather today" versus saying "book me the cheapest ticket to Karachi next Friday, and also email my boss that I will not come to the office." The second one needs planning, multiple steps, and different tools. That is exactly what agentic AI can do by itself, without human help.&lt;br&gt;
Four Things Every Agentic System Must Have&lt;br&gt;
The paper identifies four core capabilities that every agentic AI system needs in order to work properly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Planning — breaking a big goal into small steps and deciding what to do next. In our Q1 rescue robot assignment, this is like the robot deciding which survivor to go to first, based on battery level and distance from its current position.&lt;/li&gt;
&lt;li&gt;Memory — remembering what you did before so you do not repeat the same mistake. Like the robot remembering which zones it has already searched, so it does not go back to the same place and waste battery.&lt;/li&gt;
&lt;li&gt;Reflection — checking your own performance and adjusting if something goes wrong. Like the robot realizing that its original path is now flooded and making a completely new plan in the middle of the mission.&lt;/li&gt;
&lt;li&gt;Goal Pursuit — working toward the objective without a human dictating every single step. Like the robot navigating the whole flood zone by itself to find all the survivors.
Frameworks the Paper Talks About
The paper reviews many real frameworks that developers actually use to build agentic systems today. Some names you might already have heard: LangChain, AutoGen by Microsoft, MetaGPT and CrewAI. All of them implement the four capabilities above, each in its own way.
The paper also discusses the ReAct framework. ReAct stands for Reason plus Act. A ReAct agent first thinks about what to do, then actually does it, then thinks again based on what it observed, then acts again. This loop keeps going until the task is finished. When I read about ReAct, I immediately thought: this is the same perception-action loop we learned in our AI class! The exact same concept, just a much more powerful implementation using modern AI models.
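The ReAct loop can be sketched in a few lines of Python. This is purely an illustration: llm_reason and run_tool are hypothetical stand-ins for a real LLM call and a real tool call, not any actual API.

```python
# Minimal sketch of a ReAct-style loop (illustrative only).
# llm_reason() and run_tool() are hypothetical placeholders, not a real API.

def llm_reason(goal, history):
    """Pretend LLM call: pick the next action given the goal and past observations."""
    if any("found" in obs for _, obs in history):
        return ("finish", None)
    return ("search", goal)

def run_tool(action, arg):
    """Pretend tool call: execute the chosen action in the environment."""
    return f"found results for {arg}"

def react_agent(goal, max_steps=5):
    history = []                                  # memory: (action, observation) pairs
    for _ in range(max_steps):
        action, arg = llm_reason(goal, history)   # Reason
        if action == "finish":
            return history
        observation = run_tool(action, arg)       # Act
        history.append((action, observation))     # Observe, then loop again
    return history
```

The whole "agent" is just this reason-act-observe cycle; real frameworks add tool registries, memory stores, and error handling around the same skeleton.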
Most Interesting Finding — Compounding Errors
This was honestly the most interesting thing I found in the whole paper, and I thought about it a lot afterwards. The paper talks about compounding errors, also called error propagation. In agentic systems, a small mistake in an early step does not stay small: it grows bigger with every later step.
For example, if the agent makes a wrong assumption in step 2, then by step 8 that wrong assumption has affected every decision in between. The final output can be completely wrong even though each individual step looks okay by itself. The paper says this is one of the biggest unsolved problems in agentic AI right now, and honestly it is quite scary when you think about what it means for real applications.
I find this very relatable, because in our Q1 assignment, if the rescue robot chooses a wrong path at the beginning, it wastes battery on every step that comes after. Then it might not have enough battery left to reach the most important survivors. Same concept, different scale.
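The compounding effect is easy to see with a little arithmetic. Assuming (purely for illustration) that each step is independently correct with probability p, the whole chain of n steps is correct with probability p to the power n:

```python
# Sketch: why small per-step errors compound in multi-step agents.
# If each step is independently correct with probability p_step, a task of
# n_steps sequential steps succeeds end-to-end with probability p_step ** n_steps.

def chain_success(p_step, n_steps):
    return p_step ** n_steps

for n in (1, 5, 10, 20):
    print(n, round(chain_success(0.95, n), 3))
# A 95%-reliable step looks great alone, but 20 chained steps
# succeed only about 36% of the time (0.95 ** 20 is roughly 0.358).
```

This independence assumption is a simplification; in real agents errors can be even worse than this, because one bad step can actively mislead the next.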
Paper 2 — A Survey of LLM-based Deep Search Agents (2025)
Title of paper: "A Survey of LLM-based Deep Search Agents: Paradigm, Optimization, Evaluation and Challenges"
Where published: arXiv, in August 2025. According to the authors, this is the first proper survey on this specific topic.
What is a Deep Search Agent?
We all use Google every day, right? But Google search is actually quite a simple thing if you think about it. You type some keywords, it finds documents containing those keywords, ranks them, and shows you links. That is basically it. It does not really understand what you want to find, and it does not reason about anything; it just matches keywords and shows results.
Deep search agents are a completely different thing. These are AI systems that actually understand what you want to find. They plan a proper search strategy before they start, search multiple times in multiple places, read what they find and reason about it carefully, and then combine everything into one complete answer.
The best real example the paper gives is OpenAI's Deep Research feature. When you ask it a complex question, it spends several minutes searching many sources, reading them properly, connecting information from different places, and then writing a full structured report for you. That is a search agent working in real life right now.
Three Generations of Search — Here the Course Connection Comes In!
This is the part I found most exciting in the whole paper, because the connection to our AI course is so clear here. I was genuinely surprised when I first saw it.
Generation 1 — Classic search like Google: match keywords, rank documents, show links. This is like the uninformed search we study in class, like BFS with no knowledge at all: explore everything blindly, with no intelligent guidance.
Generation 2 — RAG systems: retrieve some documents, then give them to an AI to generate an answer. A bit better than classic search, but still no real planning about what to search for next.
Generation 3 — Agentic search like Deep Research: plan, search, reason, plan again, search again, then combine everything into a proper answer. This is exactly like the A* search we study in our AI class! It uses intelligence as a heuristic to guide where to search next, just like f(n) = g(n) + h(n). The language model itself IS the heuristic function in this case.
When I realized this connection, I actually got quite excited. We study the A* algorithm in class, and honestly it felt like just another textbook topic to memorize. But then I saw the same core idea, using a heuristic to intelligently guide search instead of blindly exploring everything, appearing in a paper about the most advanced AI search systems of 2025. That was a genuinely cool moment for me.
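For anyone who wants to see the f(n) = g(n) + h(n) idea concretely, here is a minimal A* sketch on a toy one-dimensional grid. The a_star function and the toy problem are my own illustration, not code from either paper:

```python
# Minimal A* sketch: f(n) = g(n) + h(n), where g is the cost paid so far
# and h is a heuristic estimate of the remaining cost. In the paper's
# analogy, the LLM plays the role of h: it scores how promising each
# search direction looks.
import heapq

def a_star(start, goal, neighbors, h):
    """neighbors(n) yields (next_node, step_cost); h(n) estimates remaining cost."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy problem: walk along a number line from 0 to 3 with unit step cost,
# using the remaining distance as an admissible heuristic.
path = a_star(0, 3, lambda n: [(n - 1, 1), (n + 1, 1)], lambda n: abs(3 - n))
print(path)  # [0, 1, 2, 3]
```

With h(n) = 0 this degenerates into uninformed search (Generation 1 in the paper's framing); a good heuristic is what makes the search "informed".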
Most Surprising Finding — The Lost in the Middle Problem
I never expected to find something this surprising in a research paper, but here it is, and it really changed how I think about AI. There is a well-known problem called the "lost in the middle" problem: when you give a language model a very long context to read, it pays much more attention to information at the beginning and end. Information placed in the middle gets much less attention from the model.
So if you retrieve 20 documents and feed them all to the AI at once, documents 8 to 14 get much less attention than documents 1 to 3 and 17 to 20. This means how you arrange information matters as much as what information you retrieve in the first place. I never thought that something as simple as the position of text inside the context could affect AI performance so much. This was a genuinely surprising discovery for me.
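One simple mitigation idea, variations of which exist in some RAG toolkits, is to rearrange the retrieved documents so the top-ranked ones sit at the edges of the context, where attention is strongest. A hypothetical sketch:

```python
# Sketch of a common mitigation for "lost in the middle": reorder retrieved
# documents so the highest-ranked ones sit at the start and end of the
# context, and the weakest land in the middle (where attention is lowest).
# This is an illustration of the idea, not any specific library's API.

def edge_first_order(docs_ranked_best_first):
    """Alternately place documents at the front and back of the context."""
    front, back = [], []
    for i, doc in enumerate(docs_ranked_best_first):
        (front if i % 2 == 0 else back).append(doc)
    return front + back[::-1]   # best documents end up at both edges

print(edge_first_order(["d1", "d2", "d3", "d4", "d5"]))
# ['d1', 'd3', 'd5', 'd4', 'd2']  so d1 and d2 (the top-ranked two) sit at the edges
```

The point is that retrieval quality alone is not enough; the ordering of the context is itself a design decision.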
How Both Papers Connect to Our Course
This is my favourite section to write, because the connections I found genuinely surprised me. These are not just surface-level connections; they are deep structural similarities between classical AI and modern research.
Agent Types Connection: Paper 1 is literally a review of how agent architectures have evolved over time. Every framework it reviews is a different implementation of the agent types we study in class: simple reflex, model-based, goal-based, utility-based. The same concepts, just made more powerful with modern technology.
A* Search Connection: the ReAct framework uses reasoning as a heuristic to decide the next action, the same f(n) = g(n) + h(n) structure as A* search. In Paper 2, the LLM itself acts as h(n), an intelligent estimator of how useful each search direction will be. The whole process becomes informed search instead of blind search.
CSP Connection: the MetaGPT framework decomposes a complex task into sub-tasks for specialised agents, exactly like the CSP decomposition we study in the course. In Paper 2, query decomposition breaks complex questions into sub-questions. A direct application of the same concept from our textbook.
Multi-Agent Environment: Paper 1 has a whole section on multi-agent systems, where agents communicate and coordinate with each other. This maps directly to the multi-agent dimension we classified in our Q1 rescue robot assignment.
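The query decomposition idea can be sketched very simply. Everything here is a hypothetical placeholder (decompose hard-codes its sub-questions and answer_subquestion fakes a search call); a real agent would use an LLM for both steps:

```python
# Sketch of query decomposition as used by deep search agents: break one
# complex question into sub-questions, answer each one, then combine.
# Both helper functions below are fake placeholders, not a real API.

def decompose(question):
    """A real system would ask an LLM to split the question; we hard-code the idea."""
    return [
        "What is agentic AI?",
        "Which frameworks implement it?",
        "What are the open challenges?",
    ]

def answer_subquestion(q):
    """Stand-in for a retrieval-plus-reasoning step over one sub-question."""
    return f"[answer to: {q}]"

def deep_search(question):
    subs = decompose(question)
    partials = [answer_subquestion(q) for q in subs]   # one search per sub-question
    return " ".join(partials)                          # combine into one final answer
```

Structurally this is the same divide-and-conquer move as CSP decomposition: a hard global problem becomes several easier local ones.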
My Experience with Google NotebookLM
Part of this assignment was to use Google NotebookLM. Honestly, I was a little skeptical at the start. I thought it would just summarize the papers and I would not really learn anything new from it. I was completely wrong about this.
Manual reading: when I first read the papers without any help, it was quite difficult. The technical terminology was hard to understand, especially for someone like me who does not read research papers regularly. The parts of Paper 1 comparing the different frameworks were especially confusing; I got confused about how LangChain, AutoGen and MetaGPT all relate to each other. I had to re-read the same sections many times.
After using NotebookLM: the experience was quite different. I used the question-and-answer feature to ask about specific things I did not understand. For example, I asked "what is the difference between ReAct and Chain of Thought" and it pulled the exact relevant sections from the paper to explain them simply. The audio overview feature was especially good; it creates a podcast-style summary of the paper that is easy to listen to while doing other things.
Most importantly, through NotebookLM I discovered the lost-in-the-middle problem in Paper 2, which I had completely missed during my manual reading. So NotebookLM actually helped me find something important that I had missed by myself. That was a good lesson for me.
My honest recommendation to everyone: read the paper yourself first to form your own understanding, then use NotebookLM to fill the gaps and verify your thinking. Using it without reading first is not as beneficial, because you do not have the base knowledge to ask good questions.
My Video
I also made a short 2 to 3 minute video where I explain the core ideas of both papers and share what I found most interesting about them. The link is below!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What I Learnt Overall&lt;br&gt;
Before doing this assignment, I genuinely thought research papers were not for undergraduate students like us. That was completely wrong thinking. These papers are actually very readable if you give them proper time and use the right tools, like NotebookLM, to help with the difficult parts.&lt;br&gt;
The most important thing I learnt from all this is that the classical AI we study in university — A* search, agent types, CSP — is not outdated at all. It is literally the foundation of the most advanced AI systems being built right now in 2025. Modern AI is a more powerful version of the same concepts we already learn in class. That is honestly quite a motivating thing to realize as a student.&lt;br&gt;
Thank you so much for reading this blog post. If you are also a CS student and found this helpful, please leave a comment below. Watch my YouTube video above for a quick explanation of the same content!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computerscience</category>
      <category>devjournal</category>
      <category>learning</category>
    </item>
    <item>
      <title>Research</title>
      <dc:creator>M Rizwan Akbar</dc:creator>
      <pubDate>Sat, 14 Mar 2026 18:46:07 +0000</pubDate>
      <link>https://forem.com/m_rizwanakbar_8c582ac370/reasreach-2dfh</link>
      <guid>https://forem.com/m_rizwanakbar_8c582ac370/reasreach-2dfh</guid>
      <description>&lt;p&gt;Assalam o alaikum everyone. My name is Muaz and I am studying BS Computer Science at FAST National University Faisalabad. This blog is part of my AI course assignment, given by Dr. Bilal Jan. We had to read research papers and write about them. Honestly, when sir first told us this I was not very happy, because I thought research papers were boring and difficult. But when I actually opened and read them, I was quite surprised. So I am sharing what I learnt from this experience.&lt;br&gt;
Why I Read Research Papers&lt;br&gt;
Okay, first thing: when sir said to read research papers, my reaction was "man, this is going to be a really difficult task." I always thought research papers were only for PhD students, not for people like us who are just in bachelors. But then I opened the papers, started reading, and noticed something very interesting: I kept seeing things from our AI class inside these brand-new papers. For example, the A* search algorithm, which we study in our course, was mentioned in a paper about ChatGPT-style systems. That really surprised me, and I thought, okay, maybe these papers are actually worth reading.&lt;br&gt;
So my suggestion to every CS student: try to read at least one or two papers every semester. It really helps you understand where AI is going in the real world, not just what is written in a textbook from many years ago.&lt;br&gt;
Paper 1 — The Rise of Agentic AI (2025)&lt;br&gt;
Title of paper: "The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications and Challenges"&lt;br&gt;
Where published: the Future Internet journal by MDPI, in September 2025. The authors reviewed 143 different research studies to write this one paper. That is a lot of work.&lt;br&gt;
What This Paper is Actually About&lt;br&gt;
This paper basically tries to answer one simple question: what is agentic AI, and why is everyone suddenly talking about it? The term "agentic AI" barely existed before 2024, and then in 2025 everyone started using it everywhere. The paper says that more than 90 percent of all papers on this topic were published in just 2024 and 2025, which shows how fast the whole field is moving.&lt;br&gt;
So what is agentic AI in simple words? A normal AI chatbot just answers one question at a time: you ask something, it replies, done. Agentic AI is quite different. It can set its own goals, plan many steps ahead, use different tools, remember what it did before, and keep working until the whole task is completely done.&lt;br&gt;
Think about the difference between asking someone "what is the weather today" versus saying "book me the cheapest ticket to Karachi next Friday, and also email my boss that I will not come to the office." The second one needs planning, multiple steps, and different tools. That is exactly what agentic AI can do by itself, without human help.&lt;br&gt;
Four Things Every Agentic System Must Have&lt;br&gt;
The paper identifies four core capabilities that every agentic AI system needs in order to work properly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Planning — break the big goal into small steps and decide what to do next. In our Q1 rescue robot assignment, this is like the robot deciding which survivor to go to first based on battery level and distance from its current position.&lt;/li&gt;
&lt;li&gt;Memory — remember what you did before so you do not repeat the same mistake. Like the robot remembering which zones it already searched, so it does not go back to the same place and waste battery.&lt;/li&gt;
&lt;li&gt;Reflection — check your own performance and adjust if something is going wrong. Like the robot realizing that its original path is now flooded and it needs a completely new plan in the middle of the mission.&lt;/li&gt;
&lt;li&gt;Goal Pursuit — keep working toward the objective without a human telling it every single step. Like the robot navigating the whole flood zone completely by itself to find all survivors.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Frameworks the Paper Talks About&lt;br&gt;
The paper reviews many real frameworks that developers actually use to build agentic systems today. Some names you might have already heard: LangChain, AutoGen by Microsoft, MetaGPT and CrewAI. All of them implement the four capabilities above, just in their own different ways.&lt;br&gt;
The paper also discusses the ReAct framework. ReAct stands for Reason plus Act. It is basically an agent that first thinks about what to do, then actually does it, then thinks again based on what it observed, then acts again. This loop keeps going until the task is finished. When I read about ReAct I immediately thought: man, this is the same as the perception-action loop we learn in our AI class! The exact same concept, just a much more powerful implementation using modern AI models.&lt;br&gt;
Most Interesting Finding — Compounding Errors&lt;br&gt;
This was honestly the most interesting thing I found in the whole paper, and I thought about it a lot after reading. The paper talks about something called compounding errors, or error propagation. What this means is that in agentic systems a small mistake in an early step does not stay small: it keeps growing bigger in every later step.&lt;br&gt;
For example, if the agent makes a wrong assumption in step 2, then by step 8 that wrong assumption has affected every single decision in between. The final output can be completely wrong even though each individual step looks okay by itself. The paper says this is one of the biggest unsolved problems in agentic AI right now, and honestly it is quite scary when you think about what it means for real applications.&lt;br&gt;
I found this very relatable, because in our Q1 assignment, if the rescue robot chooses the wrong path at the beginning it wastes battery on every single step that comes after. Then it might not have enough battery left to reach the most important survivors who need help. Same concept, just a different scale of application.&lt;/p&gt;

&lt;p&gt;Paper 2 — A Survey of LLM-based Deep Search Agents (2025)&lt;br&gt;
Title of paper: "A Survey of LLM-based Deep Search Agents: Paradigm, Optimization, Evaluation and Challenges"&lt;br&gt;
Where published: arXiv, August 2025. According to the authors, this is actually the first proper survey on this specific topic.&lt;br&gt;
What is a Deep Search Agent&lt;br&gt;
We all use Google every day, right? But Google search is actually quite a simple thing if you think about it. You type some keywords, it finds documents with those keywords, ranks them and shows you links. That is basically it. It does not really understand what you want to find, and it does not reason about anything at all; it just matches keywords and shows results.&lt;br&gt;
Deep search agents are a completely different thing. These are AI systems that actually understand what you want to find. They plan a proper search strategy before they start. They search multiple times in multiple different places. They read what they find and reason about it carefully. And then they combine everything into one proper, complete answer for you.&lt;br&gt;
The best real example the paper gives is the OpenAI Deep Research feature. When you ask it a complex question it spends several minutes searching many sources, reading them properly, connecting information from different places, and then writing a full structured report for you. That is a search agent working in real life right now.&lt;br&gt;
Three Generations of Search — Here the Course Connection Comes!&lt;br&gt;
Okay, this is the part I found most exciting in the whole paper, because the connection to our AI course is so clear here. I was genuinely surprised when I first saw it.&lt;br&gt;
Generation 1 — Old Search like Google: match keywords, rank documents, show links. This is like the uninformed search we study in class, like BFS with no knowledge at all: just explore everything blindly without any intelligent guidance.&lt;br&gt;
Generation 2 — RAG Systems: retrieve some documents, then give them to an AI to generate an answer. A little better than old search, but still no real planning about what to search for next.&lt;br&gt;
Generation 3 — Agentic Search like Deep Research: plan, search, reason, plan again, search again, combine everything and give a proper answer. This is exactly like the A* search we study in our AI class! It uses intelligence as a heuristic to guide where to search next, f(n) = g(n) + h(n). The language model itself IS the heuristic function in this case.&lt;br&gt;
When I realized this connection I actually got quite excited. We study the A* algorithm in class and honestly it feels like just another boring textbook topic we have to memorize. But then I saw the same core idea, using a heuristic to intelligently guide search instead of blindly exploring everything, appearing in a paper about the most advanced AI search systems of 2025. That was a genuinely cool moment for me personally.&lt;br&gt;
Most Surprising Finding — The Lost in the Middle Problem&lt;br&gt;
I never expected to find something this surprising in a research paper, but here it is, and it really changed how I think about AI. There is a well-known problem called the lost in the middle problem. What it means is that when you give a language model a very long document to read, it pays much more attention to information at the beginning and end of the document. Information placed in the middle gets much less attention from the model.&lt;br&gt;
So if you retrieve 20 documents and put them all together for the AI to read at once, documents 8 to 14 get much less attention than documents 1 to 3 and 17 to 20. This means how you arrange information matters as much as what information you actually retrieve in the first place. I never thought that something as simple as the position of text inside a document could affect AI performance so much. This was a genuinely surprising discovery for me.&lt;/p&gt;

&lt;p&gt;How Both Papers Connect to Our Course&lt;br&gt;
This is my favourite section to write, because the connections I found are genuinely surprising to me. These are not just surface-level connections; they are deep structural similarities between classical AI and modern research.&lt;br&gt;
Agent Types Connection: Paper 1 is literally a review of how agent architectures have evolved over time. Every single framework they review is a different implementation of the agent types we study in class: simple reflex, model-based, goal-based, utility-based. All the same concepts, just made more powerful with modern technology.&lt;br&gt;
A* Search Connection: the ReAct framework uses reasoning as a heuristic to decide the next action, the same f(n) = g(n) + h(n) structure as A* search. In Paper 2 the LLM itself acts as h(n), an intelligent estimator of how useful each search direction will be. The whole process becomes informed search instead of blind search.&lt;br&gt;
CSP Connection: the MetaGPT framework decomposes a complex task into sub-tasks for specialised agents, exactly like the CSP decomposition we study in the course. In Paper 2, query decomposition breaks complex questions into sub-questions. A direct application of the same concept from our textbook.&lt;br&gt;
Multi-Agent Environment: Paper 1 has a whole section on multi-agent systems where agents communicate and coordinate with each other. This maps directly to the multi-agent dimension we classify in our Q1 rescue robot assignment.&lt;/p&gt;

&lt;p&gt;My Experience with Google NotebookLM&lt;br&gt;
Part of this assignment was to use Google NotebookLM. Honestly, I was a little skeptical at the start. I thought it would just summarize the papers and I would not really learn anything new from it. But I was completely wrong about this.&lt;br&gt;
Manual Reading: when I first read the papers without any help it was quite difficult. The technical terminology was hard to understand, especially for someone like me who does not read research papers regularly. The parts comparing many different frameworks in Paper 1 were especially confusing; I got confused about how LangChain, AutoGen and MetaGPT all relate to each other exactly. I had to re-read the same sections many times.&lt;br&gt;
After Using NotebookLM: the experience was quite different after this. I used the question-answer feature to ask about specific things I did not understand properly. For example, I asked it "what is the difference between ReAct and Chain of Thought" and it pulled the exact relevant sections from the paper to explain them in a simple way. The audio overview feature was especially good: it creates a podcast-style summary of the paper, which is very easy to listen to while doing other things.&lt;br&gt;
Most importantly, through NotebookLM I discovered the lost in the middle problem in Paper 2, which I had completely missed during my manual reading. So NotebookLM actually helped me find something important that I had missed entirely by myself. That was a good lesson for me.&lt;br&gt;
My honest recommendation to everyone: read the paper yourself first to form your own understanding, then use NotebookLM to fill the gaps and verify your thinking. Using it without reading first is not as beneficial, because you do not have the base knowledge to ask good questions.&lt;/p&gt;

&lt;p&gt;My Video&lt;br&gt;
I also made a short 2 to 3 minute video where I explain the core ideas of both papers and share what I found most interesting about them. Link is below!&lt;br&gt;
Watch here: &lt;a href="https://youtu.be/lmns21bMldU" rel="noopener noreferrer"&gt;https://youtu.be/lmns21bMldU&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I Learnt Overall&lt;br&gt;
Before doing this assignment I genuinely thought research papers were not for undergraduate students like us. That was completely wrong thinking. These papers are actually very readable if you give them proper time and use the right tools, like NotebookLM, to help you understand the difficult parts.&lt;br&gt;
The most important thing I learnt from all this is that the classical AI we study in university (A* search, agent types, CSP) is not outdated at all. It is literally the foundation of the most advanced AI systems being built right now in 2025. Modern AI is just a more powerful version of the same concepts we already learn in our class. That is honestly quite a motivating thing to realize as a student.&lt;br&gt;
Thank you so much for reading this blog post. If you are also a CS student and found this helpful, please leave a comment below. Watch my YouTube video above for a quick explanation of the same content!&lt;/p&gt;

&lt;p&gt;References&lt;/p&gt;

&lt;p&gt;Bandi, A. et al. (2025). The Rise of Agentic AI. Future Internet, MDPI.&lt;br&gt;
Xi, Y. et al. (2025). A Survey of LLM-based Deep Search Agents. arXiv:2508.05668.&lt;br&gt;
Russell, S. and Norvig, P. (2022). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.&lt;br&gt;
Yao, S. et al. (2023). ReAct: Synergizing Reasoning and Acting in Language Models. ICLR 2023.&lt;br&gt;
Google NotebookLM — notebooklm.google.com&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computerscience</category>
      <category>devjournal</category>
      <category>learning</category>
    </item>
    <item>
      <title>From Classroom Theory to Cutting-Edge Research: What I Learnt Studying Two AI Papers</title>
      <dc:creator>M Rizwan Akbar</dc:creator>
      <pubDate>Sat, 14 Mar 2026 09:27:19 +0000</pubDate>
      <link>https://forem.com/m_rizwanakbar_8c582ac370/from-classroom-theory-to-cutting-edge-research-what-i-learnt-studying-two-ai-papers-4gb</link>
      <guid>https://forem.com/m_rizwanakbar_8c582ac370/from-classroom-theory-to-cutting-edge-research-what-i-learnt-studying-two-ai-papers-4gb</guid>
      <description>&lt;p&gt;Assalam o Alaikum everyone! My name is Rizwan, and I am studying for a BS in Computer Science at FAST National University Faisalabad. This blog is part of my AI course assignment, which was given by Dr. Bilal Jan sir. We had to read and write about research papers. Honestly, when sir first told us about this, I wasn’t very happy because I thought research papers were very boring and hard to understand. But when I actually opened and read them, I was quite surprised. Let me share what I learned from this experience.&lt;/p&gt;

&lt;p&gt;Why I Read Research Papers&lt;br&gt;
At first, I thought research papers were only for PhD students. But after reading them, I realized they can be interesting. I saw things from our AI class, like the A* algorithm, in these very new papers. That really surprised me, and I thought, "Okay, maybe these papers are actually worth reading." So, my advice to other CS students is to try reading at least one paper every semester. It really helps you understand where AI is going in the real world.&lt;/p&gt;

&lt;p&gt;Paper 1 – What is Agentic AI?&lt;br&gt;
This paper talks about Agentic AI, which is a new kind of AI. Normally, AI systems like chatbots just answer one question at a time, but Agentic AI can set goals, plan steps, and do many things without help. It can also remember what it did before. The paper also says compounding errors are a big issue, where small mistakes grow bigger with time.&lt;/p&gt;

&lt;p&gt;Paper 2 – New Search Technology with AI&lt;br&gt;
This paper talks about Deep Search Agents. Unlike Google, these AI systems understand what you are looking for. They plan the search, read information, and give you a complete answer. It also talks about three generations of search systems: old keyword-based search, newer retrieval-augmented (RAG) systems, and advanced systems like OpenAI’s Deep Research. This is related to the A* search algorithm we study in class, where we use smart thinking to guide the search.&lt;/p&gt;

&lt;p&gt;How the Papers Connect to What We Learn&lt;br&gt;
Both papers show that the concepts we learned in class, like A* search and agent types, are important in real-world AI. These concepts are used in the latest AI systems.&lt;/p&gt;

&lt;p&gt;My Experience with Google NotebookLM&lt;br&gt;
At first, reading research papers was difficult. But after using Google NotebookLM, I could understand things better. It helped me find important points that I missed when reading on my own.&lt;/p&gt;

&lt;p&gt;What I Learned Overall&lt;br&gt;
Reading research papers is not just for PhD students; they show how what we learn in class is used in real-world AI. It’s motivating to know that our studies are the foundation of the most advanced AI today.&lt;/p&gt;

&lt;p&gt;Thanks for reading my blog! I hope this helps other CS students.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>computerscience</category>
      <category>learning</category>
    </item>
    <item>
      <title>From Classroom Theory to Cutting-Edge Research: What I Learnt Studying Two AI Papers</title>
      <dc:creator>M Rizwan Akbar</dc:creator>
      <pubDate>Sat, 14 Mar 2026 02:40:07 +0000</pubDate>
      <link>https://forem.com/m_rizwanakbar_8c582ac370/from-classroom-theory-to-cutting-edge-research-what-i-learnt-studying-two-ai-papers-2oaf</link>
      <guid>https://forem.com/m_rizwanakbar_8c582ac370/from-classroom-theory-to-cutting-edge-research-what-i-learnt-studying-two-ai-papers-2oaf</guid>
      <description>&lt;p&gt;Hello everyone! My name is Rizwan and I am a BS Computer Science student at FAST National University. In our Artificial Intelligence course with Dr. Bilal Jan, I was given an assignment to read recent research papers, analyze them, and write a blog connecting the findings to what we study in class. This post is that blog — but honestly, it ended up being one of the most useful things I did this semester. These two papers completely changed how I think about AI agents and search algorithms.&lt;/p&gt;

&lt;p&gt;Why Research Papers Matter for CS Students&lt;/p&gt;

&lt;p&gt;When we study AI in university, we mostly learn from textbooks written years ago. We learn about BFS, DFS, A* search, goal-based agents, utility functions — all important concepts. But the world of AI is moving so fast that what is happening in research labs right now is completely different from what is in our textbooks.&lt;/p&gt;

&lt;p&gt;When I first opened these research papers I honestly expected them to be very boring and very hard to understand. But something interesting happened: I kept recognizing concepts from our course inside these very modern papers. The A* search algorithm we studied in class appeared inside a paper about ChatGPT-like systems. The agent types we classify in assignments were being discussed in papers about autonomous systems. That connection surprised me a lot and made me want to understand more.&lt;/p&gt;

&lt;p&gt;This is what I want to share in this blog — not just a summary of two papers, but the actual connections I found between our textbook AI and modern AI research.&lt;/p&gt;

&lt;p&gt;Paper 1: The Rise of Agentic AI (2025)&lt;/p&gt;

&lt;p&gt;Paper Title: "The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications, Evaluation Metrics, and Challenges"&lt;/p&gt;

&lt;p&gt;Authors: Bandi, A., Kongari, B., Naguru, R., Pasnoor, S., Vilipala, S.V.&lt;/p&gt;

&lt;p&gt;Published in: Future Internet, MDPI, September 2025. Reviewed 143 primary studies.&lt;/p&gt;

&lt;p&gt;What is this paper actually about?&lt;/p&gt;

&lt;p&gt;This paper is a big review paper — meaning it did not do one experiment but instead read and analyzed 143 other research papers to build a complete picture of what Agentic AI means. More than 90% of those papers were published in 2024 and 2025, which shows just how new this field is. Even the search term "agentic AI" barely existed before 2024, then suddenly became extremely popular in mid-2025.&lt;/p&gt;

&lt;p&gt;So what is Agentic AI? The paper defines it as AI systems that go beyond just answering questions. Traditional AI, even a modern chatbot, basically just responds to one input and gives one output. Agentic AI is different: it can set its own goals, plan multiple steps ahead, use external tools, remember past actions, and keep working until a complex task is actually done. Think of the difference between answering "what is the weather?" versus "book me the cheapest flight to Lahore next Friday and send my boss a calendar invite".&lt;/p&gt;

&lt;p&gt;The Four Core Capabilities of an Agentic System&lt;/p&gt;

&lt;p&gt;The paper identifies that all agentic systems need four core capabilities:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Planning — Ability to break a goal into steps and decide what to do next. In our rescue robot assignment, this is the robot deciding which survivor to rescue first based on battery and distance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory — Remembering past actions and their results to avoid repeating mistakes. Like the robot remembering which zones it already searched so it does not go back.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reflection — Evaluating its own performance and adjusting its approach. Like the robot realizing its original path is flooded and replanning mid-mission.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Goal Pursuit — Continuing to work toward an objective without constant human instruction. Like the robot autonomously navigating the flood zone to locate all survivors.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
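&lt;p&gt;To make these four capabilities concrete, here is a tiny toy sketch in Python. Everything in it (the step names and helper functions) is my own invention for illustration, not code from the paper or from any real framework.&lt;/p&gt;

```python
# Toy sketch: the four agentic capabilities as one control loop.
# All names here are invented for illustration only.

def plan(goal_steps, memory):
    # Planning: work out which steps toward the goal remain.
    done = set(memory)
    return [s for s in goal_steps if s not in done]

def reflect(result):
    # Reflection: judge whether the last action actually worked.
    return result == "ok"

def run_agent(goal_steps, act):
    memory = []                      # Memory: record of completed steps
    while plan(goal_steps, memory):  # Goal pursuit: loop until done
        step = plan(goal_steps, memory)[0]
        if reflect(act(step)):
            memory.append(step)      # remember success so it is never redone
        # on failure, the loop simply replans and retries the step
    return memory

steps = ["search_zone_a", "search_zone_b", "rescue_survivor"]
print(run_agent(steps, act=lambda s: "ok"))  # all three steps, in order
```

&lt;p&gt;A real agentic framework would replace plan() and reflect() with LLM calls; the shape of the control flow stays the same.&lt;/p&gt;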

&lt;p&gt;The Frameworks the Paper Covers&lt;/p&gt;

&lt;p&gt;The paper reviews many real agentic AI frameworks that developers actually use today. The most well-known ones include LangChain (which connects LLMs to tools and memory), AutoGen (Microsoft's framework for multi-agent conversations), MetaGPT (which organizes agents into roles like a software company), and CrewAI (which lets you create crews of specialized agents working together).&lt;/p&gt;

&lt;p&gt;The paper also discusses the ReAct framework (Reason + Act), which is used in many modern agentic systems. ReAct is basically an agent that alternates between thinking about what to do and then actually doing it, then thinking again based on what it observed. This loop continues until the task is complete. When I read this I immediately thought: this is exactly the perception-action loop we learn about in our AI course. Same concept, much more powerful implementation.&lt;/p&gt;
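&lt;p&gt;The ReAct alternation can be sketched as a loop. This is only the shape of the idea: the reason() step below is a hard-coded lookup table I made up, where a real ReAct system would call an LLM.&lt;/p&gt;

```python
# Minimal shape of a ReAct-style loop: reason, act, observe, repeat.
# The "reasoning" is faked with a lookup table for illustration.

def reason(observation):
    # a real system asks an LLM: given what I just saw, what should I do next?
    table = {"start": "search", "found_docs": "read", "read_done": "answer"}
    return table.get(observation, "stop")

def act(action):
    # invented toy environment: each action produces an observation
    outcomes = {"search": "found_docs", "read": "read_done", "answer": "done"}
    return outcomes[action]

trace, observation = [], "start"
while True:
    action = reason(observation)       # Reason: think about what to do
    if action == "stop":
        break
    observation = act(action)          # Act: do it and observe the result
    trace.append((action, observation))

print(trace)   # three reason-act cycles, then the loop stops
```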

&lt;p&gt;Big Challenges the Paper Identifies&lt;/p&gt;

&lt;p&gt;Compounding Errors: This was the most interesting finding for me personally. In agentic systems, small mistakes in early steps do not stay small — they grow. If an agent makes a wrong assumption in step 2, by step 8 that wrong assumption has affected every decision made in between, and the final output can be completely wrong. The paper calls this "error propagation." I found this very relatable — in our Q1 assignment, if the rescue robot picks the wrong initial path, it wastes battery on all the steps that follow.&lt;/p&gt;
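&lt;p&gt;A quick back-of-envelope calculation shows why this compounding matters. The 95 percent figure below is my own illustrative number, not from the paper: if every step is independently right with probability p, the whole n-step chain is right only with probability p to the power n.&lt;/p&gt;

```python
# If each step succeeds independently with probability p, an n-step
# chain succeeds with p ** n. Even a quite reliable per-step rate decays fast.
p = 0.95
for n in (1, 2, 8, 20):
    print(f"{n:2d} steps: {p ** n:.0%} chance every step was right")
# an 8-step chain is right only about 66% of the time; 20 steps, about 36%
```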

&lt;p&gt;Reliability and Safety: When an AI agent has the ability to actually do things in the world — send emails, write code, make purchases — reliability becomes critical. Current agentic systems still fail in unpredictable ways that are difficult to test for in advance.&lt;/p&gt;

&lt;p&gt;Evaluation: How do you know if an agentic system is actually good? For agentic systems that run for many steps, what counts as correct? The paper says this is an open research problem.&lt;/p&gt;

&lt;p&gt;Paper 2: A Survey of LLM-based Deep Search Agents (2025)&lt;/p&gt;

&lt;p&gt;Paper Title: "A Survey of LLM-based Deep Search Agents: Paradigm, Optimization, Evaluation, and Challenges"&lt;/p&gt;

&lt;p&gt;Authors: Xi, Y. et al.&lt;/p&gt;

&lt;p&gt;Published: arXiv:2508.05668, August 2025. First systematic survey of search agents.&lt;/p&gt;

&lt;p&gt;What is this paper about?&lt;/p&gt;

&lt;p&gt;This paper is about a very specific type of agentic AI — search agents. We all use search engines every day. But the way Google search works is actually quite simple — you type some keywords, it finds documents containing those keywords, and shows them ranked by relevance. This paper is about a completely different kind of search: agents that actually understand what you want to find, plan a search strategy, search multiple times in multiple places, reason about what they find, and then synthesize it all into a complete answer.&lt;/p&gt;

&lt;p&gt;The best real-world example the paper gives is OpenAI Deep Research. When you ask it a complex research question, it does not just return links: it spends several minutes searching dozens of sources, reading them, connecting information across them, and writing a structured report. This is a search agent in action.&lt;/p&gt;

&lt;p&gt;The Evolution of Search — Three Generations&lt;/p&gt;

&lt;p&gt;Generation 1 — Traditional Search (Google): Match keywords, rank documents, return links. This is like uninformed search in our course — no knowledge of what is relevant, just finds everything. Cannot understand intent, cannot reason, cannot synthesize.&lt;/p&gt;

&lt;p&gt;Generation 2 — RAG Systems: Retrieve documents, feed to LLM, generate answer. Like BFS but with an answer generator at the end. Still no planning about what to search for next.&lt;/p&gt;

&lt;p&gt;Generation 3 — Search Agents (Deep Research): Plan, search, reason, plan again, search again, synthesize. This is like A* Search in our course — uses intelligence to guide where to search next, exactly like f(n) = g(n) + h(n). The LLM itself IS the heuristic function.&lt;/p&gt;

&lt;p&gt;When I saw this connection I got genuinely excited. We study A* search in class and it feels like a textbook algorithm. But here it is, the same core idea — using a heuristic to intelligently guide search rather than blindly exploring — appearing in a paper about the most advanced AI search systems in the world.&lt;/p&gt;
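&lt;p&gt;Since the analogy keeps coming up, here is textbook A* itself on a tiny graph I made up, so the f(n) = g(n) + h(n) structure is visible in code. In a deep search agent, the role of h(n) is played by the LLM's judgment of how promising each direction looks.&lt;/p&gt;

```python
import heapq

# Classic A* on a tiny invented graph: edge costs are g, the table h
# estimates the remaining cost to the goal, and the frontier is ordered
# by f = g + h.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 5, "B": 1, "G": 0}   # heuristic: estimated cost to goal

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g >= best_g.get(node, float("inf")):
            continue                 # node already reached more cheaply
        best_g[node] = g
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

print(a_star("S", "G"))   # (5, ['S', 'B', 'G'])
```

&lt;p&gt;The heuristic steers the search toward B even though the first edge to A is cheaper, exactly the kind of informed guidance the paper attributes to the LLM.&lt;/p&gt;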

&lt;p&gt;Most Interesting Finding — The Lost in the Middle Problem&lt;/p&gt;

&lt;p&gt;The most surprising thing I found in this paper is the "lost in the middle" problem. Research shows that when you give an LLM a very long document to read, it pays much more attention to information at the beginning and end compared to information in the middle. So if you retrieve 20 documents and put them all together for the LLM to read, the information in documents 8 to 14 gets less attention than documents 1 to 3 and 17 to 20.&lt;/p&gt;

&lt;p&gt;This means how you arrange retrieved information matters as much as what information you retrieve. I never expected that something as simple as where in the document information is located could affect AI performance so significantly.&lt;/p&gt;
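&lt;p&gt;One practical trick that follows from this (my own sketch of the idea, not something the survey prescribes) is to reorder retrieved documents so the strongest ones sit at the edges of the prompt and the weakest sit in the middle, where attention is lowest:&lt;/p&gt;

```python
# Reorder documents (given best-first) so top ranks land at the start
# and end of the prompt, and the weakest land in the middle.

def edge_first_order(docs_best_first):
    front, back = [], []
    for i, doc in enumerate(docs_best_first):
        if i % 2 == 0:
            front.append(doc)       # ranks 1, 3, 5, ... go at the front
        else:
            back.append(doc)        # ranks 2, 4, 6, ... go at the back
    return front + back[::-1]

docs = [f"doc{r}" for r in range(1, 8)]          # doc1 is most relevant
print(edge_first_order(docs))
# doc1 and doc2 end up at the two edges; the weakest docs sit in the middle
```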

&lt;p&gt;How Both Papers Connect to Our AI Course&lt;/p&gt;

&lt;p&gt;This is the section I find most exciting because the connections are genuinely surprising. These are not surface-level connections — they are deep structural similarities between classical AI concepts and modern cutting-edge research.&lt;/p&gt;

&lt;p&gt;Agent Types Connection: Paper 1 is literally a review of how agent architectures have evolved. Every framework they review is a different implementation of agent types we study in class — simple reflex, model-based, goal-based, utility-based.&lt;/p&gt;

&lt;p&gt;A* Search Connection: The ReAct framework uses reasoning as a heuristic to decide what action to take next — same f(n) = g(n) + h(n) structure as A*. In Paper 2, deep search agents use the LLM as h(n) — an intelligent estimator of how useful each search direction is — making the whole process informed rather than blind.&lt;/p&gt;

&lt;p&gt;CSP Connection: Multi-agent frameworks like MetaGPT decompose complex tasks into sub-tasks assigned to specialized agents — exactly like CSP decomposition. In Paper 2, query decomposition breaks complex questions into sub-questions — a direct application of constraint decomposition.&lt;/p&gt;
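&lt;p&gt;As a toy illustration of what query decomposition looks like (entirely invented; real search agents use an LLM to produce the sub-questions), a comparison question can split into one sub-question per entity plus a final synthesis step:&lt;/p&gt;

```python
# Toy query decomposition: one sub-question per entity, then a synthesis
# step. The splitting rule here is hand-written; real agents ask an LLM.

def decompose(question, entities):
    subs = [f"What does {e} offer for this use case?" for e in entities]
    subs.append(f"Using those answers, answer: {question}")
    return subs

question = "Which framework fits multi-agent work better?"
for sub in decompose(question, ["LangChain", "AutoGen"]):
    print(sub)   # two entity sub-questions, then one synthesis question
```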

&lt;p&gt;Multi-Agent Environment Connection: Paper 1 dedicates an entire section to multi-agent agentic systems where agents communicate, negotiate, and coordinate — directly maps to the multi-agent environment dimension we classify in our AI assignment.&lt;/p&gt;

&lt;p&gt;Partially Observable Environment Connection: Agentic systems must maintain an internal world model because they cannot see all relevant information — same as the GB rescue robot in our assignment that operates in a partially observable flood zone environment.&lt;/p&gt;

&lt;p&gt;My Experience with Google NotebookLM&lt;/p&gt;

&lt;p&gt;Part of this assignment was to use Google NotebookLM to help understand the papers. I want to be honest — I was skeptical at first. I thought it would just summarize the papers and I would not really learn anything. I was wrong.&lt;/p&gt;

&lt;p&gt;Manual Reading Experience: When I first read the papers manually, it was quite difficult. The technical terminology was hard to understand. The comparison of so many frameworks was especially confusing. I had to re-read the same sections many times and still was not 100% sure I had understood correctly.&lt;/p&gt;

&lt;p&gt;After NotebookLM: The experience was completely different. I used the question-answer feature to ask specific questions about parts I did not understand. NotebookLM helped me generate a concept map of all agent frameworks and see how AutoGPT, LangChain, MetaGPT and CrewAI relate to each other. The audio overview feature explained Paper 2 architectures very clearly — it was like listening to a podcast about the paper.&lt;/p&gt;

&lt;p&gt;Most importantly, using NotebookLM I discovered the lost-in-the-middle problem in Paper 2 — I had missed it completely during manual reading. That was a good lesson about not relying only on first impressions.&lt;/p&gt;

&lt;p&gt;My recommendation: always read the paper yourself first to form your own understanding, then use NotebookLM to fill the gaps. Using it without reading first means you do not have the base knowledge to ask good questions.&lt;/p&gt;

&lt;p&gt;My Video Explanation&lt;/p&gt;

&lt;p&gt;I also recorded a short 2-3 minute video where I explain the core ideas of both papers and share what I found most interesting. Watch it below!&lt;/p&gt;

&lt;iframe src="https://www.youtube.com/embed/lmns21bMldU"&gt;&lt;/iframe&gt;

&lt;p&gt;What I Took Away From This Experience&lt;/p&gt;

&lt;p&gt;Before doing this assignment I thought research papers were only for PhD students and professors. Now I think every CS student should try reading at least one or two papers every semester. Here is why:&lt;/p&gt;

&lt;p&gt;Classical AI is not obsolete — it is the foundation. A* search, agent types, CSP — these concepts appear directly in state-of-the-art 2025 research papers. Understanding them deeply in university gives you the ability to understand where the cutting edge is actually coming from.&lt;/p&gt;

&lt;p&gt;The field is moving unbelievably fast. More than 90% of all papers on agentic AI were published in just 2024 and 2025. Staying connected to research is the only way to not fall behind.&lt;/p&gt;

&lt;p&gt;The unsolved problems are the most interesting part. Compounding errors. The lost-in-the-middle problem. How to evaluate an agent that runs for 50 steps. These open challenges are opportunities — some of us reading this might contribute to solving them one day.&lt;/p&gt;

&lt;p&gt;References&lt;/p&gt;

&lt;p&gt;Bandi, A. et al. (2025). The Rise of Agentic AI. Future Internet, MDPI.&lt;/p&gt;

&lt;p&gt;Xi, Y. et al. (2025). A Survey of LLM-based Deep Search Agents. arXiv:2508.05668.&lt;/p&gt;

&lt;p&gt;Russell, S. &amp;amp; Norvig, P. (2022). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.&lt;/p&gt;

&lt;p&gt;Yao, S. et al. (2023). ReAct: Synergizing Reasoning and Acting in Language Models. ICLR 2023.&lt;/p&gt;

&lt;p&gt;Google NotebookLM — notebooklm.google.com&lt;/p&gt;

</description>
      <category>ai</category>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
