<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Muhammad Hamza Younas</title>
    <description>The latest articles on Forem by Muhammad Hamza Younas (@muhammad_hamzayounas_e6b).</description>
    <link>https://forem.com/muhammad_hamzayounas_e6b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3562316%2Fa3fb4cd9-f99a-4191-ba4e-51cc606f91c0.jpg</url>
      <title>Forem: Muhammad Hamza Younas</title>
      <link>https://forem.com/muhammad_hamzayounas_e6b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/muhammad_hamzayounas_e6b"/>
    <language>en</language>
    <item>
      <title>AI Tutors That Actually Get You? It's Happening</title>
      <dc:creator>Muhammad Hamza Younas</dc:creator>
      <pubDate>Mon, 13 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/muhammad_hamzayounas_e6b/ai-tutors-that-actually-get-you-its-happening-j47</link>
      <guid>https://forem.com/muhammad_hamzayounas_e6b/ai-tutors-that-actually-get-you-its-happening-j47</guid>
      <description>&lt;p&gt;Alright, grab your coffee (or pint, I won't judge). Let's chat about something that's been seriously blowing my mind lately: autonomous AI agents for hyper-personalised education. Sounds fancy, right? But trust me, it's way cooler than the name suggests.&lt;/p&gt;
&lt;h2&gt;Remember Textbooks?&lt;/h2&gt;
&lt;p&gt;I don't know about you, but my school days involved a lot of slogging through textbooks and just hoping something stuck. We were all forced into the same mould, regardless of individual learning styles. If you didn't 'get it' the way the textbook explained it, tough luck. You were left behind. It was a terrible system.&lt;/p&gt;
&lt;p&gt;That's where AI comes in. Imagine an AI tutor that adapts to &lt;em&gt;you&lt;/em&gt;, not the other way around. One that learns your strengths, weaknesses, and favourite ways of absorbing information. Sounds like science fiction? It's not. It's already happening.&lt;/p&gt;
&lt;h2&gt;What &lt;em&gt;Are&lt;/em&gt; Autonomous AI Agents, Anyway?&lt;/h2&gt;
&lt;p&gt;Okay, let's break down the jargon. An autonomous AI agent is basically a piece of software that can act independently to achieve a specific goal. In this case, the goal is to help you learn.&lt;/p&gt;
&lt;p&gt;Think of it like this: instead of just passively receiving information, you're interacting with a dynamic system. The AI agent observes your progress, identifies where you're struggling, and adjusts its teaching methods accordingly. It can provide different explanations, offer practice problems tailored to your weaknesses, or even connect you with other learners who are facing similar challenges.&lt;/p&gt;
&lt;p&gt;We're not talking about static chatbots here. These are agents that learn and evolve over time, becoming more effective at teaching &lt;em&gt;you&lt;/em&gt; specifically. It's like having a personal tutor who knows you better than you know yourself (well, almost!).&lt;/p&gt;
&lt;h2&gt;LLMs: The Brains Behind the Operation&lt;/h2&gt;
&lt;p&gt;So, what makes these AI agents so powerful? The answer is Large Language Models (LLMs). You've probably heard of them. They're the same tech that powers ChatGPT and other conversational AI tools. LLMs are trained on massive amounts of text data, which allows them to understand and generate human-like language. This makes them perfect for building AI tutors that can explain complex concepts in a clear and concise way.&lt;/p&gt;
&lt;p&gt;But it's not just about language. LLMs can also be used to analyse student performance, identify patterns, and personalise the learning experience. They can even generate new learning materials on the fly, ensuring that you always have access to the most relevant and up-to-date information. We've actually covered what might be next for these models in &lt;a href="https://dev.to/blog/large-language-models-what-s-next-in-2025-1760308756055"&gt;Large Language Models: What's Next in 2025?&lt;/a&gt; - it's worth a read if you want to dive deeper into the future of LLMs.&lt;/p&gt;
&lt;h2&gt;My Own Adventures in AI Tutor Development&lt;/h2&gt;
&lt;p&gt;I've been playing around with this stuff for a while now, and I've got to say, it's incredibly exciting. I ran into this last month when trying to build a simple maths tutor for my nephew. He was struggling with fractions, and I thought it would be a fun project to build something that could help him.&lt;/p&gt;
&lt;p&gt;I initially tried a rule-based system, where I manually defined all the possible scenarios and responses. It was a disaster! I wasted a week on it. It quickly became unmanageable, and it wasn't very good at adapting to my nephew's specific needs. I realised I had to use a different approach. LLMs it was!&lt;/p&gt;
&lt;p&gt;Here's a simplified example of how I used an LLM to generate explanations for fractions:&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;# Legacy OpenAI Python SDK (pre-1.0) Completion API
import openai

openai.api_key = "YOUR_API_KEY"  # Replace with your actual API key

def generate_explanation(fraction, concept):
    prompt = (
        f"Explain the concept of '{concept}' in relation to the fraction "
        f"{fraction}. Keep it simple and easy to understand."
    )
    response = openai.Completion.create(
        engine="text-davinci-003",  # Or your preferred LLM
        prompt=prompt,
        max_tokens=150,
        n=1,
        stop=None,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

fraction = "1/2"
concept = "equivalent fractions"
explanation = generate_explanation(fraction, concept)
print(explanation)&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 &lt;strong&gt;Important:&lt;/strong&gt; You'll need an OpenAI API key to run this code. And remember to keep your API key safe! Don't commit it to your public code repositories.&lt;/p&gt;
&lt;p&gt;This is a very basic example, of course. But it shows you the power of LLMs to generate personalised explanations. The key is to provide the LLM with the right context and instructions. The &lt;code&gt;temperature&lt;/code&gt; parameter controls the randomness of the output. A lower temperature will result in more predictable and consistent explanations, while a higher temperature will result in more creative and varied explanations.&lt;/p&gt;
&lt;p&gt;I then built this into a simple web app using Flask. The app allowed my nephew to enter a fraction and a concept he was struggling with, and the LLM would generate an explanation in real time. It wasn't perfect, but it was a huge improvement over the rule-based s...&lt;/p&gt;
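
To give a flavour of the wiring, here's a minimal sketch of that kind of Flask app. The route name and form fields are illustrative, and the generate_explanation stub stands in for the LLM helper shown earlier:

```python
# Minimal sketch of a Flask app serving LLM-generated explanations.
# Route and field names are illustrative; generate_explanation is a
# stand-in for the real LLM call shown earlier.
from flask import Flask, request

app = Flask(__name__)

def generate_explanation(fraction, concept):
    # Stand-in for the actual LLM call
    return f"Explanation of {concept} for {fraction}"

@app.route("/explain", methods=["POST"])
def explain():
    # Read the fraction and concept the learner submitted
    fraction = request.form.get("fraction", "1/2")
    concept = request.form.get("concept", "equivalent fractions")
    # Returning a dict makes Flask send a JSON response
    return {"explanation": generate_explanation(fraction, concept)}

if __name__ == "__main__":
    app.run(debug=True)
```

Swapping the stub for the real LLM call is the only change needed to make this "live"; everything else is plain request handling.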

</description>
      <category>ai</category>
    </item>
    <item>
      <title>GitHub Copilot X Tricks I Wish I Knew Sooner</title>
      <dc:creator>Muhammad Hamza Younas</dc:creator>
      <pubDate>Mon, 13 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/muhammad_hamzayounas_e6b/github-copilot-x-tricks-i-wish-i-knew-sooner-3g3a</link>
      <guid>https://forem.com/muhammad_hamzayounas_e6b/github-copilot-x-tricks-i-wish-i-knew-sooner-3g3a</guid>
      <description>&lt;p&gt;Alright, grab a coffee (or a pint, depending on the time), and let's chat about GitHub Copilot X. I've been using it pretty heavily for the last few months, and honestly, it's changed how I code. But it's not just about accepting every suggestion it throws at you. To really get the most out of it, you need to understand its quirks and learn some advanced techniques. This tripped me up at first, but after some experimentation, I realised how powerful it could be.&lt;/p&gt;
&lt;h2&gt;Why Bother with Advanced Techniques?&lt;/h2&gt;
&lt;p&gt;Look, here's the thing: Copilot's great out of the box. It saves time on boilerplate and simple tasks. But if you're working on complex projects, relying on the basic features will only get you so far. You'll end up spending more time correcting its mistakes than actually coding. These advanced techniques let you steer Copilot in the right direction, ensuring it generates code that's not just syntactically correct but also aligned with your project's architecture and coding style. Plus, let's be real, AI-assisted coding is the future, and it's only going to get more deeply integrated into our workflows. Learning these skills now will give you a serious edge.&lt;/p&gt;
&lt;h2&gt;My First Mistake: Trusting It Too Much&lt;/h2&gt;
&lt;p&gt;I jumped in headfirst, accepting almost every suggestion. Big mistake. I quickly realised that Copilot wasn't always producing the &lt;em&gt;best&lt;/em&gt; code, just the &lt;em&gt;easiest&lt;/em&gt; code. It was often repetitive and didn't consider the bigger picture. I ended up with a codebase that was functional but messy and hard to maintain. That's when I knew I needed to change my approach.&lt;/p&gt;
&lt;h2&gt;Technique 1: Prompt Engineering – Guiding the AI&lt;/h2&gt;
&lt;p&gt;This is probably the most important thing I've learned. Copilot responds to prompts, just like any other AI model. The better your prompts, the better the output. It's all about giving it enough context to understand what you want.&lt;/p&gt;
&lt;h3&gt;Be Specific&lt;/h3&gt;
&lt;p&gt;Don't just write a vague comment like &lt;code&gt;// Create a function to fetch data&lt;/code&gt;. Instead, be precise: &lt;code&gt;// Create a function called fetchData that fetches data from the /api/users endpoint and returns a JSON object&lt;/code&gt;. See the difference? The more information you provide, the better Copilot can understand your intentions.&lt;/p&gt;
&lt;h3&gt;Use Docstrings&lt;/h3&gt;
&lt;p&gt;Docstrings are your friend. They not only document your code but also provide valuable context for Copilot. I make it a habit to write detailed docstrings &lt;em&gt;before&lt;/em&gt; writing the actual code. Copilot can then use this information to generate the function body.&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;def calculate_average(numbers: list[float]) -&amp;gt; float:
    """Calculates the average of a list of numbers.

    Args:
        numbers: A list of floating-point numbers.

    Returns:
        The average of the numbers in the list. Returns 0 if the list is empty.
    """
    # Copilot will generate the function body based on the docstring
    pass&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
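
For reference, here's a plausible body for that stub, written out by hand rather than captured from the tool, since Copilot's completions vary. It's the kind of implementation the docstring steers it towards:

```python
# A plausible completion for the docstring-driven stub above -- the kind of
# body Copilot tends to produce, written here by hand for illustration.
def calculate_average(numbers: list[float]) -> float:
    """Calculates the average of a list of numbers.

    Args:
        numbers: A list of floating-point numbers.

    Returns:
        The average of the numbers in the list. Returns 0 if the list is empty.
    """
    # Guard against the empty-list case promised by the docstring
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)

print(calculate_average([1.0, 2.0, 3.0]))  # 2.0
```

Notice how every detail of the body (the empty-list guard included) is something the docstring spelled out; a vaguer docstring would leave that behaviour to chance.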
&lt;/p&gt;&lt;h3&gt;Example: Refactoring with Prompts&lt;/h3&gt;
&lt;p&gt;I ran into this last month when I had a huge function that needed refactoring. Instead of trying to rewrite it from scratch, I used Copilot to break it down into smaller, more manageable functions. I added comments like &lt;code&gt;// Extract the data validation logic into a separate function called validateData&lt;/code&gt;, and Copilot generated the new &lt;code&gt;validateData&lt;/code&gt; function, making the original function much cleaner.&lt;/p&gt;
&lt;h2&gt;Technique 2: Leveraging Context – Feeding It Examples&lt;/h2&gt;
&lt;p&gt;Copilot learns from your existing code. If you have a consistent coding style and architecture, it will pick up on it and generate code that fits in seamlessly. The key is to provide it with enough context.&lt;/p&gt;
&lt;h3&gt;Open Relevant Files&lt;/h3&gt;
&lt;p&gt;Make sure the files that contain related code are open in your editor. This gives Copilot a broader understanding of your project's structure and dependencies.&lt;/p&gt;
&lt;h3&gt;Use Existing Patterns&lt;/h3&gt;
&lt;p&gt;If you have a specific pattern for handling errors or logging, make sure Copilot is aware of it. You can do this by showing it examples of how you've implemented these patterns in other parts of your code. For example, if you have a custom error handling function, use it in a few places, and Copilot will start suggesting it automatically.&lt;/p&gt;
&lt;h3&gt;Example: Consistent Error Handling&lt;/h3&gt;
&lt;p&gt;Let's say you have a function called &lt;code&gt;handleError&lt;/code&gt; that logs errors and displays a user-friendly message. To make sure Copilot uses this function consistently, you can do something like this:&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;async function fetchData(url) {
  try {
    const response = await fetch(url);
    const data = await response.json();
    return data;
  } catch (error) {
    handleError(error, 'Failed to fetch data');
    return null;
  }
}

// Now, when you write similar functions, Copilot will suggest using handleError&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
&lt;/p&gt;&lt;h2&gt;Technique 3: Fine-Tuning Suggestions – The Art of Rejection&lt;/h2&gt;
&lt;p&gt;Not every suggestion is a good one. It's crucial to learn how to reject suggestions and guide Copilot towards the right solution. This is where the "X" in Copilot X really shines, with the ability to explain and refine suggestions.&lt;/p&gt;

</description>
      <category>techtrends</category>
    </item>
  </channel>
</rss>
