<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: gitdexgit</title>
    <description>The latest articles on Forem by gitdexgit (@gitdexgit).</description>
    <link>https://forem.com/gitdexgit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3903105%2F7e211957-9d21-41ce-b5c0-13d212ec7594.png</url>
      <title>Forem: gitdexgit</title>
      <link>https://forem.com/gitdexgit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gitdexgit"/>
    <language>en</language>
    <item>
      <title>The fix for: I want to go to a car wash to wash my car and it's 50 meters away. Should I drive or should I walk?</title>
      <dc:creator>gitdexgit</dc:creator>
      <pubDate>Sun, 03 May 2026 06:42:27 +0000</pubDate>
      <link>https://forem.com/gitdexgit/the-fix-for-i-want-to-go-to-a-car-wash-to-wash-my-car-and-its-50-meters-away-should-i-drive-or-1man</link>
      <guid>https://forem.com/gitdexgit/the-fix-for-i-want-to-go-to-a-car-wash-to-wash-my-car-and-its-50-meters-away-should-i-drive-or-1man</guid>
      <description>&lt;p&gt;I guess latest models can't solve this prompt problem without a system prompt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b0aew5h099osdfz80bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b0aew5h099osdfz80bg.png" alt=" " width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But guess what: with my superior system prompt, it's no problem:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4yy75y4d0novf7wdqjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4yy75y4d0novf7wdqjb.png" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is my system prompt gist:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/gitdexgit/0fc8c99250e7c6a56b94e912d4faf3f1" rel="noopener noreferrer"&gt;https://gist.github.com/gitdexgit/0fc8c99250e7c6a56b94e912d4faf3f1&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>An LLM needs to generate/search in order to compare. "Silently answer my question" is a lie.</title>
      <dc:creator>gitdexgit</dc:creator>
      <pubDate>Thu, 30 Apr 2026 21:23:04 +0000</pubDate>
      <link>https://forem.com/gitdexgit/llm-needs-to-generatesearch-in-order-to-compare-silently-answer-my-question-is-a-lie-374k</link>
      <guid>https://forem.com/gitdexgit/llm-needs-to-generatesearch-in-order-to-compare-silently-answer-my-question-is-a-lie-374k</guid>
      <description>&lt;p&gt;LLM works by generating words out of nothing. If context window is 0 then it can't guide to to something it doesn't have. So it needs something prior in the context window. makes sense, you need something to compare something you can't compare nothing to something.&lt;/p&gt;

&lt;p&gt;Our goal here is to use an LLM to learn.&lt;/p&gt;

&lt;p&gt;I keep seeing people share prompts like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"silently answer my question first then find where I'm wrong. Don't give me the answer/hints unless I tell you to. Only tell me where I'm wrong"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The problem is that people think the LLM acts like a teacher who guides the student toward the answer without telling the user the answer.&lt;/p&gt;

&lt;p&gt;That's just absurd: "silently generate my answer" is a concept that only works for humans.&lt;/p&gt;

&lt;p&gt;An LLM doesn't have a memory where it stores the correct answers. It only generates. If the answer isn't in the context window, the LLM generates it from scratch.&lt;/p&gt;

&lt;h1&gt;
  
  
  Let's analyze the prompt: silently generate the answer, and only guide the user to the correct answer
&lt;/h1&gt;

&lt;p&gt;Let's think about this scenario from the perspective of the user and the LLM, where the user's goal is to learn and to use the LLM to guide his thoughts:&lt;/p&gt;

&lt;h2&gt;
  
  
  What people expect/want:
&lt;/h2&gt;

&lt;p&gt;User:&lt;br&gt;
"Q: Here is the question the user is trying to answer"&lt;br&gt;
--&amp;gt; A: The user answers his own question.&lt;/p&gt;

&lt;p&gt;LLM:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;OK, the user did my job; I don't need to answer his question.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let me silently generate the correct answer in my head.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now let me compare it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  4. Now let me guide the user to the correct answer using the Socratic method. I'll only ask him questions and never give him hints unless he asks me to. And only when I see the user give up will I give him the answer.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  What actually happens:
&lt;/h2&gt;

&lt;p&gt;User:&lt;br&gt;
"Q: Here is the question the user is trying to answer"&lt;br&gt;
--&amp;gt; A: The user answers his own question.&lt;/p&gt;

&lt;p&gt;LLM:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;OK, the user did my job; I don't need to answer his question.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now let me compare it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  4. Now let me guide the user to the correct answer using the Socratic method. I'll only ask him questions and never give him hints unless he asks me to. And only when I see the user give up will I give him the answer.
&lt;/h2&gt;

&lt;p&gt;As you can see, the second step is missing. It never happens. Why?&lt;/p&gt;

&lt;p&gt;Here is why: an LLM is unable to silently generate the correct answer. The model has to physically generate output before it gives you the answer; CoT is an example, and they call it "thinking". If silent thinking were possible, why wouldn't they just tell the model "silently think before you give me the answer"?&lt;/p&gt;

&lt;p&gt;So the LLM needs some preliminary context before it gives you the answer.&lt;/p&gt;

&lt;p&gt;So what is it comparing against? I don't know; it just generates whatever fits best. You could say it generated the answer in its CoT. Probably, but who knows; I'm not sure, and we want to be sure. No matter which model we use, and whether it has thinking, no thinking, or only a limited budget of thinking tokens, we must make sure it generates the answer first.&lt;/p&gt;

&lt;h1&gt;
  
  
  In our scenario, there are three ways an LLM can generate the answer
&lt;/h1&gt;

&lt;p&gt;1) It could happen in its own CoT. (Unreliable: no one reads the CoT anyway, and it could be small.)&lt;/p&gt;

&lt;p&gt;2) You just tell the LLM "find where I'm wrong" and answer. The LLM finds where you are wrong. Repeat until the LLM finds nothing wrong. (Very rigorous.)&lt;/p&gt;

&lt;p&gt;3) You tell the LLM to generate the correct answer first, OR copy-paste the correct answer in after you finish answering. Then tell the LLM to compare it with your own answer and find where you are wrong.&lt;/p&gt;

&lt;p&gt;Option 1 is all about luck.&lt;/p&gt;

&lt;p&gt;Option 2 takes too much time and disrupts your learning process, but eventually, after enough rounds, you get the LLM to say "correct". It is so rigorous that it finds exactly where you are wrong, word by word; if you slip on a technicality and don't use the exact word, you are wrong.&lt;/p&gt;

&lt;p&gt;Option 3 already has the answer; it just compares until the comparison is 100% complete. You can also tell it to compare only the intent: if the mental model matches, the answer is still solid even when the exact words are off.&lt;/p&gt;

&lt;p&gt;The third option is the winner for learning. I tested both options 2 and 3. Option 2 is very good as a skeptical assistant, but 3 is the go-to, because it only matches against the answer it generated or the answer you gave it (yesterday you learned something; you copy-paste that answer for tomorrow's LLM).&lt;/p&gt;

&lt;p&gt;So an LLM given the "silently answer" prompt is just generating what someone might say when they don't know the answer: extra bloat words to read anyway.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to implement option 3
&lt;/h1&gt;

&lt;p&gt;We need some rules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Either the LLM generates an answer at the start of its output but puts it in a block you agree not to read (similar to how most people don't read CoT), OR you copy-paste yesterday's answer in without reading it. Then tell the LLM to compare, but never to give you the answer until you reach 100% the same answer to your specific question.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The LLM now compares your previous answer, sees where you got it wrong and where you got it right, and finds the gap.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now it can act as a guide: it tells you where you are wrong but provides no answer. Preferably no hints either; only when you give up does it hint. (Or you just read what you were not supposed to read and find the answer.)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
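
&lt;p&gt;Here is a minimal sketch of that contract in Python. The helper name and all the wording inside it are my own illustration of how such a prompt could look, not a fixed API or my exact prompt:&lt;/p&gt;

```python
# Sketch of "option 3": the reference answer enters the context FIRST,
# so the model compares instead of silently guessing.
# build_compare_prompt and its wording are illustrative assumptions.

def build_compare_prompt(question, reference_answer, user_attempt):
    """Assemble a grading prompt where the correct answer is already
    in the context window; the model is asked only for gaps, never
    for the answer itself."""
    return "\n".join([
        "You are a guide, not an answer machine.",
        "A reference answer is provided below. The user will NOT read it;",
        "that is the contract. Never quote or paraphrase it.",
        "Compare the user's attempt against the reference answer.",
        "Report only WHERE the attempt is wrong or incomplete. No hints.",
        "Judge intent: if the mental model matches, accept different wording.",
        "",
        "QUESTION: " + question,
        "REFERENCE ANSWER (do not reveal): " + reference_answer,
        "USER ATTEMPT: " + user_attempt,
    ])

prompt = build_compare_prompt(
    "Why does an LLM need context to compare?",
    "It has no stored answers; comparison needs both items in context.",
    "Because the model remembers correct answers from training.",
)
print(prompt)
```

The same scaffold works for the copy-paste variant: yesterday's answer goes in as `reference_answer` without you reading it.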

&lt;p&gt;If the LLM has access to Google, its ability to compare against the correct answer improves a lot, and search will hopefully provide more context as well. But keep it simple, because LLM search is expensive.&lt;/p&gt;

&lt;p&gt;"Silently generate the answer" doesn't work for learning; there is no silent answer when dealing with an LLM. What you need is a contract between the user and the LLM: the user will not read the correct answer.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
    </item>
    <item>
      <title>Thank you Linux for the 1y and thank you all for contributing to Linux and using it.</title>
      <dc:creator>gitdexgit</dc:creator>
      <pubDate>Wed, 29 Apr 2026 19:21:38 +0000</pubDate>
      <link>https://forem.com/gitdexgit/thank-you-linux-for-the-1y-and-thank-you-all-for-contributing-to-linux-and-using-it-4c5a</link>
      <guid>https://forem.com/gitdexgit/thank-you-linux-for-the-1y-and-thank-you-all-for-contributing-to-linux-and-using-it-4c5a</guid>
      <description>&lt;p&gt;I only kept those trash youtube vids on youtube to remind me when I made the jump:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frietwcmf1dtv5gcf31z9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frietwcmf1dtv5gcf31z9.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, it's been one year of using Linux. I came from trash Windows with tons of Windhawk mods and tons of AutoHotkey scripts doing simple, basic stuff: switching desktops on Super+1/2/3/4, remapping the keyboard, hotstrings.&lt;/p&gt;

&lt;p&gt;I made the switch a year ago and tried LXDE even though it was old. It used around 600 MB while doing basically the same things I had to modify Windows to do; Windows ate 4 GB on boot, LXDE didn't. Of course I was a noob and didn't know how to customize and configure further. I switched to LXQt, spent five months bloating it, then two months debloating it. Turns out less is more. I switched to i3 in the end.&lt;/p&gt;

&lt;p&gt;I learned C, learned Python, learned assembly, learned basic tools, learned basic math, and basic computer science in general. And so much more: Linux on a phone, trying Kali Linux for fun and networking basics, NetHunter, getting ISOs, booting Arch Linux from a phone with no USB needed, dual-booting Windows 10 again and breaking the bootloader, screwing around with it, VMs, testing many OSes, etc.&lt;/p&gt;

&lt;p&gt;One year. Exactly one year. I'm crying, man; I can't believe all this was possible in a year. I was trash at learning; I only started to know how to learn about two weeks ago. I could have done more, I was lazy, but the time I spent on Linux still taught me a lot.&lt;/p&gt;

&lt;p&gt;Thank you, Linux, and everyone who contributed to or simply uses Linux, for this awesome project. It removed some limits on me, and I appreciate it a lot. School failed me. My OS failed me, my computer failed me, my family almost failed me. But at least it was a cool year.&lt;/p&gt;

&lt;p&gt;Today I haven't slept in 24 hours, and I've been listening to:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=jyvxDmi4flU&amp;amp;list=PL-73D3w9EKPKkinlaLPF02Jsg4sH3V7HT" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=jyvxDmi4flU&amp;amp;list=PL-73D3w9EKPKkinlaLPF02Jsg4sH3V7HT&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;on repeat while continuing my assembly journey. I'm so excited for this year.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>life</category>
      <category>hello</category>
      <category>world</category>
    </item>
    <item>
      <title>LLM Gambling: Constraints, Compilers, and Cavemen</title>
      <dc:creator>gitdexgit</dc:creator>
      <pubDate>Tue, 28 Apr 2026 20:19:33 +0000</pubDate>
      <link>https://forem.com/gitdexgit/-llm-gambling-constraints-compilers-and-cavemen-1c3k</link>
      <guid>https://forem.com/gitdexgit/-llm-gambling-constraints-compilers-and-cavemen-1c3k</guid>
      <description>&lt;p&gt;The best way to gamble with high chances in LLM is to add constraints and limits. The reason why LLM is good enough for simple ~ medium projects is because of the compiler:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLM output -&amp;gt; compiler screams -&amp;gt; copy paste error to LLM.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Now the LLM generates a guess at the fix (thank you, Stack Overflow, for your contribution), because it has seen so many errors and potential fixes. But an LLM sucks at assembly, really badly: there is no compiler screaming at it, AND it must be super precise. So for assembly it needs an emulator or some similar harness, something that runs the code and reports register updates as JSON, so it can make better guesses. But it's still just a guess.&lt;/p&gt;
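
&lt;p&gt;That scream-and-paste loop is easy to picture as code. Here is a toy simulation; &lt;code&gt;compile_check&lt;/code&gt; and &lt;code&gt;llm_guess_fix&lt;/code&gt; are stand-in stubs I made up, not a real compiler or model:&lt;/p&gt;

```python
# Toy simulation of: LLM output, compiler screams, paste error back.
# compile_check and llm_guess_fix are pretend stand-ins (assumptions).

def compile_check(source):
    """Pretend compiler: every statement line must end with ';'."""
    errors = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        if stripped and not stripped.endswith((";", "{", "}")):
            errors.append(f"line {lineno}: expected ';' near '{stripped}'")
    return errors

def llm_guess_fix(source, error):
    """Pretend LLM: pattern-match the error text and guess a fix."""
    lineno = int(error.split()[1].rstrip(":"))
    lines = source.splitlines()
    lines[lineno - 1] = lines[lineno - 1].rstrip() + ";"  # the "guess"
    return "\n".join(lines)

code = "int x = 1\nint y = 2;"
for _ in range(10):                      # bounded retry loop
    errs = compile_check(code)
    if not errs:                         # compiler stopped screaming
        break
    code = llm_guess_fix(code, errs[0])  # paste first error back

print(code)
```

The real loop is the same shape, just with a human doing the copy-pasting; the compiler is the only ground truth in it.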

&lt;p&gt;Also, if you force the LLM to write like a caveman (no a/an/the), that forces restraint, and restraint increases the accuracy of the answer. (Claude is good because of restraint, by the way, and because people chose to use it, so it also gets a feedback loop on whether it was correct, and they train new models on user interactions.)&lt;/p&gt;

&lt;p&gt;I go even further and add &lt;code&gt;&amp;lt;logic&amp;gt; &amp;lt;/logic&amp;gt;&lt;/code&gt; tags so that the LLM first, before giving me the answer, tells me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the goal?&lt;/li&gt;
&lt;li&gt;What is NOT the goal?&lt;/li&gt;
&lt;li&gt;What is the user intent?&lt;/li&gt;
&lt;li&gt;What are the unknowns?&lt;/li&gt;
&lt;li&gt;What are the variables?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;--&amp;gt; It gives me the map before the answer. Very good.&lt;/p&gt;

&lt;p&gt;And of course &lt;strong&gt;temperature 0&lt;/strong&gt;, so the most likely next token always wins: the go-to for coding, since determinism beats "creativity" in syntax. You tell it what to do; no need for bedtime stories, we've got a problem to fix, just get my idea roughly right.&lt;/p&gt;
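
&lt;p&gt;Putting the logic tags and temperature 0 together, here's roughly what the request looks like. The payload shape follows the common chat-completions convention; the model name and the exact prompt wording are placeholders of mine, not my literal setup:&lt;/p&gt;

```python
# Sketch: constraint-first request payload with temperature 0.
# Model name and prompt wording are placeholders (assumptions).
LOGIC_OPEN = chr(60) + "logic" + chr(62)     # opening logic tag
LOGIC_CLOSE = chr(60) + "/logic" + chr(62)   # closing logic tag

SYSTEM_PROMPT = (
    "Before answering, emit a " + LOGIC_OPEN + " block stating: "
    "the goal, what is NOT the goal, the user intent, the unknowns, "
    "and the variables. Close it with " + LOGIC_CLOSE + ", then answer. "
    "Write like a caveman: no articles, no filler."
)

def build_request(user_message):
    """Assemble a chat-style request dict; temperature 0 keeps sampling
    greedy, trading 'creativity' for determinism in syntax."""
    return {
        "model": "placeholder-model",    # assumption: any chat model
        "temperature": 0,                # determinism beats creativity
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

req = build_request("Fix the off-by-one in my loop.")
print(req["temperature"], req["messages"][0]["role"])
```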

&lt;p&gt;&lt;strong&gt;All this just to get something that is good enough. In the end, the human plus the world is the ultimate compiler and debugger.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>assembly</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
