<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bernát Kalló</title>
    <description>The latest articles on Forem by Bernát Kalló (@cie).</description>
    <link>https://forem.com/cie</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F517261%2Fe0a8753d-7530-49d6-8044-d00adc07aa91.png</url>
      <title>Forem: Bernát Kalló</title>
      <link>https://forem.com/cie</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cie"/>
    <language>en</language>
    <item>
      <title>Creating an AI policy, part II</title>
      <dc:creator>Bernát Kalló</dc:creator>
      <pubDate>Mon, 16 Mar 2026 17:38:06 +0000</pubDate>
      <link>https://forem.com/cie/creating-an-ai-policy-part-ii-2fef</link>
      <guid>https://forem.com/cie/creating-an-ai-policy-part-ii-2fef</guid>
<description>&lt;p&gt;A lot has changed since I wrote about my thoughts on &lt;a href="https://dev.to/cie/creating-an-ai-policy-1a1o"&gt;creating an AI policy&lt;/a&gt;. I now write lots of code with coding agents, and I'm developing many more AI integrations into apps. AI coding agents are quite standardized now, AI slop is prevalent, and hobbyists operate rogue AI agents with hardly any oversight.&lt;/p&gt;

&lt;p&gt;How should we define an AI policy in the early 2026 era?&lt;/p&gt;

&lt;p&gt;How about something we could summarize as:&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Nothing changes with AI.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What I mean by this is: Software is still software, and prompts + LLMs are also software (albeit nondeterministic).&lt;/p&gt;

&lt;p&gt;If we wanted 98% happy users before, we should want the same, even if we have AI features.&lt;/p&gt;

&lt;p&gt;If we had to ship software that works correctly 99.9% of the time, we should aim for the same now. Even if a quarter of our code is prompts. And even if half of our code is written by a coding agent.&lt;/p&gt;

&lt;p&gt;Having a nondeterministic automated system write parts of the code is an additional hazard compared to the old days. So we need to take more care with the steps that ensure the quality and correctness of the code: planning, architectural design, refactoring, testing, code review, static analysis, etc., to get back to that 99.9% (or whatever figure applies in our field).&lt;/p&gt;

&lt;p&gt;If we believe in unit tests, we should have LLM evals or something similar for the stochastic parts of our codebase.&lt;/p&gt;
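&lt;p&gt;A minimal sketch of what such an eval could look like (&lt;code&gt;call_model&lt;/code&gt; is a hypothetical stand-in for a real LLM client): because the output is nondeterministic, we run the same prompt several times and require a minimum pass rate, instead of asserting exact equality the way a classic unit test would.&lt;/p&gt;

```python
# Minimal "eval as a unit test" sketch. call_model is a hypothetical
# stand-in for a real LLM client call; replace it with your own.
def call_model(prompt):
    return "Paris is the capital of France."

def eval_capital_question(runs=5, threshold=0.8):
    # Run the prompt several times and score with a simple substring
    # check, since a nondeterministic model may occasionally miss.
    passes = 0
    for _ in range(runs):
        answer = call_model("What is the capital of France?")
        if "Paris" in answer:
            passes += 1
    return passes / runs >= threshold

assert eval_capital_question()
print("eval passed")
```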

&lt;p&gt;Nothing changed: software still must be made for humans. And made by humans, even if with AI tools. And humans must still take responsibility for their software, and for the behavior of their software.&lt;/p&gt;

&lt;p&gt;This piece of wisdom from an ancient law book might be relevant to this topic:&lt;br&gt;
&lt;em&gt;"When you build a new house, be sure to install a railing around your roof, so that you won't be held guilty if someone dies falling from it."&lt;/em&gt; &lt;small&gt;(&lt;a href="https://www.freebibleversion.org/DeuteronomyFBV.pdf" rel="noopener noreferrer"&gt;Deuteronomy, FBV translation&lt;/a&gt;)&lt;/small&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>discuss</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Different color schemes for different projects in VSCode</title>
      <dc:creator>Bernát Kalló</dc:creator>
      <pubDate>Thu, 13 Feb 2025 14:43:21 +0000</pubDate>
      <link>https://forem.com/cie/different-color-schemes-for-different-projects-in-vscode-380h</link>
      <guid>https://forem.com/cie/different-color-schemes-for-different-projects-in-vscode-380h</guid>
      <description>&lt;p&gt;Helps in context switching, at least for me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74mx0yz2l5omcogrwdqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74mx0yz2l5omcogrwdqv.png" alt="Image description" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two ways to set this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create a Profile for each project, and set &lt;code&gt;workbench.colorTheme&lt;/code&gt; in the profile settings&lt;/li&gt;
&lt;li&gt;OR create a Workspace for each project, and set &lt;code&gt;workbench.colorTheme&lt;/code&gt; in the workspace settings&lt;/li&gt;
&lt;/ul&gt;
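
&lt;p&gt;For example, with the workspace approach, a &lt;code&gt;.vscode/settings.json&lt;/code&gt; in the project root could set the theme (assuming a theme named "Solarized Light" is installed; any installed theme name works):&lt;/p&gt;

```json
{
  "workbench.colorTheme": "Solarized Light"
}
```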

</description>
    </item>
    <item>
      <title>If other web frameworks were named like Ruby on Rails...</title>
      <dc:creator>Bernát Kalló</dc:creator>
      <pubDate>Mon, 27 Jan 2025 05:51:23 +0000</pubDate>
      <link>https://forem.com/cie/if-other-web-frameworks-were-named-like-ruby-on-rails-2i5g</link>
      <guid>https://forem.com/cie/if-other-web-frameworks-were-named-like-ruby-on-rails-2i5g</guid>
      <description>&lt;p&gt;Laravel – PHP with Pails.&lt;br&gt;
NestJS – TypeScript with Tails.&lt;br&gt;
J2EE – Java in Jails.&lt;br&gt;
Netcat – Bash on Bails.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating an AI policy</title>
      <dc:creator>Bernát Kalló</dc:creator>
      <pubDate>Tue, 07 Nov 2023 20:11:31 +0000</pubDate>
      <link>https://forem.com/cie/creating-an-ai-policy-1a1o</link>
      <guid>https://forem.com/cie/creating-an-ai-policy-1a1o</guid>
<description>&lt;p&gt;The gold rush has irreversibly begun: software companies are shoveling AI support into their apps, while experts in the field cannot scream loud enough about the risks. While there are countless new opportunities, whose value is difficult to comprehend, there are certainly also many dangers, which are perhaps even harder to recognize.&lt;/p&gt;

&lt;p&gt;We as software developers and product managers have a HUGE say in what will and will not happen in cyberspace. We have &lt;a href="https://www.youtube.com/watch?v=Tng6Fox8EfI"&gt;way more power&lt;/a&gt; than we often realize. And because of that, we need to be very careful. And responsible. And we need to have principles.&lt;/p&gt;

&lt;p&gt;Principles, by definition, should not depend on the actual opportunity. We should be prepared not to blindly follow the flow when a tempting opportunity comes if it compromises our principles. So I believe it's good to have an AI policy ready, preferably published on our website, so when I'm asked to create this or that AI integration, there is something written that I can refer to. A &lt;a href="https://medium.com/betterism/tie-yourself-to-the-metaphorical-mast-7140ad69947b"&gt;Ulysses pact&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So I've started to sketch an AI policy for my software development team, based on the bits that I've already understood about &lt;a href="https://www.youtube.com/watch?v=fDHvUviV8nk"&gt;the risks&lt;/a&gt; of &lt;a href="https://www.youtube.com/watch?v=xoVJKj8lcNQ"&gt;generative AI.&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Here it is:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Our basic principle is that humans should rule over the computer and not the other way around. Therefore we do not create and do not install a system where&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;an AI makes decisions about humans without a human taking responsibility for the decision&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;an AI can influence the outside world, or take actions on the internet, in an uncontrolled way&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;We think AI assistance in human editing work is useful; however, we see the risks when the human editor only superficially verifies the AI-generated content. Therefore, in the applications we develop, every longer (&amp;gt;~10 words) piece of AI-generated content included in human-edited content will be marked with a subtle mark (color, comment). The human editor can remove this mark, thus explicitly taking responsibility for the content.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Similarly, when I code with an AI assistant, I strive not to accept longer (multi-line) AI-generated code completions at once, only line by line or word by word.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is only my first attempt in this new genre, and I plan to refine it further as new trends emerge and my understanding of the topic broadens.&lt;/p&gt;

&lt;p&gt;I encourage you to think about this difficult and sensitive topic. And please share your thoughts!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>ethics</category>
      <category>policy</category>
    </item>
  </channel>
</rss>
