<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: jertyuiop</title>
    <description>The latest articles on Forem by jertyuiop (@jertyuiop).</description>
    <link>https://forem.com/jertyuiop</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F875791%2F439813d3-2b2e-46df-bd57-5c80643f3c30.png</url>
      <title>Forem: jertyuiop</title>
      <link>https://forem.com/jertyuiop</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jertyuiop"/>
    <language>en</language>
    <item>
      <title>Should we feel guilty for using AI?</title>
      <dc:creator>jertyuiop</dc:creator>
      <pubDate>Wed, 22 Apr 2026 16:48:49 +0000</pubDate>
      <link>https://forem.com/jertyuiop/should-we-feel-guilty-for-using-ai-2do8</link>
      <guid>https://forem.com/jertyuiop/should-we-feel-guilty-for-using-ai-2do8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This blog post is adapted from a blog I wrote at work - it focusses on AI usage in a work setting, particularly from my perspective as a software engineer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;How do you feel about ‘AI’ (by which I mean LLMs[1] in this blog post)? Are you using it daily, both for work and for everyday personal tasks? Or are you wary of its negative impacts and have resolved not to touch it with a barge pole?&lt;/p&gt;

&lt;p&gt;I'm somewhere in the middle: I was initially hesitant to use AI, but now I'm using it most days for work - not least because my team recently attended a Snowflake conference centred all around AI (and I got my own teddy of the Snowflake polar bear!).&lt;/p&gt;

&lt;p&gt;To clarify, I can't (and don't want to) dictate whether you should feel guilty - I just wanted an enticing title (which I shamelessly stole from a tech YouTuber I follow, AlbertaTech[2]). But I would like to explore some problematic areas around AI, and help us consider how we use it - particularly focussing on how we’d use AI at work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqrqwdkh6sv6keq9470z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqrqwdkh6sv6keq9470z.jpg" alt="Photo of a desk with a laptop and Snowflake polar bear teddy at a Snowflake AI conference" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Me attending a hands-on lab at the Snowflake BUILD London conference in February, going into Snowflake’s latest AI features
&lt;/h6&gt;

&lt;h1&gt;
  
  
  Environmental costs
&lt;/h1&gt;

&lt;p&gt;You’ve probably heard that there's an environmental cost to AI[3]. This doesn’t mean AI is therefore evil – the truth is all technology has some environmental cost, and that can't really be avoided. It's also quite difficult to know just how much energy AI uses, but the consensus is that it uses a lot more than a Google search (without the AI overview). MIT Technology Review has done some research on this[4] and come up with the following numbers for some of Meta’s models (note that these may not be accurate for the models you use):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Text generation&lt;/strong&gt;, asking questions and receiving text-based answers - the environmental cost of this heavily depends on model size. Larger text generation models take up a lot of energy, regardless of what the request is.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The smallest of Meta's Llama models, Llama 3.1 8B, has 8 billion parameters (i.e. the adjustable "knobs" used to fine-tune a model) and uses only a small amount of energy per request: ~114 Joules, enough to run a microwave for one tenth of a second&lt;/li&gt;
&lt;li&gt;The largest Llama model has 50 times more parameters and uses 67x more energy per response: ~6,706 Joules, enough to run a microwave for 8 seconds. This is more than generally required for image generation.&lt;/li&gt;
&lt;li&gt;The prompt is a huge factor too - a simple request to tell a few jokes can use 9x less energy than more complicated prompts, like writing creative stories or recipe ideas.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Image generation&lt;/strong&gt; - the energy usage depends on several factors like the image resolution, but has a higher baseline than text generation: ~4,402 Joules to generate a somewhat high quality image using Stable Diffusion, enough to run a microwave for 5.5 seconds.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I want to note that with AI image generation, it often takes multiple attempts to get the AI to create the image you want, so you’re usually not creating just one image but maybe 10 or so before you get one you can use.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Honestly at one stage AI art felt exciting and entertaining, but the environmental cost has put me off it now &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note: here I’m talking about making images for fun/to put in a presentation, as I’m focussing on work-related AI usage in this blog post. There are a lot of other issues with AI art I won’t go into here, but my general feeling is that I don’t think you should use AI art if you’re selling it, and that AI should be a tool to help people and should not replace human creativity. I won’t open the can of worms any further as that could be a whole separate blog post!&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Video generation&lt;/strong&gt;, while something we usually won't do for work, uses &lt;strong&gt;much&lt;/strong&gt; more energy - creating a 5-second video at 16 frames per second uses more than 700 times the energy required to generate a high-quality image: ~3.4 million Joules, enough to run a microwave for over an hour.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;While the argument could be made that this is less energy than having a film crew produce a video, I’d say this is counterbalanced by the fact that it’s now much easier to make videos, so the volume of people making AI videos is a lot higher.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
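&lt;p&gt;To put those figures in perspective, here’s a quick sketch that converts the Joule figures above into microwave running time. I’m assuming a ~1,000 W microwave, so the results land in the same ballpark as the article’s comparisons rather than matching them exactly:&lt;/p&gt;

```python
# Convert the energy cost of an AI request into "microwave seconds".
# Assumes a typical ~1,000 W microwave (1 Watt = 1 Joule per second).
MICROWAVE_WATTS = 1000

def microwave_seconds(joules: float) -> float:
    """How long a microwave could run on the same amount of energy."""
    return joules / MICROWAVE_WATTS

# Figures from the MIT Technology Review research cited above
requests = {
    "Llama 3.1 8B text response": 114,
    "Largest Llama text response": 6_706,
    "Stable Diffusion image": 4_402,
    "5-second AI video": 3_400_000,
}

for name, joules in requests.items():
    print(f"{name}: ~{microwave_seconds(joules):.1f} seconds of microwave time")
```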

&lt;h2&gt;
  
  
  This isn’t a guilt trip
&lt;/h2&gt;

&lt;p&gt;I’d like to reiterate that I don't want to make you feel guilty. I'm not going to call you a mega-polluter for asking AI to help you phrase an email or generate some boilerplate code for you. However, if you find yourself generating your 10th image today of a cat skateboarding on the moon, then perhaps I'd encourage you to take a break.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“But if I stopped using AI would that really make any difference? I’m just one person after all.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;First of all, I’m not suggesting we all stop using AI - while some people may choose to do this, it’s not necessarily practical for everyone, especially if your company develops products that use AI. But I would encourage each of us to think about whether we’re overusing AI, and if it’s the best tool for any given task - this can apply to personal use and also in the products we develop. I know this is a big issue that you or I can’t fix on our own, but the little actions do add up and make more of a difference than we realise, especially if we all take little steps in the right direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overproduction
&lt;/h2&gt;

&lt;p&gt;It’s also important to state that, as with other environmental issues (e.g. the meat industry[5]), the biggest problem is not the product itself, but in large corporations wanting to build and expand at such a high rate that it puts a strain on the planet.&lt;/p&gt;

&lt;p&gt;In this case, it’s about building more and more datacentres - you may be aware this has drastically increased the demand for certain hardware[6], which for the average consumer is now much more expensive to buy, if it's even available at all. And as well as land and energy use, this also requires vast amounts of water to cool these datacentres, making things harder for those living in water-deprived areas.&lt;/p&gt;

&lt;p&gt;This level of consumption in the name of “progress” is not sustainable - I don’t just mean environmentally, but as a business model too: you can’t keep scaling up indefinitely whenever you hit a slight hurdle, because at some point resources will run out. Interestingly, DeepSeek has shown that they can improve performance not by pumping out more datacentres, but by making more efficient use of the resources they have[7], and I truly hope other companies follow suit. Note, though, that even as models become more efficient, AI usage is increasing rapidly, which means the number of datacentres continues to grow.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;“Ok so our individual use doesn’t matter after all then, good to know!”&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;While it’s true that the big companies are causing massive problems, this doesn’t mean our individual actions are irrelevant. If we as individuals, as software engineering departments, and as companies find ways in which we can use AI more efficiently, and avoid unnecessary queries, this can contribute towards slowing the growth of demand.&lt;/p&gt;

&lt;p&gt;In fact, many of us have the power to affect things on more than just an individual scale - our departments and companies are likely making decisions about AI usage. It’s in companies' best interest to use AI tools as efficiently as possible: optimising the token usage of our AI models reduces monetary cost while also reducing the environmental impact, a win-win. This could mean selecting the most appropriate model for a given query rather than always using the most expensive one, or finding novel ways to reduce the output of chatbots by using a tool like &lt;a href="https://github.com/JuliusBrussee/caveman/tree/main" rel="noopener noreferrer"&gt;Caveman&lt;/a&gt;. If we can find the most efficient ways to use our AI tools, this will likely make a big difference in the long run, especially if we can roll this out to our whole department/company, including in the products we’re developing.&lt;/p&gt;
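&lt;p&gt;As a concrete illustration of selecting the most appropriate model for a given query, here’s a minimal routing sketch. The model names, costs and heuristic here are all hypothetical placeholders - real routing would use something smarter than keyword matching:&lt;/p&gt;

```python
# A minimal sketch of "model routing": send simple queries to a small,
# cheap model and reserve the large model for genuinely complex ones.
# Model names and per-token costs are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # relative cost units, made up for the sketch

SMALL = Model("small-8b", 0.1)
LARGE = Model("large-400b", 2.0)

def route(prompt: str) -> Model:
    """Crude heuristic: short, simple prompts go to the small model."""
    complex_markers = ("step by step", "analyse", "refactor", "design")
    if len(prompt) > 500 or any(m in prompt.lower() for m in complex_markers):
        return LARGE
    return SMALL

print(route("What's the capital of France?").name)          # small-8b
print(route("Refactor this module to use async I/O").name)  # large-400b
```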

&lt;p&gt;As an example, someone I know is working on AI agents for internal staff use, and they’re making them available to a small subset of users who they will train on how to use them, to ensure costs (both financial and environmental) don’t spiral out of control. &lt;/p&gt;

&lt;p&gt;Do you have any ideas for how we can optimise AI usage? Leave a comment if so, I’d love to hear other people’s ideas!&lt;/p&gt;

&lt;h2&gt;
  
  
  The world’s greenest AI[8] and search engine
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.ecosia.org/" rel="noopener noreferrer"&gt;Ecosia&lt;/a&gt; is a search engine created by a not-for-profit company that is doing a lot to help the planet through green tech, they use their profits to plant trees and invest in renewable energy[9] (Ecosia has a tree counter, to count how many trees they’ve planted: this only includes trees that have survived for at least 2 years, so it’s not just pretending to be sustainable, they’re in it for the long term).&lt;/p&gt;

&lt;p&gt;I’d recommend using it as your default search engine, not only for its positive impact but also because it doesn’t force an AI overview on every search (you can turn AI overviews off in the settings). I’ve been using it for many years and it’s served me well.&lt;/p&gt;

&lt;p&gt;Ecosia has their own AI chatbot that uses renewable energy: &lt;a href="https://www.ecosia.org/ai-search" rel="noopener noreferrer"&gt;Ecosia AI&lt;/a&gt;. While this may not be the AI you use to help with coding, I use it for out-of-work and non-technical queries. I’d encourage you to start using it and see how you find it - this is an actionable way to use AI more responsibly, at least from the environmental point of view.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some in Ecosia’s community believe Ecosia shouldn’t have made an AI search in the first place because, even though Ecosia generates more (clean) energy than it uses, there are still environmental impacts to using it. I do think it's the best environmental choice, but it should still be used mindfully.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(If you’re truly attached to using Google, you can add ‘-AI’ to the end of your search to disable the AI overview, if you know you won’t need it)&lt;/p&gt;

&lt;h1&gt;
  
  
  Cognitive impacts - what effect does using AI have on our ways of thinking?
&lt;/h1&gt;

&lt;p&gt;Next I’d like to explore how using AI affects our thinking, starting with our productivity, particularly around writing code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does AI actually make us more productive?
&lt;/h2&gt;

&lt;p&gt;Personally, I've found AI really helpful when I'm working with a language I’m not familiar with - I’ve asked it to explain what some code is doing and it’s helped me learn. Whereas previously I was browsing StackOverflow and trawling through different sites for answers, AI has helped speed up that process a lot. I’ve also used Snowflake’s new in-built AI, Cortex Code, which really helps me figure out how to use Snowflake-specific functionality.&lt;/p&gt;

&lt;p&gt;But I have also gone down rabbit holes with AI, or wasted time going around in circles trying to get it to fix a problem for me while making no progress - I imagine many others can empathise. So on a personal level, it’s difficult to say what net effect AI has had on my productivity - please leave a comment if your experience is very different to mine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos2azxjgt9e62acz7tza.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos2azxjgt9e62acz7tza.jpg" alt="Meme featuring a calendar about going back and forth between asking Claude to fix bugs then having to fix them yourself" width="800" height="1177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Post from &lt;a href="https://www.instagram.com/p/DVFcea2kqnn/?igsh=ZTVveDdiY3cydWMz" rel="noopener noreferrer"&gt;dataengineeringtamil on Instagram&lt;/a&gt; - do you find this relatable?
&lt;/h6&gt;

&lt;h3&gt;
  
  
  Surely AI makes us more productive, isn’t it going to replace us soon?
&lt;/h3&gt;

&lt;p&gt;While it’s true that AI can create code for us quickly, I really don’t think AI can replace developers. If we measure productivity purely by the lines of code produced then that tells one story, but our job is so much more than writing code.&lt;/p&gt;

&lt;p&gt;“The Serious CTO” pushes back against the idea that AI will replace developers, saying that it makes developers feel faster while they actually deliver changes more slowly, and that relying on AI without understanding systems creates a whole host of problems, including with security and maintainability[10].&lt;/p&gt;

&lt;p&gt;He points out a few concerning trends from a CodeRabbit study comparing AI-generated PRs with human-written ones[11]:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AI-generated code contained 1.7x more issues than human-written code, with a 1.4x increase in “Critical” defects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“Logic and Correctness” issues (e.g. business logic errors) were 75% more common in AI-generated code&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These errors are difficult for static analysis tools to detect, as they require an understanding of the &lt;em&gt;intent&lt;/em&gt; of the application&lt;/li&gt;
&lt;li&gt;These are also the most expensive to fix and most likely to cause downstream incidents&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Readability issues were over 3x more frequent in AI-generated PRs - the code looks consistent but violates patterns around naming and formatting, and includes unused/redundant code&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This increases technical debt and makes the code less maintainable&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Security vulnerabilities were 2x higher in AI-generated PRs, the most prominent issues being improper password handling, insecure direct object references and XSS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These were not &lt;em&gt;unique&lt;/em&gt; to AI-generated PRs, but were more frequent in them.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Performance was less efficient in AI-generated code - excessive I/O operations were ~8x more common in AI-authored PRs&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;CodeRabbit outlines a number of ways to mitigate these issues, like using policy-as-code to enforce style, adding safety guardrails and adopting AI-aware PR checklists. I’m not saying AI-generated code is all bad - sometimes it can be very helpful - but I hope the points above make us think twice before copying and pasting code from an AI chatbot (and I’ll be honest, I’ve done that before).&lt;/p&gt;
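&lt;p&gt;To make "AI-aware PR checklists" slightly more concrete, here’s a minimal sketch of what an automated check might flag. The fields and thresholds are illustrative assumptions, not a real CI tool or CodeRabbit’s actual checks:&lt;/p&gt;

```python
# An illustrative "AI-aware" PR check, in the spirit of the mitigations
# mentioned above. Field names and thresholds are made up for the sketch.
def review_flags(pr: dict) -> list:
    """Return warnings that should trigger closer human review."""
    flags = []
    if pr.get("ai_generated") and not pr.get("tests_changed"):
        flags.append("AI-generated change with no accompanying tests")
    if pr.get("lines_changed", 0) > 400:
        flags.append("Large diff: review logic and intent, not just style")
    if any(f.endswith((".env", "secrets.yaml")) for f in pr.get("files", [])):
        flags.append("Touches sensitive config: check for leaked secrets")
    return flags

pr = {"ai_generated": True, "tests_changed": False,
      "lines_changed": 120, "files": ["app.py"]}
print(review_flags(pr))  # ['AI-generated change with no accompanying tests']
```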

&lt;p&gt;So to answer the question in the heading… no I don’t believe AI can ever replace software engineers, despite companies claiming so - as one person put it “Software engineering has been within 6 months of being dead continually since early 2023”[12]. AI is a tool that can help us, but it’s our job to use it in a responsible and effective way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does AI change the way we think?
&lt;/h2&gt;

&lt;p&gt;The previous part focussed on coding, but many of us use AI to help with other parts of our job too, like phrasing emails, writing documentation or planning tasks. This section addresses those areas too - in particular, what effect does using AI have on how we think?&lt;/p&gt;

&lt;h3&gt;
  
  
  AI influences our vocabulary
&lt;/h3&gt;

&lt;p&gt;It appears that using AI affects the way we speak[13], as words more commonly used by ChatGPT have been found to become more common in everyday language[14]. The question is - if AI influences the way we talk, does it affect the way we think as well? In my opinion, yes - anything we consume or immerse ourselves in affects us to some degree.&lt;/p&gt;

&lt;p&gt;This isn't new: whatever we consume on social media, TV or newspapers affects our thinking, and AI chatbots are no exception. The trouble is, none of these tools are neutral - whether it’s people wanting to sell you something, writers wanting to convince us of their world view, an algorithm programmed to make us addicted to scrolling, or implicit biases in the data an AI model has been trained on.&lt;/p&gt;

&lt;p&gt;This may seem a bit overwhelming, and there’s not really an easy way to fix or address this, but being aware is the first step. When we consume any content, we can try to examine it as an active consumer rather than passively taking it in. This might feel draining with the amount of information available to us, and maybe sometimes the best course of action is to step back and take a break from whatever we’re consuming. But I’m a firm believer that taking small steps in the right direction can make a big difference in the long run.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI can make us lose certain skills
&lt;/h3&gt;

&lt;p&gt;Using AI can make us less skilled at tasks needed for our jobs - for example, doctors experienced an erosion of their ability to spot cancer when they began using AI tools[15]. While this sounds scary, and it is, I don’t think it's enough to conclude “and therefore AI is evil”. I say this because once people started learning to read and write, our memories got worse: we no longer had to memorise things as much, so we lost the ability to retain information to the degree we did before.&lt;/p&gt;

&lt;p&gt;I hope it's not controversial to say that it's a good thing most people can read and write now. But there was a trade-off, and in the same way modern technology can cost us skills we once had. My dad often tells me my generation shouldn’t rely on Google Maps, but should look out for landmarks and be observant about our surroundings so we can trace back where we’ve been.&lt;/p&gt;

&lt;h3&gt;
  
  
  So… what do we do?
&lt;/h3&gt;

&lt;p&gt;The previous two points come down to this: how are we using AI? Are we blindly trusting its output without questioning it? Honestly, I have done that and still do sometimes - it's draining to constantly have to question things. And yet thinking critically is so important; as The Serious CTO says, "You can't outsource your brain".&lt;/p&gt;

&lt;p&gt;Using a tool is fine, but it starts to become a concern when you become dependent on it, and can no longer function if you don’t have the tool at your disposal.&lt;/p&gt;

&lt;p&gt;There’s always a balance, and we’re not always going to get it right. I suppose we can ask ourselves: are we using AI as an alternative to thinking (which I have occasionally done)? Or are we using AI to help with an issue, then thinking the problem through to come up with a solution ourselves, treating AI as an aid rather than a silver bullet for our problems?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1jkxro5k3fefb0xeea3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1jkxro5k3fefb0xeea3.jpg" alt="Quote: All the tools and engines on earth are only extensions of man's limbs and senses - Ralph Waldo Emerson" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  A slide from one of the presentations at the Snowflake BUILD conference, which I’m pretty sure is relevant to the points I’m trying to make
&lt;/h6&gt;

&lt;h1&gt;
  
  
  Human cost
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Content Warning: this section describes the situation of low-paid workers in poor conditions. I don’t go into great detail, but if any of this is difficult for you to read then feel free to skip ahead, and please look after your own wellbeing&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The second inspiration for this blog post (alongside the YouTube video whose title I… borrowed) was an article that my mentor wrote in December - he's been working in software for 27 years, and he refuses to use AI[16]. This is partly due to topics I’ve mentioned, but there is one issue he raises that I was completely unaware of before - the exploitation of workers used to fuel AI.&lt;/p&gt;

&lt;p&gt;I have now read up about it, and honestly it's pretty awful. I won't go into too much detail, but essentially AI models only work when provided with vast amounts of human-created data. Where do they get this data from though? AI companies, many with their headquarters in Silicon Valley, outsource this work to countries like India, Kenya, the Philippines, and Venezuela[17].&lt;/p&gt;

&lt;p&gt;There are millions of people working in poor conditions to label images and train AI models for very low pay[18]. Many of these workers have degrees, but due to economic conditions in their countries they can’t find other jobs. The contracts they’re given are often short-term and can pay $1-$2 an hour; some workers are paid per task completed, which can vary a lot, and in some cases workers are not paid at all for the work they’ve done. When workers have tried to unionise and complain, the company stopped operating in that country and moved to a neighbouring one[19].&lt;/p&gt;

&lt;p&gt;The worst part is the workers who have to view disturbing content for hours every day, training AI to recognise content that would violate policies, like violent or sexual material, yet are given inadequate or no mental health support despite the work deeply affecting them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can be done?
&lt;/h2&gt;

&lt;p&gt;Honestly, I don’t have the answers. I know this is a massive issue and it’s difficult to digest, but I felt it must at least be talked about, awareness is the first step. But I know I can’t fix the world in a blog post, the unfortunate fact is that systems are currently set up to allow this sort of behaviour, and you or I can’t fix that individually.&lt;/p&gt;

&lt;p&gt;However, I still believe that our attitude towards AI is important here, after all if companies want to keep expanding faster than is sustainable, they’ll want to take shortcuts like these. I do believe that people power can work, and I'd love to hear if anyone has suggestions about putting pressure on companies to say this is not ok. But aside from that, this is yet another reason to use AI more efficiently and wisely, and not contribute to the hype of “more is always better”. Plus, knowing this is the foundation of most LLMs certainly dulls the shine of the fancy, futuristic-sounding AI for me.&lt;/p&gt;

&lt;h1&gt;
  
  
  Can AI be used for good?
&lt;/h1&gt;

&lt;p&gt;I know I’ve talked about a lot of negatives, and you’ve probably gathered I don’t think AI is the magical solution to all our problems. But I don’t think AI is evil in and of itself either. AI is a tool, and it’s up to people whether we use it to help or to harm, both in what we do and how we use it. To take a very basic analogy, let’s compare this to knives - all of us probably have and use knives for cooking, but they can be harmful if not used safely or if used maliciously. There are rules about where and when you can use or carry knives, and some types of knives are banned for the public.&lt;/p&gt;

&lt;p&gt;With AI, there should be similar regulations but unfortunately they’re not at the level they need to be yet. An example that comes to mind is a recent story about an AI agent that submitted a PR, and when the PR was rejected the agent posted a blog post issuing a personal attack on the person who rejected it[20]. While this is entertaining at a surface level, the deeper implications are quite concerning, as AI agents could be used for targeted bullying on large scales.&lt;/p&gt;

&lt;p&gt;Sorry, this is meant to be the uplifting part of the blog post, but I wanted to make the point that we need to have safeguards in place to guarantee AI is not used in harmful ways. In the absence of those, it’s up to us to use AI in the most responsible way we can.&lt;/p&gt;

&lt;p&gt;I know I've painted a bleak picture, but I'd like to offer the counterbalance and answer the question: can AI be used for good?&lt;/p&gt;

&lt;h2&gt;
  
  
  Yes
&lt;/h2&gt;

&lt;p&gt;In one sense, helping us be more productive is good, but I’d like to focus on a couple of examples where AI/LLMs are being used for good causes in a wider sense.&lt;/p&gt;

&lt;p&gt;UNESCO has been using AI chatbots to provide learning resources to children in disadvantaged areas of Eastern and Southern Africa, along with an LLM-powered assistant that helps teachers plan lessons[21]. From what I can tell, this is being used in a responsible way: teachers can build on the foundations the chatbots offer them, and optionally share their lesson plans, allowing other teachers to rate how useful they find them.&lt;/p&gt;

&lt;p&gt;Another example is the Human Rights Data Analysis Group, who are using AI for many social justice issues, for example LLMs helped them analyse vast amounts of data around police misconduct in California in a way that just wouldn’t have been possible before[22].&lt;/p&gt;

&lt;p&gt;One last example I quite enjoyed learning about is a slightly more morally grey area, but a hacktivist called Martha Root and a couple of journalists took down a network of white supremacy sites, using AI to help with their efforts (Martha presents this at a conference in Berlin dressed as a Pink Power Ranger, it’s quite an entertaining watch)[23].&lt;/p&gt;

&lt;p&gt;I’m sure there are other examples, and if you know of any then please leave a comment. But I hope even the small examples I’ve given here are enough to show that, while we shouldn’t ignore the negative elements of AI, we can still celebrate the good it achieves.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;I hope I haven’t overwhelmed or stressed you out with all that I’ve written about here… but equally if this blog post did make you slightly uncomfortable, perhaps that’s not a bad thing. I want us to question our behaviours and assumptions, and be an active user of AI rather than passively consuming and accepting its responses.&lt;/p&gt;

&lt;p&gt;As I have said before, AI is a tool that can be used for good, or for bad, both in what we use it for but also how we use it, and it’s up to us to be responsible users of this tool.&lt;/p&gt;

&lt;p&gt;While I chose the title of this blog post for its shock value, a better question would be “should we be mindful about using AI?” - and I hope we can all agree that yes, we should be. Using AI has a cost, so it’s worth pursuing ways to optimise our usage and ensure we’re using it as efficiently as we can. After all, we’re engineers - this sounds like a great use of our skills.&lt;/p&gt;

&lt;p&gt;We each have the power to start implementing positive changes in how we use AI, and if we do I truly believe this will have a ripple effect that creates lasting, meaningful impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank you for reading
&lt;/h2&gt;

&lt;p&gt;I know it’s been a rocky journey, but we’ve finally made it to the end, hopefully intact. I imagine this is quite a controversial blog post… I'd love to hear what others have to say in response, particularly those who have found AI really helpful for productivity, or who use AI a lot (either at work or in general life). And if you disagree with anything I've said, please let me know by commenting below (in a way that encourages positive discussion). If you agree with me, I’d like to hear that too, of course!&lt;/p&gt;

&lt;p&gt;Thank you so much for reading to the end of this blog post :)&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;h5&gt;
  
  
  [1] Wikipedia page on LLMs
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Large_language_model" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Large_language_model&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [2] The initial inspiration for this blog post (at least the title), from a tech Youtuber I’ve followed for a couple of years
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=B9P5fSrT104" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=B9P5fSrT104&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [3] Article outlining environmental costs of AI
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.teenvogue.com/story/chatgpt-is-everywhere-environmental-costs-oped" rel="noopener noreferrer"&gt;https://www.teenvogue.com/story/chatgpt-is-everywhere-environmental-costs-oped&lt;/a&gt;&lt;br&gt;
It may surprise you coming from Teen Vogue, but this is a pretty good article&lt;/p&gt;

&lt;h5&gt;
  
  
  [4] Greenpeace article on the environmental impact of the meat industry
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.greenpeace.org.uk/news/why-meat-is-bad-for-the-environment/" rel="noopener noreferrer"&gt;https://www.greenpeace.org.uk/news/why-meat-is-bad-for-the-environment/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [5] MIT Technology Review article on the energy use of AI
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/" rel="noopener noreferrer"&gt;https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/&lt;/a&gt;&lt;br&gt;
I'll be honest, I've only read small parts of it, but it's quite detailed, so have a look if you're interested&lt;/p&gt;

&lt;h5&gt;
  
  
  [6] Western Digital has sold out of HDDs for 2026
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.techspot.com/news/111346-western-digital-hdd-production-capacity-2026-already-sold.html" rel="noopener noreferrer"&gt;https://www.techspot.com/news/111346-western-digital-hdd-production-capacity-2026-already-sold.html&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [7] How DeepSeek achieved GPT-4-level results with fewer resources
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://medium.com/@aryan.patel.2504/how-deepseek-achieved-gpt-4-level-results-with-fewer-resources-a-software-engineering-deep-dive-2f6ae3fc941d" rel="noopener noreferrer"&gt;https://medium.com/@aryan.patel.2504/how-deepseek-achieved-gpt-4-level-results-with-fewer-resources-a-software-engineering-deep-dive-2f6ae3fc941d&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [8] A blog describing why Ecosia AI is the best environmental choice
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://blog.ecosia.org/ecosia-ai/" rel="noopener noreferrer"&gt;https://blog.ecosia.org/ecosia-ai/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [9] Ecosia’s financial reports breaking down how they spend their profits
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://blog.ecosia.org/ecosia-financial-reports-tree-planting-receipts/" rel="noopener noreferrer"&gt;https://blog.ecosia.org/ecosia-financial-reports-tree-planting-receipts/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [10] The Serious CTO
&lt;/h5&gt;

&lt;p&gt;Short (3-minute) video from The Serious CTO on data around AI helping productivity. It’s scathing in parts, but mainly encouraging us not to blindly trust AI, but work with it: &lt;a href="https://www.youtube.com/watch?v=ukmtqi8IDpw" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=ukmtqi8IDpw&lt;/a&gt;&lt;br&gt;
An article from the same person that goes more in-depth to the topics he mentions in the video: &lt;a href="https://newsletter.theseriouscto.com/p/ai-coding-assistants" rel="noopener noreferrer"&gt;https://newsletter.theseriouscto.com/p/ai-coding-assistants&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [11] Report from CodeRabbit outlining issues in AI-generated code
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [12] The quote is from 30 seconds into this video
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=42V0xazKHZA" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=42V0xazKHZA&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [13] 5-minute TED talk about how ChatGPT has changed the way people speak and, potentially, think
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=ZkXrTHpnQrQ" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=ZkXrTHpnQrQ&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [14] An article on how words that ChatGPT uses have become more common in spoken language
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.scientificamerican.com/article/chatgpt-is-changing-the-words-we-use-in-conversation/" rel="noopener noreferrer"&gt;https://www.scientificamerican.com/article/chatgpt-is-changing-the-words-we-use-in-conversation/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [15] AI eroded doctors' ability to spot cancer
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.bloomberg.com/news/articles/2025-08-12/ai-eroded-doctors-ability-to-spot-cancer-within-months-in-study" rel="noopener noreferrer"&gt;https://www.bloomberg.com/news/articles/2025-08-12/ai-eroded-doctors-ability-to-spot-cancer-within-months-in-study&lt;/a&gt;&lt;br&gt;
(it's paywalled, so I haven't read this one)&lt;/p&gt;

&lt;h5&gt;
  
  
  [16] An article written by my mentor on why he doesn’t use AI
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://accu.org/journals/overload/33/190/balaam/" rel="noopener noreferrer"&gt;https://accu.org/journals/overload/33/190/balaam/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [17] Article from 2022 highlighting the labourers who train AI models in poor conditions
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/" rel="noopener noreferrer"&gt;https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [18] WIRED article about workers training AI whose low pay does not reflect the long hours they need to spend on the platforms
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/" rel="noopener noreferrer"&gt;https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [19] CBS News article, interviewing workers who describe their poor working conditions
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/" rel="noopener noreferrer"&gt;https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [20] A 4-part blog telling the story of how an AI agent had its PR rejected, then published a blog post making personal attacks against the person who rejected it
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/" rel="noopener noreferrer"&gt;https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [21] UNESCO article on their uses of AI to help education in Malawi, Zambia and Zimbabwe
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.unesco.org/en/articles/using-digital-and-ai-good-harnessing-technology-eastern-and-southern-africa-teacher-and-childrens" rel="noopener noreferrer"&gt;https://www.unesco.org/en/articles/using-digital-and-ai-good-harnessing-technology-eastern-and-southern-africa-teacher-and-childrens&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [22] Article from the Human Rights Data Analysis Group on how they used LLMs to help analyse data on police abuse in California in ways that wouldn’t have been possible before
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://hrdag.org/pulling-back-the-curtain-on-llms-policing-data/" rel="noopener noreferrer"&gt;https://hrdag.org/pulling-back-the-curtain-on-llms-policing-data/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  [23] The story of journalists and the Pink Power Ranger taking down a network of white supremacist sites
&lt;/h5&gt;

&lt;p&gt;A 45-minute talk from a conference: &lt;a href="https://www.youtube.com/watch?v=5Wva0cyliVk" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=5Wva0cyliVk&lt;/a&gt;&lt;br&gt;
Shorter 13-minute video: &lt;a href="https://www.youtube.com/watch?v=lJsS8lqCpwU" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=lJsS8lqCpwU&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>softwareengineering</category>
      <category>software</category>
    </item>
    <item>
      <title>The 6 key questions I ask when reviewing code</title>
      <dc:creator>jertyuiop</dc:creator>
      <pubDate>Sat, 11 Jun 2022 18:20:00 +0000</pubDate>
      <link>https://forem.com/jertyuiop/the-6-key-questions-i-ask-when-reviewing-code-3ch</link>
      <guid>https://forem.com/jertyuiop/the-6-key-questions-i-ask-when-reviewing-code-3ch</guid>
      <description>&lt;p&gt;In the 3 and a half years I've been working as a software developer I've done a lot of code reviews, below are the questions I ask myself and my general thought process when reviewing code. I’d be interested to hear what other people do the same or differently to me - please do leave a comment once you’ve finished reading!&lt;/p&gt;

&lt;h2&gt;
  
  
  Before looking at the code, I read the ticket description
&lt;/h2&gt;

&lt;p&gt;Where I work we are assigned relatively small tasks called tickets, so the first thing I do is read through the ticket before looking at the code review. This may seem obvious (or unnecessary), but I think it's important to know what the code is meant to achieve as well as checking that it makes logical sense and works. I like to have this in the back of my mind as I go through the code review, so I usually keep the ticket open on a different screen so I can look at it alongside the review if needed.&lt;/p&gt;

&lt;p&gt;Then for each code change I encounter, I ask…&lt;/p&gt;

&lt;h2&gt;
  
  
  1. What is this change doing and Why?
&lt;/h2&gt;

&lt;p&gt;The what may be as simple as removing import statements, adding an if check or creating a new method.&lt;br&gt;
The why could then be "I removed these imports as they're no longer used", "We need to check for this condition before doing the below" or "I need to add in this new functionality".&lt;/p&gt;

&lt;p&gt;For changes like adding a new method I would ask this for the bigger picture - what is this method doing - but I would also ask this for each line/code block within the method.&lt;/p&gt;

&lt;p&gt;For tests I would ask: what case is being tested, what is the expected outcome and why?&lt;/p&gt;

&lt;p&gt;If it takes me a long time to understand this part, it could be because I am unfamiliar with the code base or the practices used, but it's worth adding a comment to ask if I'm not sure what's happening. If it’s quite complicated, then perhaps the code could be slightly refactored, a method/variable could be renamed, or a comment could be added in the code to make it clearer.&lt;/p&gt;

&lt;p&gt;Once I understand the logic, I ask myself Should this be happening? Does it align with what is specified in the ticket description?&lt;/p&gt;

&lt;p&gt;What is not happening here? Can I think of any other cases that should be accounted for? Are there tests?&lt;/p&gt;

&lt;p&gt;Once I understand the what and why, I then think about How (for changes more complicated than just removing unused code).&lt;/p&gt;

&lt;h2&gt;
  
  
  2. How is this code change achieving what is intended?
&lt;/h2&gt;

&lt;p&gt;Will this functionally do what is required by the ticket?&lt;/p&gt;

&lt;p&gt;Can I spot any potential issues where this may behave incorrectly?&lt;/p&gt;

&lt;p&gt;Are there any improvements that can be made? This could be performance-related or otherwise, e.g. could an if check be short-circuited, is there repeated code that could be made into a method, are there any parts that are not used and can be removed?&lt;/p&gt;

&lt;p&gt;Can I think of any other cases that could be caught here? This overlaps a bit with what I said in the previous section, but this is slightly different because we’re looking at a lower level here - will the code behave correctly in all cases, e.g. is a null/empty/invalid input handled correctly?&lt;/p&gt;
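The short-circuiting, repeated-code and null-handling points above can be sketched in a few lines. This is a hypothetical example - the function names and data shapes are invented for illustration, not taken from any particular code base:

```python
def is_valid_order(order):
    # Short-circuit: the cheap None check runs first, so the dictionary
    # lookup on the right is skipped entirely when order is None -
    # this is also how the null/invalid-input case gets handled.
    return order is not None and order.get("total", 0) > 0

def format_label(name):
    # Repeated formatting logic pulled into one method instead of being
    # copy-pasted at every call site.
    return name.strip().lower().replace(" ", "-")

orders = [None, {"total": 0}, {"total": 25}]
valid = [o for o in orders if is_valid_order(o)]
labels = [format_label(n) for n in ["My Ticket", " Another Ticket "]]
```

In a review I'd be asking exactly these questions of the original: does the condition order matter, and could the duplicated formatting live in one place?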

&lt;h2&gt;
  
  
  3. Then I think about Where - where is this functionality used?
&lt;/h2&gt;

&lt;p&gt;This is particularly for changes that would affect other places, like changing a public method’s signature or functionality, or a stored procedure.&lt;/p&gt;

&lt;p&gt;Where is this functionality used? Are there other places that already use it and might need to be edited to accommodate the change?&lt;/p&gt;

&lt;p&gt;Have all instances been updated that need to be? E.g. for a ticket to update all instances of X check, have all the X checks been updated or might there be more somewhere else?&lt;/p&gt;

&lt;p&gt;If we’re using a variable from elsewhere, what is its value and does it make sense here? E.g. if a string variable is used, check the original string and make sure the wording still makes sense for this use case.&lt;/p&gt;

&lt;p&gt;Sometimes I will search for this myself in the code base if I have an idea where to look, otherwise we can leave a comment to ask if the author has checked this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JfN56wTK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uki2twbk6yg6u0qkc5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JfN56wTK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uki2twbk6yg6u0qkc5i.png" alt="Image description" width="880" height="631"&gt;&lt;/a&gt; Source: &lt;a href="https://www.royvanrijn.com/blog/2016/08/saving-the-world-with-a-code-review/"&gt;Saving the world with a code review&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. For text or messages that are displayed/returned I would ask - Who will see this message?
&lt;/h2&gt;

&lt;p&gt;If an external user/customer will see this message, would it make sense to them, or does it only make sense to the developers who wrote it? We may well need input from someone in a different department (like Business Analysis or Product) to help with the wording. Another consideration is whether we want to display this level of information to external users - are we telling them too much about our backend systems?&lt;/p&gt;

&lt;p&gt;If this is for messages being logged, will this message make sense if you come across it in a log file? Could it do with more context, or a reference/id so that we can debug more easily?&lt;/p&gt;

&lt;p&gt;If there are existing code comments, do they need updating? For new comments, do they accurately describe what’s happening, especially given that things may have changed during the review process?&lt;/p&gt;

&lt;h2&gt;
  
  
  5. For automation that is triggered or on a schedule - When will this be run?
&lt;/h2&gt;

&lt;p&gt;This is something I come across more rarely, but for this sort of change I would check that it generally makes sense and aligns with what is outlined in the ticket.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Finally, after reading all the code, am I satisfied that this will achieve all the ticket’s goals?
&lt;/h2&gt;

&lt;p&gt;Usually the answer is yes, but if something seems to behave differently or is missing then I would leave a comment to ask about it. Of course, things may have changed since the ticket description was written, but even so it’s worth asking to make that clear to you and to other reviewers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extra Idea: leave a positive comment!
&lt;/h2&gt;

&lt;p&gt;So often the only comments I leave on code reviews are negative, and I wouldn’t leave any comments at all if I didn’t find problems with the code. After watching &lt;a href="https://www.youtube.com/watch?v=XY6eA2_2hOg"&gt;the video mentioned below&lt;/a&gt;, I’ve been challenged to write a positive comment in code reviews to help encourage my colleagues. Not to force it or be patronising, but if there’s something I genuinely think is good or interesting, then instead of keeping it in my head, why not put it in a comment as well?&lt;/p&gt;




&lt;p&gt;Thank you for reading until the end, I hope you found some of that interesting! Is there anything else that you think about when reviewing code? What do you do differently?&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommended watching/reading
&lt;/h2&gt;

&lt;p&gt;“The Art of Giving and Receiving Code Reviews (Gracefully)” by Alexandra Hill (under 10 minutes) - &lt;a href="https://www.youtube.com/watch?v=XY6eA2_2hOg"&gt;https://www.youtube.com/watch?v=XY6eA2_2hOg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I found this very useful, in particular the “As a reviewer we can…” and “As an author we can…” sections, helping us to bring out the best in each other when reviewing code.&lt;br&gt;
(Here is the equivalent blog post if you’d rather read it: &lt;a href="https://www.alexandra-hill.com/2018/06/25/the-art-of-giving-and-receiving-code-reviews/"&gt;https://www.alexandra-hill.com/2018/06/25/the-art-of-giving-and-receiving-code-reviews/&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Source for the title picture: &lt;a href="https://www.securedevelopment.org/resources/code-review/"&gt;Code Review&lt;/a&gt;&lt;/p&gt;

</description>
      <category>codereview</category>
    </item>
  </channel>
</rss>
