<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Singaraja33 </title>
    <description>The latest articles on Forem by Singaraja33  (@singarajatech).</description>
    <link>https://forem.com/singarajatech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3714811%2F77f3269a-8d54-4c49-98ec-c757dc471ffc.jpg</url>
      <title>Forem: Singaraja33 </title>
      <link>https://forem.com/singarajatech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/singarajatech"/>
    <language>en</language>
    <item>
      <title>China’s AI strategy is working, and the west might be falling behind</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Thu, 16 Apr 2026 05:49:41 +0000</pubDate>
      <link>https://forem.com/singarajatech/chinas-ai-strategy-is-working-and-the-west-might-be-falling-behind-2k3g</link>
      <guid>https://forem.com/singarajatech/chinas-ai-strategy-is-working-and-the-west-might-be-falling-behind-2k3g</guid>
      <description>&lt;p&gt;Let's start by saying something that is becoming more obvious over time: China is not just participating in the AI race. As it does in many other industries and sectors, it is playing a completely different game.&lt;/p&gt;

&lt;p&gt;While much of the world talks about innovation in terms of startups, disruption and private investment, China has taken a far more coordinated, long-term approach, one that blends government policy, industrial planning and massive financial backing into a single, focused strategy. And as happened before in sectors such as the electric car industry, it is starting to show results.&lt;/p&gt;

&lt;p&gt;To understand why China’s AI and IT sectors are becoming so competitive, you have to look beyond the technology itself. The real story is how the system around that technology is built. In China, artificial intelligence is not just a business opportunity but a genuine national priority for its institutions, and this approach changes everything.&lt;/p&gt;

&lt;p&gt;Instead of relying purely on market forces, China actively directs resources into key sectors. Through state-backed funding, favorable regulations and long-term planning, the government has spent years creating an environment where companies can scale quickly without the same pressures seen in Western markets. As a result, companies like Baidu, Alibaba or Tencent are not just competing in AI; they are operating within an ecosystem designed to accelerate them from the ground up. These companies, along with hundreds of others, benefit from access to vast amounts of data, strong infrastructure support and policies that prioritize technological self-sufficiency.&lt;/p&gt;

&lt;p&gt;Data, in particular, is one of China’s biggest advantages. AI systems improve with scale, and China has scale at a level few countries can match. With over a billion people interacting daily with digital platforms, both domestic and foreign, the amount of data generated is simply enormous, and this creates an extremely powerful feedback loop in which AI models can be trained, tested and refined faster.&lt;/p&gt;

&lt;p&gt;But the advantage is not just quantity. It is also something key: access.&lt;br&gt;
In many Western countries, data usage is heavily regulated, often limiting how companies can train and deploy AI systems. In China, by contrast, the balance between privacy and innovation is handled very differently, allowing companies to move faster in certain areas, especially in applications like facial recognition, smart cities or fintech.&lt;/p&gt;

&lt;p&gt;Another key factor is speed of execution. As we discussed in previous articles, Chinese companies are known for rapid iteration. In China, products are launched quickly, tested in real-world conditions and improved continuously. This “deploy first, refine later” approach contrasts sharply with the more cautious rollout strategies often seen elsewhere, and the inevitable result is an ecosystem where AI does not stay in research labs for long; it gets deployed into everyday life.&lt;/p&gt;

&lt;p&gt;You can see this in areas like digital payments, logistics or urban infrastructure. AI is already deeply integrated into how cities function, how goods move and how services are delivered. In China, all of this is not a future concept; it is infrastructure.&lt;/p&gt;

&lt;p&gt;And then there is the financial model, where China’s approach to funding AI is far more aggressive and strategic than what we are used to seeing in the West. Instead of waiting for private capital to decide where to invest, the government often guides capital into sectors it considers critical, which drastically reduces risk for companies and allows them to focus on scaling rather than short-term profitability.&lt;/p&gt;

&lt;p&gt;In contrast, as mentioned, many Western AI companies face constant pressure to justify costs, especially given how expensive AI development has become. Training large models requires massive computing power, and maintaining them is even more costly. Chinese firms, supported by national priorities, often have more flexibility to absorb these costs in the short term, and this gives them a significant advantage.&lt;/p&gt;

&lt;p&gt;Despite all this, the Chinese system is of course not without challenges, and the truth is that China’s AI industry faces real constraints in the global race, particularly when it comes to advanced semiconductor technology. Export restrictions from countries like the US have clearly limited access to cutting-edge chips, forcing Chinese companies to accelerate their own domestic chip development. We see this as both a weakness and, potentially, a long-term strength if it leads to greater self-reliance.&lt;/p&gt;

&lt;p&gt;There are also structural risks in a system that relies heavily on centralized direction, typical of China’s governance model.&lt;/p&gt;

&lt;p&gt;Innovation can become more conformist, and companies may prioritize alignment with national goals over experimentation. Still, the overall trajectory is clear: China is not just catching up in AI; in some areas, particularly applied AI at scale, it is already leading.&lt;/p&gt;

&lt;p&gt;And what comes next could be even more significant. Over the next few years, China’s AI industry is expected to expand deeper into manufacturing, healthcare and autonomous systems. The integration of AI with robotics, smart infrastructure and industrial automation could redefine productivity at a national level, and this is where China’s model might become especially powerful, because it does not treat AI as a standalone sector but as a layer that enhances every other sector.&lt;/p&gt;

&lt;p&gt;That approach could give China a long term advantage, not necessarily by having the most advanced models at any given moment, but by embedding AI more deeply and more widely across its economy.&lt;/p&gt;

&lt;p&gt;Meanwhile, the global AI race is becoming less about who has the best technology and more about who can deploy it at scale, integrate it into society and sustain it economically.&lt;/p&gt;

&lt;p&gt;In that context, China’s strategy starts to make a lot of sense. China has made it clear that it is not trying to win the race with a single breakthrough; it is building an entire system designed to keep moving forward, consistently, at scale and with intent. And if that system continues to evolve the way it has over the past several years, China will not just be a competitor in AI; it will be one of the forces defining what AI becomes.&lt;/p&gt;

&lt;p&gt;*[How China Caught up on AI and might win the race]&lt;a href="https://time.com/7358175/china-us-ai-race/" rel="noopener noreferrer"&gt;https://time.com/7358175/china-us-ai-race/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;*[China's accelerating AI industry, a multifaceted approach]&lt;a href="https://beijingpost.com/china-s-accelerating-artificial-intelligence-industry-a-multifaceted-approach" rel="noopener noreferrer"&gt;https://beijingpost.com/china-s-accelerating-artificial-intelligence-industry-a-multifaceted-approach&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Author: Translock IT, Luis Carlos Yanguas Gómez de la Serna&lt;/p&gt;

</description>
      <category>chinaai</category>
      <category>ai</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Geopolitics and its influence on AI

Read it here 👇🏻 

https://luisyanguas22.medium.com/los-conflictos-globales-y-su-enorme-impacto-en-el-futuro-de-la-inteligencia-artificial-37d87c4f9dfd</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Sat, 11 Apr 2026 15:07:42 +0000</pubDate>
      <link>https://forem.com/singarajatech/la-geopolitica-y-su-influencia-en-la-ia-leelo-aqui-546k</link>
      <guid>https://forem.com/singarajatech/la-geopolitica-y-su-influencia-en-la-ia-leelo-aqui-546k</guid>
      <description>&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://luisyanguas22.medium.com/los-conflictos-globales-y-su-enorme-impacto-en-el-futuro-de-la-inteligencia-artificial-37d87c4f9dfd" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;luisyanguas22.medium.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>The key and crucial role of AI Labelers in the world of AI

From Translock IT. Read here 👇🏻

https://open.forem.com/singarajatech/the-invisible-people-teaching-ai-what-humans-mean-and-why-it-matters-more-than-you-think-5pk</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Wed, 08 Apr 2026 15:08:49 +0000</pubDate>
      <link>https://forem.com/singarajatech/the-key-and-crucial-role-of-ai-labelers-in-the-world-of-ai-from-translock-it-read-here-5gcn</link>
      <guid>https://forem.com/singarajatech/the-key-and-crucial-role-of-ai-labelers-in-the-world-of-ai-from-translock-it-read-here-5gcn</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://open.forem.com/singarajatech/the-invisible-people-teaching-ai-what-humans-mean-and-why-it-matters-more-than-you-think-5pk" class="crayons-story__hidden-navigation-link" rel="noopener noreferrer"&gt;The invisible people teaching AI what humans mean and why it matters more than you think&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/singarajatech" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3714811%2F77f3269a-8d54-4c49-98ec-c757dc471ffc.jpg" alt="singarajatech profile" class="crayons-avatar__image" width="718" height="990"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/singarajatech" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Singaraja33 
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Singaraja33 
                
              
              &lt;div id="story-author-preview-content-3472258" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/singarajatech" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3714811%2F77f3269a-8d54-4c49-98ec-c757dc471ffc.jpg" class="crayons-avatar__image" alt="" width="718" height="990"&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Singaraja33 &lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://open.forem.com/singarajatech/the-invisible-people-teaching-ai-what-humans-mean-and-why-it-matters-more-than-you-think-5pk" class="crayons-story__tertiary fs-xs" rel="noopener noreferrer"&gt;&lt;time&gt;Apr 8&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://open.forem.com/singarajatech/the-invisible-people-teaching-ai-what-humans-mean-and-why-it-matters-more-than-you-think-5pk" id="article-link-3472258" rel="noopener noreferrer"&gt;
          The invisible people teaching AI what humans mean and why it matters more than you think
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ailabelers"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ailabelers&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/softwaredevelopment"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;softwaredevelopment&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/worldofai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;worldofai&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
            &lt;a href="https://open.forem.com/singarajatech/the-invisible-people-teaching-ai-what-humans-mean-and-why-it-matters-more-than-you-think-5pk#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center" rel="noopener noreferrer"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            6 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
    </item>
    <item>
      <title>The invisible people teaching AI what humans mean and why it matters more than you think</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Wed, 08 Apr 2026 15:01:24 +0000</pubDate>
      <link>https://forem.com/singarajatech/the-invisible-people-teaching-ai-what-humans-mean-and-why-it-matters-more-than-you-think-5pk</link>
      <guid>https://forem.com/singarajatech/the-invisible-people-teaching-ai-what-humans-mean-and-why-it-matters-more-than-you-think-5pk</guid>
      <description>&lt;p&gt;There is a strange moment that happens inside every modern AI system, but almost no one ever sees it...&lt;/p&gt;

&lt;p&gt;Before an AI writes a sentence, answers a question or suggests a solution, something far less visible has already shaped it. That something is not code in the usual sense, nor a specific mathematical formula, but thousands of small human judgments about what “good” actually means.&lt;/p&gt;

&lt;p&gt;Before you get any output from an AI, someone decided whether an answer was helpful or confusing, correct or misleading, safe or unsafe, useful or simply better than another option. These people are known as AI labelers, and without them, the systems we now treat as intelligent would simply not know how to behave at all.&lt;/p&gt;

&lt;p&gt;Most people imagine artificial intelligence as something that learns directly from the internet, as if it were absorbing knowledge in a neutral, automatic way, but that idea misses something important: AI does not just learn from data. It learns from interpretation, and interpretation always requires humans.&lt;/p&gt;

&lt;p&gt;This interpretation stage is where AI labelers come in. Their work sits at a hidden layer of the entire AI ecosystem, quietly shaping how models understand language, respond to prompts and decide what tone to take when answering.&lt;/p&gt;

&lt;p&gt;At a basic level, the job of these people sounds simple. They take raw data and add structure to it. A sentence might be labeled as toxic or safe, two AI-generated answers might be compared and ranked, an image might be labeled with objects or context, a user query might be classified by intent...&lt;/p&gt;

&lt;p&gt;On paper, this looks like routine annotation work, but in practice it quickly becomes far more complicated, because language is rarely clear and human intention is almost never obvious. A sarcastic comment might look harmful or playful depending on interpretation, a confident answer might be technically correct but still misleading, and a vague explanation might be good enough for one person and absolutely unacceptable for another.&lt;/p&gt;

&lt;p&gt;Every decision requires judgment, not just classification, and those judgments do not stay isolated; they become training signals that shape how AI systems behave at scale.&lt;/p&gt;

&lt;p&gt;To understand why this matters, it helps to zoom out and look at how modern AI actually learns. Large language models are not simply trained once and left alone; they go through multiple stages of refinement. First, they learn patterns from vast datasets of text and code, and then humans step in to guide their behavior, helping the model understand what kinds of responses are preferable.&lt;/p&gt;

&lt;p&gt;This second stage is where AI starts to feel less like a prediction engine and more like a system that understands expectations. Here, labelers compare answers, rank outputs and highlight mistakes that are not always obvious errors but subtle misalignments with what a human would consider useful or appropriate. This process is often described as reinforcement learning from human feedback, but behind that technical phrase is something very simple: humans teaching machines what they prefer.&lt;/p&gt;

&lt;p&gt;And these humans do not just teach what is true, but more precisely what feels acceptable, clear or safe, a distinction that marks where things become interesting.&lt;/p&gt;

&lt;p&gt;Because once AI systems begin learning from human preference, we should all ask ourselves a deeper question, since those preferences are not uniform: they vary between people, cultures and contexts.&lt;/p&gt;

&lt;p&gt;Even within strict guidelines, interpretation is unavoidable, and what counts as helpful or safe to one person may feel incomplete or overly cautious to another. Multiply those small differences across millions of training examples, and something subtle begins to emerge.&lt;/p&gt;

&lt;p&gt;AI behavior then becomes a reflection of aggregated human judgment rather than pure data, and this is where the question of neutrality comes up, because human judgment is simply never neutral.&lt;/p&gt;

&lt;p&gt;Labelers are not writing the final outputs that users see, but they are shaping the boundaries of what those outputs can be. Labelers influence tone, caution level, clarity and even the style of reasoning a model tends to use.&lt;/p&gt;

&lt;p&gt;Most users will never see this layer of influence because they are used to interacting with polished systems that feel coherent and consistent, but behind that coherence is a distributed network of human decisions that quietly define what the system considers acceptable intelligence.&lt;/p&gt;

&lt;p&gt;From a more philosophical perspective, knowledge systems have traditionally been built around authorship. Someone writes, someone edits and someone is responsible for the final product. Modern AI systems take a completely different angle, because authorship becomes distributed across many layers. Data comes first from millions of sources, then models are trained on statistical patterns, then labelers adjust interpretation, and finally engineers shape constraints. The final output is the result of all these layers interacting.&lt;/p&gt;

&lt;p&gt;Knowing how this chain works, we must realize that AI labelers occupy a strange position in it. They are not visible in the final product, yet they influence how it behaves. They are part of the system, but not part of the output. Their role is invisible, but structurally essential. This creates a form of invisible authorship, where influence exists without recognition.&lt;/p&gt;

&lt;p&gt;And with all this comes a deeper question about responsibility, because if AI behavior is shaped by distributed human judgment, then responsibility is also distributed. It does not belong to a single actor; it spreads across the designers, trainers, companies and labelers who collectively define how the system behaves.&lt;/p&gt;

&lt;p&gt;As AI systems become more advanced, the nature of labeling work also changes. Early tasks were relatively straightforward and mainly involved classification or tagging, but newer, more advanced systems require far more nuanced evaluation. Labelers are now asked to compare complex reasoning chains, detect subtle inconsistencies and judge qualities like clarity, usefulness and coherence. In some cases, they are not just evaluating answers but evaluating how those answers were constructed.&lt;/p&gt;

&lt;p&gt;This shifts labeling from simple annotation into something closer to structured judgment of thought itself, and that makes the work far more philosophical than it first appears, because every evaluation requires a decision about what “good reasoning” looks like, even when there is no single correct answer.&lt;/p&gt;

&lt;p&gt;Over time, this forces a constant confrontation with the ambiguity of human language and thought. Meaning is no longer fixed; it depends on context, intent and perspective. And AI labeling makes that visible at scale.&lt;/p&gt;

&lt;p&gt;So even if AI feels autonomous, its behavior is carefully guided and optimized not only for accuracy but also for alignment with human preferences about how intelligence should behave.&lt;br&gt;
This does not reduce its power, but it reveals something important about how intelligence is constructed in practice. It is not just computational but interpretive, and interpretation requires humans.&lt;/p&gt;

&lt;p&gt;In our opinion, it is clear that the work of AI labelers is part of one of the most important processes in modern technology. They help define how machines understand human meaning. They are not just labeling data; they are genuinely shaping the early behavioral grammar of AI systems. They decide, through countless small judgments, how intelligence should respond when it encounters uncertainty, disagreement or ambiguity. And the most striking part is that this influence remains almost entirely invisible.&lt;/p&gt;

&lt;p&gt;In the near future, AI will not be defined only by model size or computational power, but also by something far less visible but equally important: the accumulated human judgment embedded in its training.&lt;/p&gt;

&lt;p&gt;AI labelers are the ones shaping that judgment today.&lt;br&gt;
And in doing so, they are quietly influencing not just how machines learn to understand us, but how we gradually come to understand them in return.&lt;/p&gt;

&lt;p&gt;[Why companies are paying huge amounts to AI Labelers]&lt;a href="https://bernardmarr.com/why-companies-are-paying-huge-money-for-ai-labelers/" rel="noopener noreferrer"&gt;https://bernardmarr.com/why-companies-are-paying-huge-money-for-ai-labelers/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[AI is African Intelligence]&lt;a href="https://www.404media.co/ai-is-african-intelligence-the-workers-who-train-ai-are-fighting-back/" rel="noopener noreferrer"&gt;https://www.404media.co/ai-is-african-intelligence-the-workers-who-train-ai-are-fighting-back/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.translockit.com" rel="noopener noreferrer"&gt;www.translockit.com&lt;/a&gt;&lt;br&gt;
Author: Luis Carlos Yanguas Gómez de la Serna&lt;br&gt;
AI, Software Development.&lt;/p&gt;

</description>
      <category>ailabelers</category>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>worldofai</category>
    </item>
    <item>
      <title>Humans and the retention of control in a world where AI thinks and decides alongside us</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Sat, 04 Apr 2026 06:24:25 +0000</pubDate>
      <link>https://forem.com/singarajatech/humans-and-the-retain-of-control-in-a-world-where-ai-thinks-and-decides-alongside-us-1930</link>
      <guid>https://forem.com/singarajatech/humans-and-the-retain-of-control-in-a-world-where-ai-thinks-and-decides-alongside-us-1930</guid>
      <description>&lt;p&gt;It is not the first time we have written on this topic, but its relevance makes it worth revisiting, because the evolution of AI as a whole could easily mean that just a few months from now, you might make an important decision and not even remember whether it was actually yours. Not because you forgot, but because the line between your thinking and the machine’s suggestion will simply have quietly disappeared.&lt;/p&gt;

&lt;p&gt;That’s not something futuristic anymore, it’s already happening.&lt;/p&gt;

&lt;p&gt;As mentioned in previous articles, we are entering a phase where artificial intelligence does not just assist us; it participates in our processes, suggests things to us and is able to refine, anticipate and sometimes even act for us. And while that sounds like progress (and in many ways it is!), it raises a deeper question that most of us are only beginning to understand.&lt;/p&gt;

&lt;p&gt;This is not just a technical problem but a philosophical one, a design challenge and ultimately a human one. For many years, software followed a simple pattern: humans sent instructions and the machine executed them. That relationship has now changed.&lt;br&gt;
Modern AI systems no longer wait for explicit commands; instead they anticipate intent, generate options and shape decisions before you even realize it. They act less like tools and more like collaborators. This shift is subtle, but it is probably one of the most important changes in the history of software.&lt;/p&gt;

&lt;p&gt;Once a system begins to shape your options, it begins to shape your decisions, and when that happens, control is no longer about who clicks the button; it becomes a question of who influenced what the button does.&lt;/p&gt;

&lt;p&gt;In the midst of all this, most people still believe they are in control simply because they are the ones interacting with the system, but we must understand that control is not about interaction; it is about understanding and intention.&lt;br&gt;
If a system suggests the best option, frames the problem and filters the available information, your role changes: you are no longer fully deciding, you are merely confirming, which is in fact a totally different thing.&lt;/p&gt;

&lt;p&gt;This creates an illusion of control. You feel in charge, but the system has already narrowed the space of possibilities. You are choosing, but only within boundaries you did not define.&lt;br&gt;
And don't get me wrong: this is not necessarily harmful. In many cases it is actually incredibly useful, but it changes the whole nature of decision making in a way that is easy to overlook, and that we are obliged to at least understand.&lt;/p&gt;

&lt;p&gt;Now consider what happens when something goes wrong... An AI system helps write production code, approve a financial decision or recommend a medical action. The outcome is flawed or harmful. At that point, a difficult question emerges: who is responsible? Whom can we blame?&lt;/p&gt;

&lt;p&gt;Traditional systems of responsibility rely on clear agency: a person makes a decision, takes an action and answers for the result. But AI dramatically disrupts this clarity, because most decisions now become the result of a mixture of human input, machine suggestion, training data and system design.&lt;br&gt;
Responsibility does not disappear; it becomes distributed, spreading across layers that are difficult and sometimes nearly impossible to separate. And when responsibility becomes difficult to locate, accountability becomes weaker.&lt;/p&gt;

&lt;p&gt;There is another big change happening at the same time, one that is less visible but equally important: we are beginning to outsource not only tasks, but understanding itself. These days it is increasingly common to accept generated code without fully reading it, to rely on summaries instead of engaging with original sources and to trust explanations instead of building our own reasoning. This is efficient and often practical, but it introduces a quiet, risky dependency.&lt;/p&gt;

&lt;p&gt;Over time, we begin to understand less about the systems we rely on. Alarming as that sounds, this pattern has existed before. Take calculators: when they appeared, they reduced the need for manual arithmetic. GPS reduced the need for spatial navigation. There are other examples from the past, but the difference now is that AI operates at a higher cognitive level. It affects how we think, how we reason and how we make decisions.&lt;/p&gt;

&lt;p&gt;If this trend continues without reflection, we risk becoming operators of systems we no longer truly understand. Of course, nobody is saying we should control every output or understand every technical detail; that is no longer realistic. What is clear is that meaningful control becomes more practical and more necessary than ever before.&lt;/p&gt;

&lt;p&gt;We should keep making the effort to recognize when not to trust the system, understanding that blind trust is not control and can lead us to simply delegate without oversight.&lt;/p&gt;

&lt;p&gt;Real control includes the ability to question outputs, to pause and to step outside the system when something just feels wrong. It also means understanding the boundaries of the system. You do not need to know every parameter of a model, but you should certainly have a sense of what it does well, where it tends to fail and what kind of information shapes its behavior. Without that awareness, the system becomes a black box you depend on rather than a tool you use, and that is where the danger arises.&lt;/p&gt;

&lt;p&gt;Among all the concepts and ideas mentioned above, the most important is that truly meaningful control requires keeping human intent at center stage: understanding that AI can optimize, suggest and automate, but should not replace the underlying reason behind decisions. Humans should always be the ones defining goals, and systems should be the excellent tools that help us execute them. But when systems begin to influence or redefine those human goals, control starts to slip away.&lt;/p&gt;

&lt;p&gt;There is a common idea in AI design that many may have heard of, known as “human in the loop”. It suggests that as long as a human is involved in the process, everything remains under control. Nothing could be further from the truth. In practice, this often becomes a simple formality: the system generates an output and the human approves it. That is not meaningful oversight but passive validation.&lt;/p&gt;

&lt;p&gt;True human involvement requires active engagement. It requires attention, critical thinking and the ability to intervene before outcomes are finalized. Without that, the human role becomes only symbolic rather than functional, and the machine will remain a defining factor.&lt;/p&gt;
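&lt;p&gt;The contrast between passive validation and active engagement can be sketched in a few lines of Python. This is a minimal illustration, not a real library; all function and variable names here are hypothetical:&lt;/p&gt;

```python
# Hypothetical sketch: passive "human in the loop" vs. active review.

def passive_loop(ai_output, approve):
    # The human merely confirms: whatever the system produced goes through.
    return ai_output if approve(ai_output) else None

def active_loop(ai_output, review):
    # The reviewer can approve, amend, or reject before anything is finalized.
    verdict, amended = review(ai_output)
    if verdict == "approve":
        return ai_output
    if verdict == "amend":
        return amended
    return None  # rejected: the outcome is never finalized

# A rubber-stamp reviewer approves everything it is shown.
rubber_stamp = lambda output: True

# An engaged reviewer inspects the output and intervenes when needed.
def engaged_review(output):
    if "DROP TABLE" in output:        # a toy red flag for illustration
        return ("reject", None)
    if output.endswith(";;"):         # a small fixable defect
        return ("amend", output.rstrip(";") + ";")
    return ("approve", None)
```

&lt;p&gt;With a rubber-stamp reviewer, even a dangerous output sails through; the engaged reviewer blocks it. The human role in the first case is symbolic, in the second functional.&lt;/p&gt;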

&lt;p&gt;In a world where almost anything can be generated, the real question is not whether something can be built but whether it should be used.&lt;/p&gt;

&lt;p&gt;It is easy to think of this as a niche concern, something relevant only to developers or AI researchers, but that would be a big mistake because AI systems are already embedded in critical areas such as healthcare, finance, education or law. They influence decisions that affect real lives, and the way we design and interact with these systems will shape how responsibility, trust and authority function in society.&lt;/p&gt;

&lt;p&gt;If meaningful control is lost, the consequences go beyond technical errors. They might affect accountability, decision making and the balance between efficiency and human values.&lt;/p&gt;

&lt;p&gt;The solution is not simple, and it's definitely not to reject AI or slow its progress. That is neither realistic nor necessary. Instead, as we mentioned before, the real shift needs to happen in how we relate to these systems. This means questioning outputs instead of accepting them automatically. It means understanding systems well enough to recognize their limits. It means designing workflows where human reasoning remains central, even when machines handle most of the execution. And it also means accepting a new kind of responsibility, not just for what we directly create but for what we allow systems to create on our behalf.&lt;/p&gt;

&lt;p&gt;In our opinion, the future of AI is not about machines suddenly taking control but about humans gradually giving it away, sometimes consciously, but often without noticing. Meaningful control does not disappear all at once; it fades through convenience, efficiency and increasing trust in systems that seem to work most of the time.&lt;/p&gt;

&lt;p&gt;If one thing is clear, it is that we should ultimately preserve control, and perhaps redefine it in a way that actually fits the future we are building.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>software</category>
      <category>humanthinking</category>
      <category>programming</category>
    </item>
    <item>
      <title>How Google is risking to lose a big chunk of the future tech race.

Read it in Medium:

https://luisyanguas22.medium.com/internet-is-changing-and-the-invincible-google-might-be-the-first-victim-76059f1582b2</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Wed, 01 Apr 2026 04:39:51 +0000</pubDate>
      <link>https://forem.com/singarajatech/how-google-is-risking-to-lose-a-big-chunk-of-the-future-tech-race-read-it-in-medium-2fao</link>
      <guid>https://forem.com/singarajatech/how-google-is-risking-to-lose-a-big-chunk-of-the-future-tech-race-read-it-in-medium-2fao</guid>
      <description>&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://luisyanguas22.medium.com/internet-is-changing-and-the-invincible-google-might-be-the-first-victim-76059f1582b2" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;luisyanguas22.medium.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>When Melania Trump walked the other day along a Figure AI humanoid...

Read here 👇🏻👌🏻

https://future.forem.com/singarajatech/figure-ai-tesla-optimus-and-the-crazy-evolution-of-robotic-technologies-2hhn</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Fri, 27 Mar 2026 07:08:39 +0000</pubDate>
      <link>https://forem.com/singarajatech/when-melania-trump-walked-the-other-day-along-a-figure-ai-humanoid-read-here-1bp3</link>
      <guid>https://forem.com/singarajatech/when-melania-trump-walked-the-other-day-along-a-figure-ai-humanoid-read-here-1bp3</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://future.forem.com/singarajatech/figure-ai-tesla-optimus-and-the-crazy-evolution-of-robotic-technologies-2hhn" class="crayons-story__hidden-navigation-link" rel="noopener noreferrer"&gt;Figure AI, Tesla Optimus and the crazy evolution of robotic technologies.&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/singarajatech" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3714811%2F77f3269a-8d54-4c49-98ec-c757dc471ffc.jpg" alt="singarajatech profile" class="crayons-avatar__image" width="718" height="990"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/singarajatech" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Singaraja33 
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Singaraja33 
                
              
              &lt;div id="story-author-preview-content-3413027" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/singarajatech" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3714811%2F77f3269a-8d54-4c49-98ec-c757dc471ffc.jpg" class="crayons-avatar__image" alt="" width="718" height="990"&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Singaraja33 &lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://future.forem.com/singarajatech/figure-ai-tesla-optimus-and-the-crazy-evolution-of-robotic-technologies-2hhn" class="crayons-story__tertiary fs-xs" rel="noopener noreferrer"&gt;&lt;time&gt;Mar 27&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://future.forem.com/singarajatech/figure-ai-tesla-optimus-and-the-crazy-evolution-of-robotic-technologies-2hhn" id="article-link-3413027" rel="noopener noreferrer"&gt;
          Figure AI, Tesla Optimus and the crazy evolution of robotic technologies.
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/figureai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;figureai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/teslaoptimus"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;teslaoptimus&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/robotics"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;robotics&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/softwaredevelopment"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;softwaredevelopment&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
            &lt;a href="https://future.forem.com/singarajatech/figure-ai-tesla-optimus-and-the-crazy-evolution-of-robotic-technologies-2hhn#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center" rel="noopener noreferrer"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            5 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Figure AI, Tesla Optimus and the crazy evolution of robotic technologies.</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Fri, 27 Mar 2026 07:05:18 +0000</pubDate>
      <link>https://forem.com/singarajatech/figure-ai-tesla-optimus-and-the-crazy-evolution-of-robotic-technologies-2hhn</link>
      <guid>https://forem.com/singarajatech/figure-ai-tesla-optimus-and-the-crazy-evolution-of-robotic-technologies-2hhn</guid>
      <description>&lt;p&gt;When Melania Trump casually walked the other day into the White House alongside a humanoid 1,7m tall robot from Figure AI, many of you probable had this feeling that something surreal, almost crazy is going on. The machine, in a quite natural manner, just walked beside her, addressed world leaders in multiple languages and spoke as if it belonged there.&lt;/p&gt;

&lt;p&gt;For a second, it felt like a scene pulled straight out of a 90s science fiction movie, but it wasn’t fiction. If that moment made you stop and think, it should, because it signals something much bigger than a viral headline. This simple moment may have marked the point where humanoid robotics stopped being experimental and started becoming part of our real world systems, education, labor and everyday life.&lt;/p&gt;

&lt;p&gt;What we are witnessing right now with Figure AI and Tesla’s Optimus is not just technological progress but a big shift in how humans and machines will coexist. Until very recently, robots were either industrial arms locked in factories or machine-looking prototypes struggling to walk without falling. They were specialized, limited and far from human-like; we could not relate to them at all, in any sense. But that is no longer the case, now that the convergence of advanced hardware and breakthroughs in artificial intelligence, driven in part by organizations like OpenAI, has changed the trajectory entirely.&lt;/p&gt;

&lt;p&gt;Figure AI’s humanoids, including the one that appeared at the White House, are designed to operate in human environments, and this is in fact a crucial distinction. Instead of building machines that require controlled conditions, they are building machines that are able to adapt to our world as it already exists. These robots can walk, manipulate objects, interpret instructions and, increasingly, communicate in natural language.&lt;/p&gt;

&lt;p&gt;Tesla’s Optimus robot follows a similar philosophy but brings a different advantage: scale. To put it in context, Tesla is not just building robots; it is building them with the intention of mass production, something Musk constantly points out, as anyone can see in his daily posts on X. Drawing on its expertise in manufacturing and in the AI systems used in autonomous driving, Tesla is aiming to make humanoid robots economically viable, and that is the real unlock it seeks: once these machines become affordable, their impact will not be limited to tech demos or high profile events. They will literally be everywhere.&lt;/p&gt;

&lt;p&gt;What is actually incredible is not just their potential but what they can already do. Today’s humanoid robots can handle logistics tasks like sorting, lifting and transporting objects. They can assist in structured environments such as warehouses and factories, where labor shortages are already a growing issue. They can also follow clear instructions, adjust to minor variations in tasks and operate for extended periods without ever getting tired.&lt;/p&gt;

&lt;p&gt;And yet, this is still only the early stage; the real transformation lies in what comes next.&lt;/p&gt;

&lt;p&gt;Imagine disaster zones with hundreds or thousands of injured people, where robots are first responders, navigating unstable structures and helping others with great precision and without risking human lives. Think of hospitals where robots assist nurses by handling physically demanding tasks, allowing medical professionals to focus on care rather than logistics. Consider construction sites where machines take on repetitive or dangerous work, reducing injuries and increasing efficiency.&lt;br&gt;
Even in education, the idea of robots becoming useful, once dismissed as futuristic, is already being discussed at the highest levels.&lt;/p&gt;

&lt;p&gt;The robot that walked with Melania was presented as part of a broader vision in which humanoid systems could support personalized learning, potentially acting as adaptive, always-available tutors. This is where the conversation shifts from “what robots can do” to “what role they will play in society”, because these machines are no longer just tools; they are turning into true collaborators.&lt;/p&gt;

&lt;p&gt;That new panorama opens the door to enormous benefits in a field where labor intensive, repetitive and dangerous jobs could be offloaded to robots, reducing human exposure to risk. Productivity could increase dramatically, lowering costs and potentially reshaping entire industries. For individuals, despite all the understandable mental and practical challenges, it could mean less time spent on physically exhausting work and more time focused on creative, strategic or meaningful activities.&lt;/p&gt;

&lt;p&gt;But in the midst of all this new industry there are understandable tensions that can’t be ignored, because the exact same capabilities that make humanoid robots interesting and powerful also make them disruptive.&lt;br&gt;
One of the biggest challenges many people point to is reliability: human environments are messy, unpredictable and constantly changing, so a robot might perform flawlessly in a controlled demo but struggle in a cluttered home or a chaotic construction site. And achieving true adaptability, the kind humans take for granted, is still an unsolved problem.&lt;/p&gt;

&lt;p&gt;Cost is another barrier, because while companies like Tesla aim to reduce production costs, building a humanoid robot with advanced AI is still very expensive. For widespread adoption, these systems must become not just functional but economically practical, in a shift like the one mobile phones went through: initially extremely expensive, then affordable over time.&lt;/p&gt;

&lt;p&gt;Safety adds another layer of complexity, because when robots operate alongside humans, the margin for error becomes extremely small. A misinterpreted instruction or a mechanical failure could lead to real world consequences, and designing systems that are both capable and consistently safe is one of the hardest engineering challenges ahead.&lt;/p&gt;

&lt;p&gt;Then there’s the question everyone is thinking about: jobs.&lt;br&gt;
Automation has always reshaped labor markets, but humanoid robots take it further, because these machines are not limited to one function. They can, in theory, perform a wide range of physical tasks, and that raises concerns about displacement across industries, from logistics to retail and construction.&lt;/p&gt;

&lt;p&gt;The transition won’t be simple, and this is something even Musk admits. It will require rethinking education, reskilling workers and creating new opportunities that fit a world where physical labor is increasingly automated.&lt;/p&gt;

&lt;p&gt;And beyond economics, there is something deeper, almost philosophical: when a robot can walk beside a human, speak multiple languages or participate in public events, it starts to blur the line between machine and presence. It forces us to reconsider how we define intelligence, interaction and even companionship.&lt;/p&gt;

&lt;p&gt;That moment at the White House was deeply symbolic. It showed that humanoid robots are no longer confined to labs or factories; they are entering public life, cultural spaces and global conversations, and now that this has happened, there is no easy way back.&lt;/p&gt;

&lt;p&gt;The evolution of robotics is no longer a distant possibility; it is unfolding in real time, faster than most people expected. Companies like Figure AI and Tesla are pushing the boundaries of what machines can do, but more importantly, they are redefining what machines actually are. We are moving toward a world where intelligence is not just something that exists on screens but something that walks, moves and interacts in physical space.&lt;/p&gt;

&lt;p&gt;The future of robotics has definitely arrived, and now it's time to see whether we’re ready for it or not.&lt;/p&gt;

&lt;p&gt;[Meet Figure AI, the company hosted by Melania Trump]&lt;a href="https://www.cnbc.com/2026/03/26/figure-ai-the-robotics-company-hosted-by-melania-trump.html" rel="noopener noreferrer"&gt;https://www.cnbc.com/2026/03/26/figure-ai-the-robotics-company-hosted-by-melania-trump.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Translock IT&lt;br&gt;
Author: Luis Carlos Yanguas Gómez de la Serna&lt;/p&gt;

</description>
      <category>figureai</category>
      <category>teslaoptimus</category>
      <category>robotics</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Understanding the evolution and stabilization of the AI industry</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Wed, 25 Mar 2026 06:08:53 +0000</pubDate>
      <link>https://forem.com/singarajatech/understanding-the-evolution-and-stabilization-of-the-ai-industry-apn</link>
      <guid>https://forem.com/singarajatech/understanding-the-evolution-and-stabilization-of-the-ai-industry-apn</guid>
      <description>&lt;p&gt;If there is something we would all agree on is that the artificial intelligence landscape is evolving at a rhythm that very few of us could have predicted just a few years ago. From language models able of holding complex conversations to systems generating extremely realistic images and assisting in scientific research, the world of AI has moved from experimental laboratories to becoming an integral part of nearly everybody's daily life.&lt;/p&gt;

&lt;p&gt;As we approach April 2026, the industry is still characterized by rapid growth, fierce competition and a mix of excitement and uncertainty. Yet signs are emerging that the sector may soon enter a phase of stabilization, offering clearer paths for businesses and everyday users alike.&lt;/p&gt;

&lt;p&gt;Understanding how and why this transition might occur is crucial for anyone interested in the future of technology. The first factor shaping this potential stabilization is technological consolidation. Over the past few years, an immense number of startups and research labs have launched their own AI models, leading to a fragmented landscape, and while that diversity fuels innovation, it also introduces challenges around interoperability, quality control and standardization. In the midst of this storm, users are understandably asking themselves a number of questions.&lt;/p&gt;

&lt;p&gt;Experts, for their part, suggest that the industry is likely to experience consolidation in the coming years, with larger companies either acquiring smaller players or collaborating within shared ecosystems. This process could well lead to standardized protocols for model deployment, greater compatibility across platforms and more reliable user experiences. By reducing fragmentation, AI will become more predictable and manageable, both for the businesses integrating these technologies and for the users relying on them for everyday tasks.&lt;/p&gt;

&lt;p&gt;Efficiency and scalability represent another critical element in the evolution of AI. As of today, training state-of-the-art models demands large computational resources, expensive hardware and significant energy consumption, and as these demands grow, companies will be incentivized to develop more efficient architectures and innovative training methods. Advances in model compression, optimized inference techniques and lighter architectures will allow AI systems to perform complex tasks while using less power and memory. The result is an industry where AI becomes accessible not only in cloud-based environments but also on personal devices like smartphones and laptops. Such accessibility not only helps broaden the user base but also stabilizes market expectations by reducing dependency on enormous data centers and costly infrastructure.&lt;/p&gt;

&lt;p&gt;Regulatory frameworks will also play a central role in moving the AI industry toward stability, as governments around the world become increasingly aware of both the opportunities and the risks that advanced AI represents. Issues like privacy, potential misuse and intellectual property have driven discussions on how to govern these technologies responsibly, and these talks will only intensify over time, because a consistent and transparent regulatory environment is the only way to reduce uncertainty for companies, investors and consumers. Standards for transparency, safety and ethical deployment will probably emerge as well, functioning much like ISO standards in traditional industries. With clear rules in place, businesses will be able to plan their long term strategies without fear of sudden legal or social backlash, while users will gain trust in the safety and fairness of AI systems.&lt;/p&gt;

&lt;p&gt;Equally important is the professionalization of the sector. In the past, a substantial portion of AI development relied on enthusiasts and self-taught talent, the kind of wild and unregulated field typical of many emerging industries. While this approach certainly accelerated innovation, it also introduced variability in quality and increased the risk of mistakes in deployment. As the industry matures, demand for highly trained professionals will only grow.&lt;/p&gt;

&lt;p&gt;Educational programs and certifications focused on AI safety and operational excellence are becoming standard, leading to a workforce that is not only better skilled but also aligned with industry best practices. This professionalization ensures that innovation continues without compromising reliability or accountability, further contributing to a more stable ecosystem.&lt;/p&gt;

&lt;p&gt;Economic factors cannot be overlooked either, because AI remains an extremely capital intensive industry, with high upfront costs for research, model training and infrastructure. In its early stages, market volatility fueled by speculative investment and hype cycles made the sector unpredictable at times; however, as consolidation and regulation take effect, business models are likely to become more predictable. Subscription-based services, enterprise licenses and targeted B2B solutions will generate more consistent revenue streams for AI players, and the stabilization of these financial patterns will not only benefit investors but also allow smaller companies to participate in the ecosystem without risking catastrophic losses, fostering a balanced, sustainable industry landscape.&lt;/p&gt;

&lt;p&gt;From the user standpoint, public perception and adoption also play a powerful role in stabilization, in a world where AI technologies are increasingly present in everyday life, from virtual assistants to tools supporting content creation, research and even relevant decision making. For the industry to truly stabilize, users must strongly trust the systems they rely on, and for that to happen, transparency about how AI works, clear explanations of data usage and robust oversight mechanisms will be needed to reinforce public confidence.&lt;/p&gt;

&lt;p&gt;As adoption grows alongside trust, the AI sector can also achieve a balance where innovation continues to grow but at the same time is guided by societal expectations and certain well understood standards. This alignment will ensure that technological progress does not outpace the social and cultural readiness to integrate AI responsibly.&lt;/p&gt;

&lt;p&gt;Another emerging trend that contributes to a more stable future is the coexistence of open source and proprietary models, a balance that can be quite challenging at times. Open source AI initiatives push for experimentation, collaboration and customization, fostering innovation across the globe, while well funded proprietary models provide reliability, commercial support and optimized performance. The coexistence of these approaches is increasingly necessary, and it allows for a diverse but structured ecosystem where individuals and organizations can choose solutions that match their needs while benefiting from established standards and safe deployment practices. The best of both worlds. This balance is fundamental: it reduces market volatility while encouraging sustainable growth.&lt;/p&gt;

&lt;p&gt;In short, the stabilization of the AI model industry is likely to result from a combination of technological consolidation, improved efficiency, regulatory clarity, professionalization, economic predictability, public trust and the coexistence of open source and proprietary solutions. Each of these factors is key, addressing current uncertainties and risks and providing a framework for a mature, reliable and sustainable AI ecosystem, the future both companies and users should aim for. And while innovation will remain a central driver, it should occur within boundaries that protect users and guide investment.&lt;/p&gt;

&lt;p&gt;In view of all these things, and looking forward, the next few years are expected to solidify these trends. Most probably, companies will continue to push the limits of what AI can achieve, but they will do so in a more structured environment where expectations, responsibilities and outcomes are clearer.&lt;/p&gt;

&lt;p&gt;Users, meanwhile, will interact with AI more confidently, investors will commit capital with higher predictability and governments will oversee the deployment of these technologies not in the disorganized or improvised way we sometimes see, but more responsibly. This convergence of technological, social, economic and regulatory forces will mark the transition from the current period of rapid, sometimes chaotic growth to a more mature, stable phase where AI becomes an integral, trusted and sustainable component of global society.&lt;/p&gt;

&lt;p&gt;In essence, we don't expect that the future of AI models will be about slowing innovation but about channeling it in ways that create reliability, predictability and trust. By addressing fragmentation, inefficiency, legal uncertainty and public skepticism, the AI industry is positioned to move from the excitement of discovery to the steadiness of maturity, offering benefits that are both transformative and enduring, which is what everybody should be expecting.&lt;/p&gt;

&lt;p&gt;The stabilization of this disruptive sector, maybe the largest tech change since the internet era, promises a world where AI is not just powerful but also properly integrated into the fabric of daily life, enhancing productivity, creativity and proper decision making across all areas of human endeavor. And the day we achieve this, the world will definitely be an even better place.&lt;/p&gt;

&lt;p&gt;[The future of AI: How Artificial Intelligence will change the world]&lt;a href="https://builtin.com/artificial-intelligence/artificial-intelligence-future" rel="noopener noreferrer"&gt;https://builtin.com/artificial-intelligence/artificial-intelligence-future&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[The future of AI: Predictions for the next decade]&lt;a href="https://www.gottabemobile.com/future-of-artificial-intelligence/" rel="noopener noreferrer"&gt;https://www.gottabemobile.com/future-of-artificial-intelligence/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[The truth about AI regulation and the real winners behind it]&lt;a href="https://luisyanguas22.medium.com/the-truth-about-ai-regulation-power-and-the-real-winners-behind-all-of-it-da9c0367c3d5" rel="noopener noreferrer"&gt;https://luisyanguas22.medium.com/the-truth-about-ai-regulation-power-and-the-real-winners-behind-all-of-it-da9c0367c3d5&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Author: Luis Carlos Yanguas Gomez de la Serna&lt;br&gt;
&lt;a href="http://www.translockit.com" rel="noopener noreferrer"&gt;www.translockit.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>futureofai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Are we outsourcing our thinking to AI without noticing?</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Wed, 18 Mar 2026 08:46:20 +0000</pubDate>
      <link>https://forem.com/singarajatech/are-we-outsourcing-our-thinking-to-ai-without-noticing-2idc</link>
      <guid>https://forem.com/singarajatech/are-we-outsourcing-our-thinking-to-ai-without-noticing-2idc</guid>
      <description>&lt;p&gt;Involved as we are in this crazy hurricane of AI new developments, it's starting to be quite common to hear of situations where people are able to ship something in half the time thanks to AI, but then struggle to explain how it actually works. &lt;/p&gt;

&lt;p&gt;This is something genuinely new, slowly emerging around us as AI becomes part of everyday development work. We are entering a chapter where teams can build faster than they can fully understand what they are building. That is certainly powerful, but it also carries a real risk that almost no one is talking about: we may be outsourcing not just effort, but understanding itself.&lt;/p&gt;

&lt;p&gt;And this is not an argument against AI; in fact, it is quite the opposite. It is an argument for using AI with more awareness, because the real opportunity right now is not just speed, but leverage without losing depth. That is where the real advantage lies today.&lt;/p&gt;

&lt;p&gt;Traditional software development has always followed a well-known rhythm that was the root of true learning. You learned the fundamentals, fought with complexity, made mistakes and slowly built intuition. Understanding came from real experience and from being forced to figure things out the hard way.&lt;/p&gt;

&lt;p&gt;AI changes that learning curve dramatically. Today a developer can generate production-ready code in seconds, debug issues with guided suggestions or build systems they have never built before, a scenario where what used to take days or weeks can now take a couple of hours.&lt;/p&gt;

&lt;p&gt;This is extraordinary progress, but it introduces a challenge that is easy to overlook: we are moving from doing the work to delegating the work. And delegation, by its nature, creates distance.&lt;/p&gt;

&lt;p&gt;The more we rely on AI to fill in the gaps and do it all, the less friction we experience, and while friction is often seen as something to eliminate, it is also where real understanding is formed. Without it, it becomes possible to build something that works without fully knowing why it works, and this is dangerous.&lt;/p&gt;

&lt;p&gt;In the short term, this feels like a superpower, and nobody can deny its advantages: projects move faster, teams deliver far more, and ideas turn into products at a speed that would have seemed unrealistic not long ago. But the risks cannot be ignored. Alongside these new capabilities, teams are starting to experience small bugs that take far longer than expected to diagnose, simple changes that create unexpected side effects, and corners of the code whose behavior is almost unpredictable.&lt;/p&gt;

&lt;p&gt;These of course are not new problems for any developer, but they become more common when systems are assembled faster than they are understood. What is happening is simple but important. Execution is accelerating faster than understanding, and that gap matters more than it might seem at first.&lt;/p&gt;

&lt;p&gt;In software, understanding is not optional, because it is what allows you to debug with confidence, to scale systems reliably and to make sound architectural decisions. Without it you are not truly in control: you risk operating something that works, but only within boundaries you do not fully see or understand.&lt;/p&gt;

&lt;p&gt;Every major abstraction in software has created a similar dynamic; we have seen this before. Frameworks, for example, made development easier but hid complexity. Cloud platforms removed infrastructure burdens but introduced new layers that few people fully understand, and no-code tools allowed more people to build, sometimes at the cost of deeper understanding.&lt;/p&gt;

&lt;p&gt;And while abstraction itself is nothing new, AI is the most powerful one we have ever introduced; the big difference is the speed at which it is changing everything. What used to take years of gradual adaptation is now happening in months, and that compresses the time we have to rebuild understanding.&lt;/p&gt;

&lt;p&gt;It would be easy to frame this as a problem, but that would miss the bigger opportunity. AI is not making people less capable; it is giving capable people unprecedented leverage. The developers and teams who benefit the most are not the ones who rely on AI blindly, but the ones who combine it with strong mental models.&lt;/p&gt;

&lt;p&gt;When a developer or a team understands the fundamentals, AI becomes a tool they can truly direct, refine and challenge. They can spot when it is wrong, adapt its output to the context at hand, and move faster without losing control.&lt;br&gt;
In that sense, AI is an incredible amplifier of capability.&lt;/p&gt;

&lt;p&gt;The real risk in this landscape is not the technology itself but how passively it can be used, and there is a huge difference between copying and pasting an answer and engaging with it. One approach creates dependency over time, while the other builds capability.&lt;/p&gt;

&lt;p&gt;Active use of the AI tools at hand today requires a bit of effort, but it strengthens understanding with every interaction, and over time that difference compounds into a significant competitive advantage.&lt;/p&gt;

&lt;p&gt;The most effective teams are not stepping away from AI, and they should not. They should simply become more intentional in how they use it: treating it as a collaborator rather than an authority, always questioning its outputs, refining them and using them as a starting point rather than a final answer.&lt;br&gt;
They should also continue to invest in fundamentals even when it feels unnecessary in the short term, and value the ability to explain a system as much as the ability to build it, as they always have. In the end, they should build small habits that keep understanding alive, whether that means pausing to ask why something works or taking the time to rewrite a piece of logic to fully grasp it.&lt;/p&gt;

&lt;p&gt;What we think should emerge from this new wave is a new kind of developer and a new kind of team, defined not by how much they use AI, but by how well they integrate it into their thinking. The future of our industry will not be shaped by those who use AI the most, but by those who use it with intention and understanding.&lt;/p&gt;

&lt;p&gt;AI is one of the most powerful tools ever introduced into software development. It can accelerate learning, reduce problems and unlock levels of creativity and productivity that were previously out of reach, but as we said before, its real value depends on how we choose to use it.&lt;/p&gt;

&lt;p&gt;If we allow AI to fully replace our thinking, we lose something essential. If we use it to enhance our thinking, we gain an advantage that compounds over time. As simple as that.&lt;br&gt;
Because in the end, the real advantage is not just building faster. It is understanding what you have built and why it works.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>aioutsourcing</category>
    </item>
    <item>
      <title>Creating an amazing song with AI is already possible and here is why! 👇🏻

https://dev.to/singaraja33/how-ai-is-rewriting-the-rules-of-music-creation-and-production-51p7</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Sun, 15 Mar 2026 02:53:00 +0000</pubDate>
      <link>https://forem.com/singarajatech/creating-an-amazing-son-with-ai-is-already-possible-and-here-is-why-1nmn</link>
      <guid>https://forem.com/singarajatech/creating-an-amazing-son-with-ai-is-already-possible-and-here-is-why-1nmn</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/singaraja33" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3589702%2Ffb24fe2f-d047-43a0-900f-7232fd828610.png" alt="singaraja33"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/singaraja33/how-ai-is-rewriting-the-rules-of-music-creation-and-production-51p7" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How AI is rewriting the rules of music creation and production&lt;/h2&gt;
      &lt;h3&gt;Singaraja33 ・ Mar 15&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#musicai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#softwaredevelopment&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>If you want to build your app yourself, fast and practically for free, you have to read this!! 👇🏻

https://open.substack.com/pub/luisyanguas/p/lovable-la-startup-de-ia-con-la-que?utm_source=share&amp;utm_medium=android&amp;r=7es6ar</title>
      <dc:creator>Singaraja33 </dc:creator>
      <pubDate>Fri, 13 Mar 2026 04:24:30 +0000</pubDate>
      <link>https://forem.com/singarajatech/si-quieres-construir-tu-app-tu-mismo-rapido-y-practicamente-gratis-tienes-que-leer-esto-3h51</link>
      <guid>https://forem.com/singarajatech/si-quieres-construir-tu-app-tu-mismo-rapido-y-practicamente-gratis-tienes-que-leer-esto-3h51</guid>
      <description>&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://luisyanguas.substack.com/p/lovable-la-startup-de-ia-con-la-que?utm_source=share&amp;amp;amp%3Butm_medium=android&amp;amp;amp%3Br=7es6ar&amp;amp;triedRedirect=true" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21P1J9%21%2Cw_1200%2Ch_675%2Cc_fill%2Cf_jpg%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Cg_auto%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F8cf24039-1f14-4f37-9eee-1c8d5ab191a5_1080x830.jpeg" height="499" class="m-0" width="800"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://luisyanguas.substack.com/p/lovable-la-startup-de-ia-con-la-que?utm_source=share&amp;amp;amp%3Butm_medium=android&amp;amp;amp%3Br=7es6ar&amp;amp;triedRedirect=true" rel="noopener noreferrer" class="c-link"&gt;
            Lovable: the AI startup that lets you create apps by talking to a machine
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            The story of how a Swedish engineer is making it possible for any of us to create software with AI
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Ficons%2Fsubstack%2Ffavicon.ico" width="32" height="32"&gt;
          luisyanguas.substack.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
  </channel>
</rss>
