<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: yasongwu</title>
    <description>The latest articles on Forem by yasongwu (@volume888).</description>
    <link>https://forem.com/volume888</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3708562%2F24ebc804-44cd-46c3-bebc-42c61ce0272e.png</url>
      <title>Forem: yasongwu</title>
      <link>https://forem.com/volume888</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/volume888"/>
    <language>en</language>
    <item>
      <title>A New Paradigm for AI Agents: Vercel's Skills.sh Leads the Way, with Opportunity and Risk in Equal Measure</title>
      <dc:creator>yasongwu</dc:creator>
      <pubDate>Tue, 27 Jan 2026 01:03:26 +0000</pubDate>
      <link>https://forem.com/volume888/aidai-li-xin-fan-shi-vercel-skillsshyin-ling-chao-liu-ji-yu-yu-yin-you-bing-cun-1313</link>
      <guid>https://forem.com/volume888/aidai-li-xin-fan-shi-vercel-skillsshyin-ling-chao-liu-ji-yu-yu-yin-you-bing-cun-1313</guid>
      <description>&lt;h1&gt;
  
  
  A New Paradigm for AI Agents: Vercel's Skills.sh Leads the Way, with Opportunity and Risk in Equal Measure
&lt;/h1&gt;

&lt;p&gt;Recently, Vercel's &lt;code&gt;skills.sh&lt;/code&gt; has made waves in the developer community. This "skills directory," which claims to bundle best practices for more than 90 tools, including React, Next.js, and Stripe, behind a single command, passed 20,000 installs shortly after launch and has drawn broad discussion and attention. The arrival of &lt;code&gt;skills.sh&lt;/code&gt; not only signals an era of customizable AI coding assistants, but also exposes the opportunities and challenges we must confront on the road to efficient, disciplined AI-assisted development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Skills.sh? Redefining the Rules of Conduct for AI Agents
&lt;/h2&gt;

&lt;p&gt;According to Vercel's official introduction and the community's reading of it, the core idea behind &lt;code&gt;skills.sh&lt;/code&gt; is to "teach" AI agents to follow specific coding standards and team conventions through plain Markdown files. Each "skill" is essentially a structured instruction set: it tells the AI which steps to follow, which APIs to use, and how to format code when handling a given task (for example, processing payments with Stripe).&lt;/p&gt;

&lt;p&gt;Another innovation in &lt;code&gt;skills.sh&lt;/code&gt; is its "progressive loading" mechanism. The instructions in each skill file are split by header, and each section consumes only about 50 tokens. Developers can therefore install hundreds of skills at once without worrying about exhausting the precious context window. Compared with an MCP (Model Context Protocol) server that must be deployed and maintained, this lightweight design is clearly more attractive.&lt;/p&gt;
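&lt;p&gt;To make the "progressive loading" idea concrete, here is a rough sketch of splitting a skill file by header so that only the section relevant to the current task enters the context window. This illustrates the idea only; it is not Vercel's implementation, and the &lt;code&gt;SKILL_MD&lt;/code&gt; content and function names are invented:&lt;/p&gt;

```python
# Rough sketch of header-based "progressive loading" for a skill file.
# Illustration of the idea only, not Vercel's actual implementation.

SKILL_MD = """\
## Creating a charge
Use the payments API; never hard-code currency amounts.

## Handling webhooks
Verify signatures before trusting any webhook payload.

## Refunds
Always refund through the original payment intent.
"""

def split_by_header(markdown: str) -> dict:
    """Split a Markdown document into {header: body} sections."""
    sections, header, body = {}, None, []
    for line in markdown.splitlines():
        if line.startswith("## "):
            if header is not None:
                sections[header] = "\n".join(body).strip()
            header, body = line[3:], []
        elif header is not None:
            body.append(line)
    if header is not None:
        sections[header] = "\n".join(body).strip()
    return sections

def load_relevant(sections: dict, task: str) -> str:
    """Load only the section whose header overlaps the task description."""
    task_words = set(task.lower().split())
    best = max(sections, key=lambda h: len(task_words.intersection(h.lower().split())))
    return sections[best]

sections = split_by_header(SKILL_MD)
print(load_relevant(sections, "handle an incoming stripe webhooks event"))
```

&lt;p&gt;Only the matched section (roughly one header's worth of tokens) would be injected into the agent's context, which is what keeps hundreds of installed skills cheap.&lt;/p&gt;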

&lt;h2&gt;
  
  
  Community Debate: Revolutionary Innovation or Old Wine in a New Bottle?
&lt;/h2&gt;

&lt;p&gt;As with any disruptive technology, &lt;code&gt;skills.sh&lt;/code&gt; has drawn praise and hard questions in equal measure. Prominent technical blogger Simon Willison even predicted that the technology could make "MCP look mundane." The broader discussion, however, has centered on a few key questions:&lt;/p&gt;

&lt;h3&gt;
  
  
  Security: A New Attack Vector?
&lt;/h3&gt;

&lt;p&gt;The most prominent concern comes from the security community. As one developer put it bluntly: "Imagine what a supply-chain attack against a 'skill description' would look like." If a widely used skill were maliciously tampered with, an AI agent could execute its injected instructions without anyone noticing, planting backdoors or vulnerabilities in a project. This new class of attack vector is a wake-up call for code security in the AI era.&lt;/p&gt;

&lt;h3&gt;
  
  
  Effectiveness: Will the AI Actually Listen?
&lt;/h3&gt;

&lt;p&gt;Another core question is whether AI agents will really follow the rules defined in these Markdown files. Developers have shared frustrating experiences in which the AI (Claude models in particular) sometimes ignores instructions even when the project provides guidance files such as &lt;code&gt;CLAUDE.md&lt;/code&gt; or &lt;code&gt;AGENTS.md&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outlook: The Road to Controllable, Trustworthy AI Agents
&lt;/h2&gt;

&lt;p&gt;Despite the open questions, the launch of &lt;code&gt;skills.sh&lt;/code&gt; is an important experiment in AI-assisted development. It marks a shift in emphasis: from chasing sheer model capability toward making model behavior controllable and trustworthy.&lt;/p&gt;

&lt;p&gt;In short, Vercel's &lt;code&gt;skills.sh&lt;/code&gt; points to a new direction for AI agents. While embracing the convenience and efficiency it brings, the developer community must keep a cautious, critical eye and work together to push the technology toward something safer, more reliable, and smarter.&lt;/p&gt;

</description>
      <category>auth0challenge</category>
      <category>vercel</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>From Dark Art to Disciplined Engineering: The Rise of the Prompt as a Product</title>
      <dc:creator>yasongwu</dc:creator>
      <pubDate>Tue, 13 Jan 2026 09:35:50 +0000</pubDate>
      <link>https://forem.com/volume888/from-dark-art-to-disciplined-engineering-the-rise-of-the-prompt-as-a-product-4kgl</link>
      <guid>https://forem.com/volume888/from-dark-art-to-disciplined-engineering-the-rise-of-the-prompt-as-a-product-4kgl</guid>
      <description>&lt;h1&gt;
  
  
  From Dark Art to Disciplined Engineering: The Rise of the Prompt as a Product
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;By:&lt;/strong&gt; The Team at &lt;a href="https://trendingprompt.io" rel="noopener noreferrer"&gt;trendingprompt.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just over a year ago, the world was captivated by the seemingly magical ability of Large Language Models (LLMs) to generate everything from sonnets to software. The key to unlocking this magic? A handful of carefully chosen words, a snippet of text we now universally call a "prompt." In those early days, crafting the perfect prompt felt like a dark art, a game of linguistic alchemy where a select few "prompt whisperers" held the secrets. Fast forward to today, and the landscape has matured at a breathtaking pace. The magic hasn't faded, but it's being codified, systematized, and engineered.&lt;/p&gt;

&lt;p&gt;Prompt engineering is no longer just about coaxing a clever response from a chatbot. It has evolved into a critical engineering discipline that forms the bedrock of a new generation of AI-powered products and services. For startups and enterprises alike, mastering this discipline is not a luxury; it's a competitive necessity. The quality of a prompt directly impacts product reliability, user experience, and, ultimately, the bottom line. We've moved beyond simply talking &lt;em&gt;to&lt;/em&gt; AI; we are now building &lt;em&gt;with&lt;/em&gt; it, and the prompt is our primary construction material.&lt;/p&gt;

&lt;p&gt;This isn't a story about finding the perfect "magic words." It's about the shift from ad-hoc experimentation to a structured, scalable, and strategic approach to communicating with AI. It's about the rise of the prompt as a product in itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The End of an Era: Why "Smarter" Models Demand Better Prompts
&lt;/h2&gt;

&lt;p&gt;A common misconception is that as LLMs like GPT-4 and its successors become more powerful, the need for sophisticated prompt engineering will diminish. The opposite is true. While newer models are more forgiving of simple, conversational instructions, this very capability unlocks the potential for them to tackle vastly more complex, multi-step tasks. And complexity demands precision.&lt;/p&gt;

&lt;p&gt;Think of it like the evolution of programming languages. We moved from punching cards to assembly language to high-level languages like Python. At each stage, the abstraction level increased, making it easier to perform simple tasks. However, this also enabled us to build far more complex systems, which required new disciplines like software architecture, design patterns, and DevOps to manage. Prompt engineering is the software architecture of the AI era.&lt;/p&gt;

&lt;p&gt;Similarly, a simple prompt might suffice to summarize an email. But what about building an AI agent that can analyze a 100-page financial report, cross-reference it with real-time market data from an API, identify key risks based on a predefined risk framework, draft a C-suite-level briefing memo in a specific format, and generate accompanying data visualizations? This doesn't require a single "magic prompt." It requires a symphony of structured prompts, chained together in a logical workflow, each engineered for maximum precision and reliability. The more capable the model, the higher the ceiling for what we can build, and the more critical disciplined engineering becomes.&lt;/p&gt;
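&lt;p&gt;The "symphony of structured prompts" described above can be sketched as a simple chain, where each stage's output feeds the next stage's input. The &lt;code&gt;call_llm&lt;/code&gt; function below is a hypothetical stand-in for a real model API call, and the three stages are a simplification of the financial-report workflow:&lt;/p&gt;

```python
# Sketch of a chained-prompt workflow; call_llm is a placeholder
# for a real LLM API call, not any specific provider's SDK.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; returns a canned echo for illustration."""
    return f"[model output for: {prompt[:40]}...]"

def run_pipeline(report_text: str) -> str:
    # Stage 1: extract key risks from the document.
    risks = call_llm(f"List the key risks in this report:\n{report_text}")
    # Stage 2: draft a briefing memo from the extracted risks.
    memo = call_llm(f"Draft a one-page C-suite memo covering these risks:\n{risks}")
    # Stage 3: enforce the required output format.
    return call_llm(f"Reformat this memo as sections with bullet points:\n{memo}")

print(run_pipeline("Q3 revenue fell 12 percent while churn rose..."))
```

&lt;p&gt;Each stage is individually testable and replaceable, which is exactly what makes the chained approach more reliable than one monolithic "magic prompt."&lt;/p&gt;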

&lt;h2&gt;
  
  
  The Anatomy of a High-Performance Prompt
&lt;/h2&gt;

&lt;p&gt;Moving from a simple query to an engineered prompt involves treating it like an API call to the model. It needs to be structured, unambiguous, and rich with context. A modern, high-performance prompt consists of several key components:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Example for a Customer Service Bot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Role &amp;amp; Goal&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Explicitly assign a persona and a clear objective to the model. This primes the model to access the most relevant parts of its training data.&lt;/td&gt;
&lt;td&gt;&lt;code&gt;You are an expert customer support agent for a SaaS company named 'InnovateCloud'. Your goal is to help users troubleshoot login issues.&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Provide all necessary background information. This can include user history, previous conversation turns, or relevant documentation snippets.&lt;/td&gt;
&lt;td&gt;&lt;code&gt;The user has already tried resetting their password twice in the last 10 minutes. Their account type is 'Enterprise'.&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Step-by-Step Instructions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Break down the task into a clear, sequential list of actions. This is the core of the famous "Chain-of-Thought" (CoT) technique.&lt;/td&gt;
&lt;td&gt;&lt;code&gt;1. Greet the user warmly. 2. Acknowledge their previous attempts. 3. Ask them to try clearing their browser cache. 4. If that fails, ask for the specific error message they see.&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Output Formatting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Specify the exact format for the response. This is crucial for programmatic use, such as feeding the output into another system.&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Provide your response as a JSON object with two keys: 'reply_to_user' (a string) and 'next_action' (one of ['WAIT_FOR_REPLY', 'ESCALATE_TICKET']).&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Constraints &amp;amp; Guardrails&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Define the boundaries. Tell the model what it should &lt;em&gt;not&lt;/em&gt; do. This is essential for safety, security, and brand alignment.&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Do not ask for the user's password or any other personally identifiable information (PII). Never express frustration. Keep the tone professional and helpful.&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Examples (Few-Shot)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Provide one or more examples of a good input-output pair. This is one of the most powerful ways to guide the model's behavior.&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Example: User says 'I'm locked out.' You reply: {"reply_to_user": "I'm sorry to hear you're having trouble...", "next_action": "WAIT_FOR_REPLY"}&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When these components are combined, a simple request transforms into a robust, predictable, and engineered instruction set. The development of such a prompt is an iterative process, much like software development. It involves designing the prompt, testing it with a variety of inputs, analyzing the outputs, and refining the prompt based on the results. This cycle of design, test, and refine is the core workflow of a prompt engineer.&lt;/p&gt;
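&lt;p&gt;Assembled programmatically, the components in the table above might look like the following. The wording of the persona, steps, and guardrails is illustrative, not a canonical template:&lt;/p&gt;

```python
# Assembling the prompt components from the table into one instruction set.
# The specific wording is illustrative, not a canonical template.

def build_support_prompt(context: str, user_message: str) -> str:
    role = ("You are an expert customer support agent for a SaaS company "
            "named 'InnovateCloud'. Your goal is to help users troubleshoot "
            "login issues.")
    steps = ("1. Greet the user warmly. 2. Acknowledge their previous attempts. "
             "3. Ask them to try clearing their browser cache. "
             "4. If that fails, ask for the specific error message they see.")
    output_format = ('Respond as a JSON object with keys "reply_to_user" (string) '
                     'and "next_action" (one of "WAIT_FOR_REPLY", "ESCALATE_TICKET").')
    guardrails = ("Do not ask for the password or any PII. "
                  "Keep the tone professional and helpful.")
    return "\n\n".join([role, "Context: " + context, steps,
                        output_format, guardrails,
                        "User message: " + user_message])

prompt = build_support_prompt(
    context="User has reset their password twice in 10 minutes; Enterprise plan.",
    user_message="I'm locked out.",
)
print(prompt)
```

&lt;p&gt;Because each component is a named variable, individual pieces can be versioned, tested, and swapped independently during the design-test-refine cycle.&lt;/p&gt;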

&lt;h2&gt;
  
  
  Beyond the Basics: Enterprise-Grade Prompting Techniques
&lt;/h2&gt;

&lt;p&gt;For mission-critical applications, basic prompting is just the starting point. The frontier of prompt engineering is focused on creating systems that are dynamic, context-aware, and self-optimizing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Retrieval-Augmented Generation (RAG): The Cure for Hallucination
&lt;/h3&gt;

&lt;p&gt;One of the biggest challenges for enterprises using LLMs is their tendency to "hallucinate" or invent facts. RAG is the most effective solution to this problem. Instead of relying solely on the model's internal (and static) knowledge, a RAG system first retrieves relevant information from an external, trusted knowledge base (e.g., a company's internal wiki, product documentation, or a database of financial records). This retrieved information is then injected into the prompt as context, effectively grounding the model in factual, up-to-date information.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For a business, this is a game-changer. It means you can build a customer support bot that knows about your latest product release, or an internal knowledge tool that can accurately answer questions based on proprietary company documents, dramatically reducing the risk of providing incorrect information. The architecture of a typical RAG system involves a vector database (like Pinecone or Weaviate) to store and efficiently query the knowledge base, a retrieval model to find the most relevant documents, and the LLM to synthesize the final answer based on the retrieved context.&lt;/p&gt;
&lt;/blockquote&gt;
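&lt;p&gt;A minimal sketch of the retrieve-then-prompt loop, using naive keyword overlap in place of a real vector database and embeddings (the document store and helper names are invented for illustration):&lt;/p&gt;

```python
# Minimal RAG sketch: naive keyword overlap stands in for embedding search
# over a vector database; the final prompt would be sent to an LLM.

DOCS = [
    "InnovateCloud 3.2 adds SSO via SAML and OIDC for Enterprise plans.",
    "Password resets are rate-limited to three attempts per hour.",
    "The mobile app supports offline mode as of release 3.1.",
]

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank docs by word overlap with the query; a real system would use embeddings."""
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q.intersection(d.lower().split())),
                  reverse=True)[:k]

def grounded_prompt(query: str) -> str:
    """Inject retrieved context so the model answers from facts, not memory."""
    context = "\n".join(retrieve(query, DOCS))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

print(grounded_prompt("How many password reset attempts are allowed?"))
```

&lt;p&gt;The key design point is the "ONLY the context below" constraint: the model is steered away from its internal knowledge and toward the retrieved, trusted documents.&lt;/p&gt;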

&lt;h3&gt;
  
  
  The Rise of Prompt Optimization: AI Engineering AI
&lt;/h3&gt;

&lt;p&gt;The next frontier is automated prompt optimization (APO). This involves using one AI model to refine and optimize prompts for another. Frameworks like DSPy (Declarative Self-improving Language Programs) are pioneering this space. Instead of manually tweaking prompts, developers declare the desired input-output behavior and the steps in the pipeline (e.g., &lt;code&gt;Thought -&amp;gt; Retrieve -&amp;gt; Synthesize&lt;/code&gt;). The framework then compiles this into an optimized prompt, testing different phrasing and structures to find the most effective version for the target LLM. This is the beginning of "PromptOps"—a world where we A/B test, version, and continuously deploy prompts just like we do with software code.&lt;/p&gt;
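&lt;p&gt;The "PromptOps" idea of A/B testing prompts can be illustrated with a toy evaluation loop. Everything here is simulated: the model call, the scoring, and the two prompt variants are placeholders meant to show the shape of the workflow, not DSPy's actual API:&lt;/p&gt;

```python
# Toy "PromptOps" A/B test: score two prompt variants against labeled cases
# and keep the winner. The model is simulated; this is not DSPy's API.

import random

random.seed(0)

CASES = [
    ("2 plus 2", "4"),
    ("capital of France", "Paris"),
    ("3 squared", "9"),
]

VARIANTS = {
    "terse": "Answer the question.",
    "explicit": "Think step by step, then answer with a single word or number.",
}

def simulated_model(prompt: str, question: str) -> str:
    """Stand-in for an LLM: we pretend a more explicit prompt is more reliable."""
    reliability = 0.9 if "step by step" in prompt else 0.4
    truth = dict(CASES)[question]
    return truth if reliability > random.random() else "unsure"

def score(prompt: str, trials: int = 100) -> float:
    """Fraction of labeled cases the prompt variant answers correctly."""
    hits = sum(1 for _ in range(trials)
               for q, a in CASES
               if simulated_model(prompt, q) == a)
    return hits / (trials * len(CASES))

best = max(VARIANTS, key=lambda name: score(VARIANTS[name]))
print(f"winning variant: {best}")
```

&lt;p&gt;Swap the simulated model for a real API call and the labeled cases for a production evaluation set, and this loop becomes the core of a continuous prompt-deployment pipeline.&lt;/p&gt;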

&lt;h3&gt;
  
  
  Advanced Prompting Techniques
&lt;/h3&gt;

&lt;p&gt;Beyond RAG and APO, a new set of advanced techniques is emerging from research labs and a growing community of practitioners:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Self-Ask:&lt;/strong&gt; This technique involves instructing the model to break down a complex question into a series of simpler follow-up questions that it then answers itself before synthesizing a final answer. This is particularly useful for complex reasoning tasks.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Step-back Prompting:&lt;/strong&gt; When faced with a very specific or technical question, this technique encourages the model to first "step back" and ask a more general, high-level question to establish a broader context before diving into the specifics.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Meta Prompting:&lt;/strong&gt; This involves creating a prompt that generates another prompt. For example, you could create a "master prompt" that takes a simple task description (e.g., "write a blog post about AI in healthcare") and generates a detailed, high-performance prompt with all the necessary components (role, context, instructions, etc.).&lt;/li&gt;
&lt;/ul&gt;
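&lt;p&gt;Meta prompting, the last item above, can be sketched as a "master prompt" builder that expands a one-line task into a fully structured prompt. The template wording below is our own, purely illustrative:&lt;/p&gt;

```python
# Sketch of meta prompting: a master template that expands a one-line task
# description into a fully structured prompt. Wording is illustrative.

MASTER_TEMPLATE = """\
You are a senior content strategist.
Task: {task}

Produce a detailed prompt for another model that includes:
1. A role and goal matching the task.
2. Relevant context the writer should assume.
3. Step-by-step instructions.
4. Output format requirements.
5. Constraints (tone, length, things to avoid).
"""

def make_meta_prompt(task: str) -> str:
    return MASTER_TEMPLATE.format(task=task)

print(make_meta_prompt("write a blog post about AI in healthcare"))
```

&lt;p&gt;The output of this prompt is itself a prompt, which a second model call then executes: one level of indirection that turns casual task descriptions into engineered instructions.&lt;/p&gt;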

&lt;h2&gt;
  
  
  A Picture is Worth a Thousand Words: The Nuances of Visual Prompting
&lt;/h2&gt;

&lt;p&gt;The principles of prompt engineering extend to generative art and image creation, but the vocabulary and techniques are different. While text generation prioritizes logic and structure, visual prompting is a blend of technical specification and artistic direction. This is the world of platforms like Midjourney and Stable Diffusion, and the core business of our team at &lt;code&gt;trendingprompt.io&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A high-quality visual prompt is less about a chain of thought and more about a layered description of a scene. The key elements include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Subject &amp;amp; Composition:&lt;/strong&gt; What is the core focus of the image, and how is it framed? (&lt;code&gt;A lone astronaut standing on a cliff overlooking a neon-lit alien city&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Style &amp;amp; Medium:&lt;/strong&gt; Is it a photograph, an oil painting, a 3D render, a comic book illustration? (&lt;code&gt;in the style of a gritty 1980s anime, cel-shaded&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Artist &amp;amp; Influence:&lt;/strong&gt; Referencing specific artists or art movements is a powerful shortcut to a desired aesthetic (&lt;code&gt;inspired by the work of Moebius and Katsuhiro Otomo&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Technical Parameters:&lt;/strong&gt; Camera angles, lens types, lighting, and color palettes provide fine-grained control (&lt;code&gt;dynamic low-angle shot, cinematic lighting, volumetric haze, vibrant cyberpunk color palette&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Quality &amp;amp; Detail:&lt;/strong&gt; Keywords that guide the model towards a higher level of detail and realism (&lt;code&gt;hyper-detailed, intricate, 8K, trending on ArtStation&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
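&lt;p&gt;The layered structure above lends itself to simple composition. Here is a sketch of a prompt builder that joins the layers in a fixed order; the field names are ours, not a requirement of Midjourney or Stable Diffusion:&lt;/p&gt;

```python
# Composing a layered visual prompt. Field names and ordering are
# illustrative, not a Midjourney or Stable Diffusion requirement.

def build_visual_prompt(subject, style, influence, technical, quality) -> str:
    layers = [subject, style, influence, technical, quality]
    # Skip any empty layer so partial prompts still compose cleanly.
    return ", ".join(layer for layer in layers if layer)

prompt = build_visual_prompt(
    subject="A lone astronaut standing on a cliff overlooking a neon-lit alien city",
    style="in the style of a gritty 1980s anime, cel-shaded",
    influence="inspired by the work of Moebius and Katsuhiro Otomo",
    technical="dynamic low-angle shot, cinematic lighting, volumetric haze",
    quality="hyper-detailed, intricate, 8K",
)
print(prompt)
```

&lt;p&gt;Keeping each layer as a separate argument makes it easy to hold the subject constant while iterating on style or lighting, which is how most visual prompt refinement happens in practice.&lt;/p&gt;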

&lt;p&gt;Mastering this requires a different kind of expertise—one that blends technical knowledge with a deep understanding of art history, photography, and cinematography. It's a field where discovering and sharing effective prompts is a core part of the creative process. Furthermore, the prompt structure can vary significantly between models. A prompt that works well in Midjourney might produce a completely different result in Stable Diffusion, requiring a deep understanding of each model's unique characteristics and training data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Prompt-First Culture
&lt;/h2&gt;

&lt;p&gt;As AI becomes more deeply integrated into business operations, treating prompts as an afterthought is a recipe for failure. Companies that succeed will be those that build a "prompt-first" culture. This means:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Centralized Prompt Libraries:&lt;/strong&gt; Creating a version-controlled repository of tested, optimized, and approved prompts for common tasks, using tools like Git and specialized platforms for prompt management. This ensures consistency and allows teams to build on each other's work.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Dedicated Prompt Engineers:&lt;/strong&gt; Recognizing prompt engineering as a formal role, responsible for designing, testing, and maintaining the prompts that power applications. This role requires a unique blend of technical skills (Python, APIs), linguistic creativity, and a deep understanding of the business domain.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Performance Monitoring:&lt;/strong&gt; Continuously evaluating prompt performance against key business metrics. Are the responses from the sales bot leading to higher conversion? Is the code generated by the developer assistant reducing bugs? This requires a robust analytics and evaluation framework.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Cross-Functional Collaboration:&lt;/strong&gt; Product managers, engineers, and domain experts must work together to design prompts that are technically robust, aligned with business goals, and grounded in real-world knowledge. This collaborative process is essential for creating prompts that are not just technically correct, but also effective in a business context.&lt;/li&gt;
&lt;/ol&gt;
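&lt;p&gt;A centralized prompt library entry, as in point 1 above, might be as simple as versioned metadata stored alongside the prompt text. The schema below is a hypothetical example, not an established standard:&lt;/p&gt;

```python
# Hypothetical schema for a version-controlled prompt library entry.
# Field names and values are invented for illustration.

import json

entry = {
    "id": "support/login-troubleshooting",
    "version": "2.3.0",
    "owner": "support-platform-team",
    "model": "gpt-4o",  # target model is an assumption for this example
    "approved": True,
    "metrics": {"resolution_rate": 0.81, "escalation_rate": 0.07},
    "prompt": "You are an expert customer support agent for InnovateCloud...",
}

# Stored as JSON in Git, the entry can be diffed, reviewed, and
# rolled back exactly like application code.
print(json.dumps(entry, indent=2))
```

&lt;p&gt;Attaching live metrics to each entry is what connects the library to the performance-monitoring practice in point 3.&lt;/p&gt;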

&lt;h2&gt;
  
  
  The Future of Prompt Engineering
&lt;/h2&gt;

&lt;p&gt;So, what does the future hold for prompt engineering? It's unlikely to be a fleeting trend. Instead, we'll see it evolve in several key directions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Increased Specialization:&lt;/strong&gt; We'll see the rise of specialized prompt engineers for different domains, such as legal, medical, and financial prompting, where domain expertise is paramount.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Greater Automation:&lt;/strong&gt; The trend of automated prompt optimization will accelerate, with more sophisticated tools and frameworks that can autonomously generate and refine prompts.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Multimodal Prompting:&lt;/strong&gt; The future of prompting is not just text. We'll see the rise of multimodal prompts that combine text, images, and even audio to create richer and more nuanced instructions for AI models.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;The Prompt as an Interface:&lt;/strong&gt; As AI becomes more ambient, the prompt will become a more natural and intuitive interface for interacting with technology, moving beyond the text box to voice commands, gestures, and even brain-computer interfaces.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The era of casual AI conversation is over. We are now in the age of intentional, engineered interaction. The prompt is no longer just a query; it is a carefully crafted instruction set, a miniature piece of software, and a product in its own right. The companies that master the discipline of prompt engineering will be the ones that build the next generation of truly intelligent, reliable, and transformative AI applications. The magic is real, but it's time to start engineering it.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
