<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ayush</title>
    <description>The latest articles on Forem by Ayush (@mackayush).</description>
    <link>https://forem.com/mackayush</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1230566%2Fc79ea9d0-bca6-4d44-bd47-d7894672f1f2.png</url>
      <title>Forem: Ayush</title>
      <link>https://forem.com/mackayush</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mackayush"/>
    <language>en</language>
    <item>
      <title>Getting started with LLM and LangChain</title>
      <dc:creator>Ayush</dc:creator>
      <pubDate>Wed, 13 Mar 2024 14:02:46 +0000</pubDate>
      <link>https://forem.com/intuitdev/getting-started-with-llm-and-langchain-fni</link>
      <guid>https://forem.com/intuitdev/getting-started-with-llm-and-langchain-fni</guid>
      <description>&lt;p&gt;Large Language Models (LLMs) have revolutionized the field of Natural Language Processing (NLP) and Artificial Intelligence (AI). However, building and training LLMs can be a complex and resource-intensive process, requiring significant computational power and expertise. That's where &lt;a href="https://www.langchain.com/"&gt;langchain&lt;/a&gt; comes in—LangChain provides a range of tools and features that address various complexities with building applications using LLMs, such as prompt crafting, response structuring, and model integration.&lt;/p&gt;

&lt;p&gt;In this blog post, we will go over some 101-level information on LLMs, and then walk through a demo on how to build a very simple LLM-based application to query for information. Let's dive into the world of Large Language Models and explore the ways in which LangChain is making them accessible and affordable for everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  First, what are Large Language Models?
&lt;/h2&gt;

&lt;p&gt;Large language models (LLMs) are a class of artificial intelligence (AI) algorithms that have been trained on massive amounts of text data, for example, text from books, articles, websites, or other written content. LLMs use this data to learn how to predict the next word or words that will occur in a sequence. Given some initial words in a sentence or paragraph, an LLM can generate a natural-sounding continuation of the text.&lt;/p&gt;
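&lt;p&gt;The "predict the next word" idea can be illustrated with a toy bigram model. This is only a concept sketch on a made-up corpus; real LLMs use deep neural networks trained on vastly more data:&lt;/p&gt;

```python
from collections import Counter, defaultdict

# Toy training corpus; a real LLM trains on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```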

&lt;p&gt;There are several different types of LLMs, each with its own unique architecture and training methodologies. Some of the most popular types of LLMs include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Recurrent Neural Networks (RNNs): These neural networks are designed to handle sequential data, processing text one token at a time while carrying forward context from earlier in the sequence, which lets them model long sentences or paragraphs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Transformers: Transformers were introduced in 2017 by researchers at Google and have since become among the most popular LLM designs. Transformers are trained using a &lt;a href="https://en.wikipedia.org/wiki/Self-supervised_learning"&gt;self-supervised learning&lt;/a&gt; method that enables them to analyze and encode vast amounts of text data while still maintaining the context of the information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generative Pre-trained Transformer 3 (GPT-3): Released by OpenAI in 2020, this is one of the most advanced LLMs of its generation, with 175 billion parameters. GPT-3 is a neural network-based language model that is remarkably capable at generating and understanding human language.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Large language models have many applications, including natural language search engines, text-to-speech systems, chatbots and automated customer service, and language translation software. As LLMs continue to improve, they are becoming increasingly useful tools for businesses in a wide range of industries.&lt;/p&gt;

&lt;p&gt;LLMs are already starting to change the way we interact with written content, and as their capabilities continue to grow, businesses and other organizations are starting to realize the benefits of using these systems to streamline their operations. For those interested in AI’s future, large language models are an exciting area to follow, and LangChain is a great platform to learn more and build new AI skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LangChain?
&lt;/h2&gt;

&lt;p&gt;LangChain offers a comprehensive framework for creating applications that leverage the power of large language models. With LangChain, users are able to develop chatbots, personal assistants, and other AI-powered tools that can perform a range of tasks such as summarization, analysis, Q&amp;amp;A generation, code comprehension, API interaction, and more.&lt;/p&gt;

&lt;p&gt;The LangChain framework comes equipped with pre-designed chains for efficiently executing complex tasks. Users can customize existing chains or build entirely new ones by utilizing the vast library of components that LangChain offers.&lt;/p&gt;

&lt;p&gt;The LangChain framework has numerous &lt;a href="https://python.langchain.com/docs/integrations/components"&gt;components&lt;/a&gt; that can be used to develop custom applications. These components include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompts&lt;/strong&gt;: Prompt templates streamline user input to ensure effective communication with a language model. Their primary function is to format user input in a fashion that matches the model's expectations, providing context and specifying inputs so the model can perform the intended task efficiently. As an illustration, a chatbot prompt template may incorporate the user's name alongside their question to enhance contextual relevance and improve the model's performance.&lt;/p&gt;
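&lt;p&gt;The template idea can be sketched in plain Python. The sketch below mimics the fill-in-the-blanks behavior with str.format; it is not the LangChain API itself, whose PromptTemplate class adds input validation on top of the same idea:&lt;/p&gt;

```python
# A prompt template is a format string plus named inputs (concept sketch,
# not LangChain's PromptTemplate class).
chatbot_template = (
    "You are a helpful assistant. The user's name is {name}.\n"
    "Answer their question concisely.\n"
    "Question: {question}"
)

def format_prompt(template, **inputs):
    """Fill user input into the template before sending it to the model."""
    return template.format(**inputs)

prompt = format_prompt(chatbot_template, name="Ayush", question="What is LangChain?")
print(prompt)
```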

&lt;p&gt;&lt;strong&gt;Chains&lt;/strong&gt;: Chains are pre-designed sequences of components that accomplish a specific task by processing input and generating output, with each step feeding the next. Chains can be used as-is or customized to meet the specific requirements of a project, and custom chains can be built by combining multiple components into powerful, flexible applications.&lt;/p&gt;
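&lt;p&gt;The chain idea can be sketched as a pipeline in which each step's output becomes the next step's input. The steps below are hypothetical stand-ins for a prompt step, a model call, and a post-processing step, not real LangChain components:&lt;/p&gt;

```python
# Concept sketch of a chain: each step transforms the previous step's output.
def build_prompt(question):
    return f"Answer briefly: {question}"

def fake_model_call(prompt):
    # Stand-in for a real LLM call.
    return f"[model response to: {prompt}]"

def strip_brackets(text):
    return text.strip("[]")

def run_chain(steps, user_input):
    value = user_input
    for step in steps:
        value = step(value)
    return value

result = run_chain([build_prompt, fake_model_call, strip_brackets], "What is a chain?")
print(result)  # model response to: Answer briefly: What is a chain?
```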

&lt;p&gt;&lt;strong&gt;Agents&lt;/strong&gt;: Agents use a language model to decide which actions to take and in what order. Rather than following a fixed sequence of steps like a chain, an agent receives user input, chooses a tool or API to call, observes the result, and repeats until it can produce an appropriate answer. LangChain agents handle different input/output formats, language models, and APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs&lt;/strong&gt;: Large language models are at the core of LangChain's functionality. They analyze, summarize, generate Q&amp;amp;A, and understand code. LangChain provides a common interface to many model providers, so users can plug in a hosted model or a self-hosted one and apply it to virtually any type of data or use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Callbacks&lt;/strong&gt;: Callbacks let developers hook into the various stages of a LangChain run to add custom logic, for example to log or monitor progress, stream intermediate results, modify the output generated by the language model, or adjust the input before it reaches the model.&lt;/p&gt;
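&lt;p&gt;The idea can be sketched as hooks that fire before and after a model call. This is a simplified analogy, not LangChain's actual callback-handler interface:&lt;/p&gt;

```python
# Simplified callback sketch: optional hooks around a model call.
def run_with_callbacks(model, user_input, on_start=None, on_end=None):
    if on_start:
        user_input = on_start(user_input)   # e.g. normalize the input
    output = model(user_input)
    if on_end:
        output = on_end(output)             # e.g. post-process the output
    return output

fake_model = lambda text: f"echo: {text}"

result = run_with_callbacks(
    fake_model,
    "  Hello  ",
    on_start=str.strip,   # trim whitespace before the call
    on_end=str.upper,     # uppercase the response afterwards
)
print(result)  # ECHO: HELLO
```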

&lt;p&gt;&lt;strong&gt;Memory&lt;/strong&gt;: Memory allows users to store data across multiple sessions, enabling LangChain to provide a more personalized experience. Memory can be used to store user preferences, chat history, and interaction patterns, allowing LangChain to learn from previous interactions and adapt to users' needs.&lt;/p&gt;
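&lt;p&gt;Conversation memory can be pictured as a running transcript that is replayed as context in each new prompt. A minimal sketch (LangChain ships several ready-made memory classes built on this idea):&lt;/p&gt;

```python
# Minimal conversation-memory sketch: store turns, replay them as context.
class ConversationMemory:
    def __init__(self):
        self.turns = []

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def as_context(self):
        """Render the stored history for inclusion in the next prompt."""
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)

memory = ConversationMemory()
memory.add("User", "My name is Ayush.")
memory.add("Assistant", "Nice to meet you, Ayush!")

# The next prompt carries the history, so the model can answer consistently.
next_prompt = memory.as_context() + "\nUser: What is my name?"
print(next_prompt)
```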

&lt;p&gt;&lt;strong&gt;Output Parsers&lt;/strong&gt;: Output parsers organize the raw response an LLM generates. A parser may eliminate unwanted or irrelevant content, add supplementary information, or alter the response format. As a significant component of the LangChain framework, output parsers help maintain the relevance, accuracy, and structure of results, making the LLM's responses easier to interpret and more practical to use.&lt;/p&gt;
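&lt;p&gt;As a concrete example, a simple parser might turn a comma-separated model response into a Python list. This is a hand-rolled sketch; LangChain provides ready-made parsers for common formats like this:&lt;/p&gt;

```python
# Sketch of an output parser: turn raw model text into structured data.
def parse_comma_separated_list(response):
    """Split a model's comma-separated answer into a clean Python list."""
    return [item.strip() for item in response.split(",") if item.strip()]

raw_response = " red, green , blue,"
print(parse_comma_separated_list(raw_response))  # ['red', 'green', 'blue']
```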

&lt;p&gt;One of the most significant benefits that LangChain offers is the ability to access LLMs without expending high development costs or needing extensive technical expertise. LangChain's suite of tools delivers valuable assistance for prompt crafting, response structuring, model integration, and other programming-related requirements.&lt;/p&gt;

&lt;p&gt;Now that we're familiar with the concepts, let's walk through a simple example to better understand how various components work together. We'll build a service that queries a public cat-facts API to find out whether Wikipedia has a cat recording (who doesn't love a cat video?):&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation and Setup
&lt;/h2&gt;

&lt;p&gt;Let’s first configure the environment. Assuming Python is already installed in your working environment, the next step is to install the LangChain library using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install langchain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we are going to use OpenAI's language models, we will install the OpenAI SDK as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, as a critical step in connecting to OpenAI's API, it is necessary to set the OPENAI_API_KEY as an environment variable. First, sign in to your OpenAI account and navigate to account settings. Then, head over to the &lt;a href="https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key"&gt;View API Keys&lt;/a&gt; option and generate a secret key, which you will need to copy. Within your Python script, use the os module to access the dictionary of available environment variables and set the "OPENAI_API_KEY" to the secret API key you copied earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Get predictions&lt;/strong&gt;&lt;br&gt;
Import the LLM wrapper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.llms import OpenAI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then initialize the wrapper, passing any supported arguments such as the API key or a verbosity flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gen_llm = OpenAI(openai_api_key="...", verbose=False)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize the APIChain to query from the API endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chains import APIChain

chain_new = APIChain.from_llm_and_api_docs(gen_llm, "https://cat-fact.herokuapp.com/facts/", verbose=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we’re ready to ask a question to the LLM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chain_new.run('Does wikipedia have a cat recording?')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get an output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Entering new APIChain chain...
https://cat-fact.herokuapp.com/facts/


[{"status":{"verified":true,"sentCount":1},"_id":"58e008800aac31001185ed07","user":"58e007480aac31001185ecef","text":"Wikipedia has a recording of a cat meowing, because why not?","__v":0,"source":"user","updatedAt":"2020-08-23T20:20:01.611Z","type":"cat","createdAt":"2018-03-06T21:20:03.505Z","deleted":false,"used":false},{"status":{"verified":true,"sentCount":1},"_id":"58e008630aac31001185ed01","user":"58e007480aac31001185ecef","text":"When cats grimace, they are usually \"taste-scenting.\" They have an extra organ that, with some breathing control, allows the cats to taste-sense the air.","__v":0,"source":"user","updatedAt":"2020-08-23T20:20:01.611Z","type":"cat","createdAt":"2018-02-07T21:20:02.903Z","deleted":false,"used":false},{"status":{"verified":true,"sentCount":1},"_id":"58e00a090aac31001185ed16","user":"58e007480aac31001185ecef","text":"Cats make more than 100 different sounds whereas dogs make around 10.","__v":0,"source":"user","updatedAt":"2020-08-23T20:20:01.611Z","type":"cat","createdAt":"2018-02-11T21:20:03.745Z","deleted":false,"used":false},{"status":{"verified":true,"sentCount":1},"_id":"58e009390aac31001185ed10","user":"58e007480aac31001185ecef","text":"Most cats are lactose intolerant, and milk can cause painful stomach cramps and diarrhea. It's best to forego the milk and just give your cat the standard: clean, cool drinking water.","__v":0,"source":"user","updatedAt":"2020-08-23T20:20:01.611Z","type":"cat","createdAt":"2018-03-04T21:20:02.979Z","deleted":false,"used":false},{"status":{"verified":true,"sentCount":1},"_id":"58e008780aac31001185ed05","user":"58e007480aac31001185ecef","text":"Owning a cat can reduce the risk of stroke and heart attack by a third.","__v":0,"source":"user","updatedAt":"2020-08-23T20:20:01.611Z","type":"cat","createdAt":"2018-03-29T20:20:03.844Z","deleted":false,"used":false}]

&amp;gt; Finished chain.

Out[4]:
' Yes, Wikipedia has a recording of a cat meowing.'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! Your basic setup of an LLM-based application using LangChain is now working successfully.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building AI-powered search using LangChain and Milvus</title>
      <dc:creator>Ayush</dc:creator>
      <pubDate>Mon, 11 Dec 2023 06:11:32 +0000</pubDate>
      <link>https://forem.com/intuitdev/building-ai-powered-search-using-langchain-and-milvus-ipc</link>
      <guid>https://forem.com/intuitdev/building-ai-powered-search-using-langchain-and-milvus-ipc</guid>
      <description>&lt;p&gt;This blog is co-authored by &lt;strong&gt;Ayush Pandey&lt;/strong&gt;, Senior Software Engineer and &lt;strong&gt;Amit Kaushal&lt;/strong&gt;, Software Manager at Intuit.&lt;/p&gt;

&lt;p&gt;Artificial Intelligence (AI) has revolutionized the way we interact with technology, and one of the most significant applications of AI is in search. With the help of AI, search tools can surface more accurate and relevant results to users. In this blog, we will discuss how to build an AI-powered search engine using LangChain and Milvus.&lt;/p&gt;

&lt;p&gt;Before we dive into the demo, let’s talk through some of the concepts and tools involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Langchain?
&lt;/h2&gt;

&lt;p&gt;LangChain is a framework for developing applications powered by language models. Use cases include applications for document question answering, building conversational interfaces for database interactions, and much more. The most powerful and differentiated applications will not only leverage a language model, but will also be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data-aware&lt;/strong&gt;: connect a language model to other sources of data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agentic&lt;/strong&gt;: allow a language model to interact with its environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LangChain provides the modular components used to build such applications, which can be used standalone or combined for more complexity.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Milvus?
&lt;/h2&gt;

&lt;p&gt;Milvus was created in 2019 with a singular goal: to store, index, and manage massive embedding vectors generated by deep neural networks and other machine learning (ML) models.&lt;/p&gt;

&lt;p&gt;As a database specifically designed to handle queries over input vectors, it is capable of indexing vectors on a trillion scale. Unlike existing relational databases, which mainly deal with structured data following a pre-defined pattern, Milvus is designed from the bottom-up to handle embedding vectors converted from unstructured data.&lt;/p&gt;
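&lt;p&gt;The core query a vector database answers, "which stored vectors are closest to this one?", can be sketched as a brute-force search. The document names and vectors below are invented for illustration; Milvus replaces this linear scan with approximate indexes that scale to billions of vectors:&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "database" of embeddings; in practice an ML model produces these.
database = {
    "doc_invoices": [0.9, 0.1, 0.0],
    "doc_payroll":  [0.1, 0.9, 0.1],
    "doc_taxes":    [0.8, 0.2, 0.1],
}

def search(query_vector, top_k=2):
    """Return the top_k stored documents most similar to the query vector."""
    ranked = sorted(
        database,
        key=lambda name: cosine_similarity(query_vector, database[name]),
        reverse=True,
    )
    return ranked[:top_k]

print(search([1.0, 0.0, 0.0]))  # ['doc_invoices', 'doc_taxes']
```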
&lt;h2&gt;
  
  
  Vector embeddings: why the hype?
&lt;/h2&gt;

&lt;p&gt;Vector embeddings are a powerful tool for developers working with natural language processing (NLP) and ML applications. Vector embeddings are a way of representing words or phrases as vectors in a high-dimensional space, where each dimension represents a different feature of the word or phrase. This allows developers to perform complex operations on text data, such as sentiment analysis, text classification, and machine translation.&lt;/p&gt;

&lt;p&gt;Let's go over a simple explainer of semantic features. Scientists sort the world's animals into categories based on shared characteristics; for example, birds are warm-blooded vertebrates adapted for flight. Based on characteristics like these, we can assign each animal word numerical coordinates, here a Type score and a Domestication score. These scores are called "semantic features," and they capture parts of the meaning of each word. Once the words have numerical values, we can plot them as points on a graph, where the x-axis represents Type and the y-axis represents Domestication score.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6zwt4m4dsz9ehel6qbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6zwt4m4dsz9ehel6qbt.png" alt="Word Coordinates for Animal types"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzipiud499cur3wi5ilee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzipiud499cur3wi5ilee.png" alt="Word Coordinates for Animal types and plots in graph"&gt;&lt;/a&gt;&lt;br&gt;
We can add new words to the plot based on their meanings. For example, where should the words "Lions" and "Parrots" go? How about "Whales"? Or "Snakes"?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvp8i1jy6jdl1xd9e0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvp8i1jy6jdl1xd9e0d.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
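&lt;p&gt;The animal-coordinates idea above can be sketched directly in code. The feature values below are invented for illustration; real embeddings have hundreds of dimensions learned from data rather than two hand-picked ones:&lt;/p&gt;

```python
import math

# Hand-picked 2-D "semantic features": (type score, domestication score).
animals = {
    "dog":    (0.2, 0.9),
    "cat":    (0.2, 0.8),
    "lion":   (0.2, 0.1),
    "parrot": (0.8, 0.7),
}

def distance(a, b):
    """Euclidean distance between two animals' feature points."""
    return math.dist(animals[a], animals[b])

# Nearby points mean similar words: the closest animal to "cat" is "dog".
nearest = min(("dog", "lion", "parrot"), key=lambda other: distance("cat", other))
print(nearest)  # dog
```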

&lt;p&gt;There are also several libraries and tools available for developers who want to work with vector embeddings. Some popular libraries include &lt;a href="https://pypi.org/project/gensim/" rel="noopener noreferrer"&gt;Gensim&lt;/a&gt;, &lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt;, and &lt;a href="https://pytorch.org/" rel="noopener noreferrer"&gt;PyTorch&lt;/a&gt;. These libraries provide pre-trained models for &lt;a href="https://www.tensorflow.org/text/tutorials/word2vec" rel="noopener noreferrer"&gt;word2vec&lt;/a&gt; and &lt;a href="https://nlp.stanford.edu/projects/glove/" rel="noopener noreferrer"&gt;GloVe&lt;/a&gt;, as well as tools for training custom models on specific datasets.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo: Using similarity search to ask questions about a Wikipedia article
&lt;/h2&gt;

&lt;p&gt;First, let’s go through some prerequisites.&lt;/p&gt;

&lt;p&gt;Install LangChain and Milvus on your local system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m pip install --upgrade pymilvus langchain openai tiktoken
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, import required modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
from langchain.document_loaders import WebBaseLoader
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, import your &lt;a href="https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key" rel="noopener noreferrer"&gt;OpenAI API key&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
os.environ['OPENAI_API_KEY'] = "your-openai-api-key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, load in a Wikipedia document (here we're grabbing the article for Intuit QuickBooks) using the WebBaseLoader client, and split it into chunks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;loader = WebBaseLoader([
   "https://en.wikipedia.org/wiki/QuickBooks",
])

docs = loader.load()
# Split the documents into smaller chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(docs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, use OpenAIEmbeddings and store everything in a Milvus vector database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
   docs,
   embeddings,
   connection_args={"host": "HostName", "port": "19530"},
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s time to try semantic searching! Let’s ask a question using LangChain and Milvus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query = "What is quickbooks?"
docs = vector_db.similarity_search(query)
docs[0].page_content
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;'Retrieved from "https://en.wikipedia.org/w/index.php?title=QuickBooks&amp;amp;oldid=1155606425"\nCategories: Accounting softwareIntuit softwareHidden categories: CS1 maint: url-statusArticles with short descriptionShort description is different from WikidataUse mdy dates from March 2019Articles containing potentially dated statements from May 2014All articles containing potentially dated statements\n\n\n\n\n\n\n This page was last edited on 18 May 2023, at 23:04\xa0(UTC).\nText is available under the Creative Commons Attribution-ShareAlike License 3.0;\nadditional terms may apply.  By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.\n\n\nPrivacy policy\nAbout Wikipedia\nDisclaimers\nContact Wikipedia\nMobile view\nDevelopers\nStatistics\nCookie statement\n\n\n\n\n\n\n\n\n\n\n\nToggle limited content width'&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The results above are decent, but need quite a lot of formatting help. Let's try using &lt;code&gt;load_qa_with_sources_chain&lt;/code&gt; to ask the questions instead for a cleaner output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
query = "What is quickbooks?"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;code&gt;{'intermediate_steps': [' No relevant text.',&lt;br&gt;
 ' QuickBooks is an accounting software package developed and marketed by Intuit. First introduced in 1983, QuickBooks products are geared mainly toward small and medium-sized businesses and offer on-premises accounting applications as well as cloud-based versions that accept business payments, manage and pay bills, and payroll functions.',&lt;br&gt;
 " Intuit also offers a cloud service called QuickBooks Online (QBO). The user pays a monthly subscription fee rather than an upfront fee and accesses the software exclusively through a secure logon via a Web browser. QuickBooks Online is supported on Chrome, Firefox, Internet Explorer 10, Safari 6.1, and also accessible via Chrome on Android and Safari on iOS 7. Quickbooks Online offers integration with other third-party software and financial services, such as banks, payroll companies, and expense management software. QuickBooks desktop also supports a migration feature where customers can migrate their desktop data from a pro or prem SKU's to Quickbooks Online.",&lt;br&gt;
 ' QuickBooks - Wikipedia \nInitial release, Subsequent releases, QuickBooks Online, QuickBooks Point of Sale, Add-on programs.'],&lt;br&gt;
'output_text': ' QuickBooks is an accounting software package developed and marketed by Intuit. It offers on-premises accounting applications as well as cloud-based versions that accept business payments, manage and pay bills, and payroll functions. QuickBooks Online is a cloud service that offers integration with other third-party software and financial services.\nSOURCES: https://en.wikipedia.org/wiki/QuickBooks'}&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Other search-related use cases using LangChain and Milvus:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;E-commerce search engine: the language model can be trained on product descriptions and reviews, and the text can be converted into vectors with an embedding model. The vectors can then be indexed in Milvus, and a search interface can be built to retrieve relevant products based on user queries.&lt;/li&gt;
&lt;li&gt;Image search engine: the language model can be trained on image captions and tags, and the captions can be converted into vectors with an embedding model. The vectors can then be indexed in Milvus, and a search interface can be built to retrieve relevant images based on user queries.&lt;/li&gt;
&lt;li&gt;Video search engine: the language model can be trained on video titles and descriptions, and those descriptions can be converted into vectors with an embedding model. The vectors can then be indexed in Milvus, and a search interface can be built to retrieve relevant videos based on user queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By following the simple steps we've outlined here, developers can use LangChain and Milvus to build search engines for use cases ranging from simple document search to e-commerce, image, and video search. We hope this was a helpful starter guide. Please leave a comment if you have any further questions!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;References and further reading:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/get_started/introduction.html" rel="noopener noreferrer"&gt;Langchain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://milvus.io/" rel="noopener noreferrer"&gt;Milvus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key" rel="noopener noreferrer"&gt;How to find OpenAI key&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/QuickBooks" rel="noopener noreferrer"&gt;QuickBooks Wikipedia page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cs.cmu.edu/~dst/WordEmbeddingDemo/tutorial.html" rel="noopener noreferrer"&gt;Vector Embedding&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
