<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Allen Firstenberg</title>
    <description>The latest articles on Forem by Allen Firstenberg (@afirstenberg).</description>
    <link>https://forem.com/afirstenberg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F18394%2F045c3c0e-a8b2-47a0-9b35-eb23c6cdf521.jpeg</url>
      <title>Forem: Allen Firstenberg</title>
      <link>https://forem.com/afirstenberg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/afirstenberg"/>
    <language>en</language>
    <item>
      <title>What's Coming with LangChainJS and Gemini?</title>
      <dc:creator>Allen Firstenberg</dc:creator>
      <pubDate>Thu, 08 Jan 2026 22:53:08 +0000</pubDate>
      <link>https://forem.com/gde/whats-coming-with-langchainjs-and-gemini-4ocf</link>
      <guid>https://forem.com/gde/whats-coming-with-langchainjs-and-gemini-4ocf</guid>
      <description>&lt;p&gt;The past few months have been &lt;strong&gt;huge&lt;/strong&gt; for both Gemini and LangChainJS! I've been busy trying to keep up with this (and a lot more), but as the year comes to a close, I wanted to take a moment and let folks know about the exciting developments going on at the junction of the two and let you know what's coming "real soon now!"&lt;/p&gt;

&lt;p&gt;LangChainJS finally hit its 1.0 milestone, and with it came a host of new features. At the same time, the API has stabilized, so we know what we're working with and what to expect going forward.&lt;/p&gt;

&lt;p&gt;Gemini also hit a milestone with Gemini 3 coming out and Gemini 2 being shut down in a few months. There have also been some fantastic multimodal models in the past few months (can we say Nano Banana enough?), and the LangChainJS Gemini libraries have barely been able to keep up with some of these developments. &lt;/p&gt;

&lt;p&gt;I want to take a quick look at how we got here with the LangChainJS Gemini libraries to understand where we're going. But if you're impatient and just want to see what's coming, skip ahead a section. I don't think you'll be disappointed.&lt;/p&gt;

&lt;h2&gt;How we got &lt;em&gt;(gestures)&lt;/em&gt; here&lt;/h2&gt;

&lt;p&gt;Currently, there are a dizzying number of packages available to use Gemini with LangChainJS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;@langchain/google-genai&lt;/strong&gt; was based on the previous version of Google's 
Generative AI package. It was designed to work just with the AI Studio API 
(often confusingly called the Gemini API) and not with Vertex AI's Gemini 
API. It has not been maintained in roughly a year, and the library it uses 
is not designed to work with modern versions of the Gemini model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;@langchain/google-gauth&lt;/strong&gt; was a REST-based library, used when running 
in a Google-hosted environment or another Node-like system to access 
either the AI Studio API or the Vertex AI API.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;@langchain/google-webauth&lt;/strong&gt; was similar to @langchain/google-gauth, but 
was designed to work in environments where there was no access to a file 
system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;@langchain/google-vertexai&lt;/strong&gt; and &lt;strong&gt;@langchain/google-vertexai-web&lt;/strong&gt; were 
similar to the above, but defaulted to using Vertex AI, although they 
could also use the AI Studio API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There was also another package, &lt;strong&gt;@langchain/google-common&lt;/strong&gt; which was the package that all the REST-based versions relied on to do the actual work.&lt;/p&gt;

&lt;p&gt;As the person maintaining the REST-based packages, I always found this somewhat frustrating. The original goal was to have just one package. It was meant to use REST, since Google's libraries at the time (late 2023 and early 2024) each supported only one of the Vertex AI or AI Studio APIs. I wanted one library to make things easier. Well... best laid plans...&lt;/p&gt;

&lt;p&gt;That one library became four. First, because we couldn't find an easy solution to support both node-based and web-based platforms and still support Google Cloud's Application Default Credentials (ADC). And then because we wanted a clear "Vertex" labeled package to match what was on the Python side (where both packages were written by Google).&lt;/p&gt;

&lt;p&gt;By the time I had that working in January of 2024, someone at Google had already written a version that just worked with the AI Studio API side. And thus began the confusion.&lt;/p&gt;

&lt;p&gt;I've been proud of the google-common based libraries. We tried many things that the community wanted - we had cross-platform compatibility for over a year before Google offered a library that did the same thing, and when Gemini 2.0 launched, we had compatibility within days, while Google took over a month to get its new JavaScript library out. We experimented with features such as a Security Manager and a Media Manager. We supported other models besides Gemini, with the first two being Gemma on the AI Studio platform and those from Anthropic on Vertex AI.&lt;/p&gt;

&lt;p&gt;But the packages were confusing, and one was outdated. So it was time to find a better solution.&lt;/p&gt;
&lt;h2&gt;Simplifying the packaging&lt;/h2&gt;

&lt;p&gt;I'm thrilled that, going forward, we'll be supporting one package: &lt;br&gt;
&lt;strong&gt;@langchain/google&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;pause for cheers and applause&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As always, you'll be able to install it using the package manager of your choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum add @langchain/google
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the library should feel familiar. If you're currently using the &lt;code&gt;ChatGoogle&lt;/code&gt; class, you'll continue to do so, just from a new library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ChatGoogle&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@langchain/google&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ChatGoogle&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3-pro-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll also be able to use the new LangChainJS 1 way of creating agents with the &lt;code&gt;createAgent()&lt;/code&gt; function by just specifying the model, and it will use this new library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createAgent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3-flash-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Until these changes are in place, this may not do what you expect. So beware.)&lt;/p&gt;

&lt;p&gt;Just like the old libraries, the new library will continue to support API keys from both AI Studio and Vertex AI Express mode, as well as Google Cloud credentials for service accounts and individuals. Credentials can be provided explicitly in code, loaded through environment variables where available, or resolved through ADC. &lt;/p&gt;
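&lt;p&gt;As a rough sketch of that resolution order (the option and environment variable names here are illustrative assumptions, not the final &lt;code&gt;@langchain/google&lt;/code&gt; API):&lt;/p&gt;

```typescript
// Hypothetical sketch of the credential resolution order described above.
// Option names are assumptions; the real @langchain/google API may differ.
type GoogleAuthOptions = {
  apiKey?: string;      // explicit API key (AI Studio or Vertex AI Express mode)
  credentials?: object; // explicit service-account credentials
};

// Pick the first available source: explicit code, then the environment,
// then fall back to Application Default Credentials (ADC).
function resolveAuth(options: GoogleAuthOptions): string {
  if (options.apiKey) return "api-key";
  if (options.credentials) return "explicit-credentials";
  if (process.env.GOOGLE_API_KEY) return "env-api-key";
  return "adc";
}

const mode = resolveAuth({ apiKey: "AIza-example" });
```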

&lt;p&gt;This new library uses REST behind the scenes, so it doesn't depend on any Google library to communicate with Gemini. I learned a lot while building the original REST version, and worked closely with LangChainJS engineers to try and avoid some of the worst mistakes we made back then. Our hope is that this new library becomes a model for how REST-based libraries can look and work for other integrations.&lt;/p&gt;

&lt;p&gt;Despite this, our goal was largely to keep existing code working, so you wouldn't need to make big code changes. &lt;/p&gt;

&lt;p&gt;But LangChainJS 1 brings with it a lot of new features. And this new library is ready to use them.&lt;/p&gt;

&lt;h2&gt;Improved (and standard!) text and multimodal support&lt;/h2&gt;

&lt;p&gt;While there are many great features with both LangChainJS 1 and Gemini 3, I want to highlight one of the biggest new features that this library will be supporting.&lt;/p&gt;

&lt;p&gt;LangChainJS 0 was mostly oriented around text - which all models supported when it was created. As models began to support multimodal input, and eventually output, the implementation was a bit haphazard and different for each model. LangChainJS 1 sought to standardize that.&lt;/p&gt;

&lt;h3&gt;Better ways to handle replies - text and multimodal&lt;/h3&gt;

&lt;p&gt;Previously, the &lt;code&gt;response.content&lt;/code&gt; field would be either a string or an array of &lt;code&gt;MessageContentComplex&lt;/code&gt; objects. Most code assumed it was a string, but if you needed multimodal support, things quickly got messy.&lt;/p&gt;

&lt;p&gt;LangChainJS 1 keeps &lt;code&gt;response.content&lt;/code&gt; for backwards compatibility, and we've tried to respect that. So if you want text, you can still look here.&lt;/p&gt;
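&lt;p&gt;If you do still read the legacy field, you have to handle both shapes yourself. A minimal sketch, using local stand-in types rather than the real LangChainJS ones:&lt;/p&gt;

```typescript
// Local stand-ins for the legacy LangChainJS content types discussed above;
// the real MessageContentComplex union has more variants than this.
type MessageContentComplex = { type: string; text?: string };
type MessageContent = string | MessageContentComplex[];

// Gather the text out of a legacy `content` value, whichever shape it has.
function contentToText(content: MessageContent): string {
  if (typeof content === "string") {
    return content;
  }
  return content
    .filter((block) => block.type === "text")
    .map((block) => block.text ?? "")
    .join("");
}

const fromString = contentToText("plain text");
const fromBlocks = contentToText([
  { type: "text", text: "multi" },
  { type: "image_url" },
  { type: "text", text: "modal" },
]);
```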

&lt;p&gt;But the better way to get the text parts from the response is to use &lt;code&gt;response.text&lt;/code&gt;, which now guarantees you will get a string. Something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ChatGoogle&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3-flash-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AIMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Why is the sky blue?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to differentiate between the "thinking" or "reasoning" parts of the response and the final response, or if you get multimodal responses back, you can use the new &lt;code&gt;response.contentBlocks&lt;/code&gt; field. This field is guaranteed to be an array of the new, consistent &lt;code&gt;ContentBlock.Standard&lt;/code&gt; objects.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ChatGoogle&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3-pro-image-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Draw a parrot sitting on a chain-link fence.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AIMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentBlocks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;block&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ContentBlock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Standard&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;block&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;saveToFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;block&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Sending multimodal input to Gemini&lt;/h3&gt;

&lt;p&gt;This &lt;code&gt;ContentBlock.Standard&lt;/code&gt; also works for sending data to Gemini. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ChatGoogle&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3-flash-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;src/chat_models/tests/data/blue-square.png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image/png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataPath&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data64&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;base64&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ContentBlock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Standard&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;What is in this image?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;mimeType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dataType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HumanMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;contentBlocks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AIMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same approach works for audio and video input as well.&lt;/p&gt;
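&lt;p&gt;For instance, an audio request could look like this - a hedged sketch that mirrors the image example above, with the block shape assumed rather than confirmed:&lt;/p&gt;

```typescript
// Hedged sketch: the same block shape used for images above, applied to
// audio. The field names are assumptions until the library ships.
type StandardBlock =
  | { type: "text"; text: string }
  | { type: "audio"; data: string; mimeType: string };

// Four bytes spelling "RIFF" stand in for a real WAV file's contents.
const fakeAudio = Buffer.from([0x52, 0x49, 0x46, 0x46]).toString("base64");

const content: StandardBlock[] = [
  { type: "text", text: "Transcribe this clip." },
  { type: "audio", data: fakeAudio, mimeType: "audio/wav" },
];
```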

&lt;h2&gt;What's missing, what's next, and what do you want to see?&lt;/h2&gt;

&lt;p&gt;We plan to release an alpha version of this in early January 2026, with a final version within a month after.&lt;/p&gt;

&lt;p&gt;There is still a lot of discussion about what will happen with the old versions of the library, and the LangChain team and I welcome your thoughts. My current thinking is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They will receive a version bump when the new &lt;code&gt;@langchain/google&lt;/code&gt; is released. &lt;/li&gt;
&lt;li&gt;Both the older versions and this bumped release will be marked as deprecated, pointing to &lt;code&gt;@langchain/google&lt;/code&gt; as the replacement.&lt;/li&gt;
&lt;li&gt;This final release will actually delegate all functionality to &lt;code&gt;@langchain/google&lt;/code&gt; - the old libraries will just be a thin veneer.

&lt;ul&gt;
&lt;li&gt;This will give you a little more time to migrate to the newer features without having to do extensive code changes.&lt;/li&gt;
&lt;li&gt;I can't guarantee full backwards compatibility, but the hope is that such issues will be minimal.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This first release of &lt;code&gt;@langchain/google&lt;/code&gt; is also sure to be missing some features. We'd like to hear your feedback about what is most important to you. For example, here are some features that may not be available on day 1:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embedding support&lt;/li&gt;
&lt;li&gt;Batch support&lt;/li&gt;
&lt;li&gt;Media manager&lt;/li&gt;
&lt;li&gt;Security manager&lt;/li&gt;
&lt;li&gt;Support for non-Gemini models (which are most important to you?)&lt;/li&gt;
&lt;li&gt;Support for Veo and Imagen (how would you like to see these?)&lt;/li&gt;
&lt;li&gt;Google's Gemini Deep Thinking model and the Interactions API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may have other features that you think are important - if so, we'd love to hear which ones. (And if you are willing to help integrate them - let's talk.)&lt;/p&gt;

&lt;p&gt;I, personally, want many of these features. But I want to get your feedback about what my priorities should be.&lt;/p&gt;

&lt;h2&gt;Some Personal Thanks&lt;/h2&gt;

&lt;p&gt;The past few months have been hectic for me, which is part of why this update has been delayed. I appreciate the support from the team at LangChain, from the community, and from my fellow GDEs. It means a lot to me when people tell me they're using Gemini with LangChainJS.&lt;/p&gt;

&lt;p&gt;Thanks to my employer, for encouraging open source work, to LangChain, for providing staff to assist with technical questions, and to Google for providing cloud credits to help make testing these updates possible and for sponsoring the #AISprintH2.&lt;/p&gt;

&lt;p&gt;Very special thanks to Denis, Linda, Steven, Noble, and Mark who have always been there with technical and editorial advice, as well as a friendly voice when times got rough.&lt;/p&gt;

&lt;p&gt;Very very special thanks to my family, who have always been there for me.&lt;/p&gt;

&lt;p&gt;As many of you know, although I am both a Google Developer Expert and a LangChain Champion, I work for neither company. My work for the past two years on this project has been a labor of love because I appreciate the products that both Google and LangChain have delivered, and I want to make both better. I plan to continue that work - and I hope you are also out there, trying to make the world a better place in your own way.&lt;/p&gt;

</description>
      <category>langchain</category>
      <category>gemini</category>
      <category>nanobanana</category>
    </item>
  </channel>
</rss>
