To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.js.langchain.com/classes/langchain_core.runnables.Runnable.html) protocol.
Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way.
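The composition idea behind the protocol can be sketched in plain TypeScript. This is a simplified illustration, not the library's code — the real `Runnable` interface is Promise-based and has many more methods (`stream`, `batch`, and so on); here `invoke` is synchronous for brevity:

```typescript
// Minimal sketch of a Runnable-style protocol: every component exposes
// `invoke`, and `pipe` composes two components into a new one.
// (The real LangChain interface is async and far richer.)
interface MiniRunnable<In, Out> {
  invoke(input: In): Out;
  pipe<Next>(next: MiniRunnable<Out, Next>): MiniRunnable<In, Next>;
}

function fromFn<In, Out>(fn: (input: In) => Out): MiniRunnable<In, Out> {
  return {
    invoke: fn,
    // Piping just chains the two invoke calls.
    pipe<Next>(next: MiniRunnable<Out, Next>): MiniRunnable<In, Next> {
      return fromFn((input: In) => next.invoke(this.invoke(input)));
    },
  };
}

// A toy "prompt template" piped into a toy "model".
const prompt = fromFn((topic: string) => `Tell me a joke about ${topic}`);
const model = fromFn((text: string) => text.toUpperCase());
const chain = prompt.pipe(model);

console.log(chain.invoke("bears"));
// TELL ME A JOKE ABOUT BEARS
```

Because every component shares the same interface, any two components can be chained this way regardless of what they do internally.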
LangChain has many different types of output parsers. This is a list of output parsers LangChain supports.

| Name | Supports Streaming | Input Type | Output Type | Description |
| --- | --- | --- | --- | --- |
|[JSON](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.JsonOutputParser.html)| ✅ |`string`\|`BaseMessage`|`Promise<T>`| Returns a JSON object as specified. You can specify a Zod schema and it will return JSON matching that schema. |
|[XML](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.XMLOutputParser.html)| ✅ |`string`\|`BaseMessage`|`Promise<XMLResult>`| Returns an object of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
|[CSV](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.CommaSeparatedListOutputParser.html)| ✅ |`string`\|`BaseMessage`|`string[]`| Returns an array of comma-separated values. |
|[Structured](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.StructuredOutputParser.html)| |`string`\|`BaseMessage`|`Promise<TypeOf<T>>`| Parses structured JSON from an LLM response. |
|[HTTP](https://v02.api.js.langchain.com/classes/langchain.output_parsers.HttpResponseOutputParser.html)| ✅ |`string`|`Promise<Uint8Array>`| Parses an LLM response to then send over HTTP(S). Useful when invoking the LLM on the server/edge and then sending the content/stream back to the client. |
|[Bytes](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.BytesOutputParser.html)| ✅ |`string`\|`BaseMessage`|`Promise<Uint8Array>`| Parses an LLM response to then send over HTTP(S). Useful for streaming LLM responses from the server/edge to the client. |
|[Datetime](https://v02.api.js.langchain.com/classes/langchain.output_parsers.DatetimeOutputParser.html)| |`string`|`Promise<Date>`| Parses a response into a `Date`. |
|[Regex](https://v02.api.js.langchain.com/classes/langchain.output_parsers.RegexParser.html)| |`string`|`Promise<Record<string, string>>`| Parses the given text using the regex pattern and returns an object with the parsed output. |
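To make the parsing step concrete, here is a sketch in the spirit of the CSV parser above. It is a plain function, not the library's `CommaSeparatedListOutputParser` (which is a `Runnable` and returns promises); only the splitting behavior is illustrated:

```typescript
// Simplified sketch of a comma-separated-list parser: takes the raw text
// of a model completion and returns an array of trimmed, non-empty values.
function parseCommaSeparatedList(text: string): string[] {
  return text
    .split(",")
    .map((item) => item.trim())
    .filter((item) => item.length > 0);
}

const completion = "red, orange, yellow, green, blue";
console.log(parseCommaSeparatedList(completion));
// [ 'red', 'orange', 'yellow', 'green', 'blue' ]
```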
For specifics on how to use output parsers, see the [relevant how-to guides here](/docs/how_to/#output-parsers).
For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/), having some sort of key-value (KV) storage is helpful.
LangChain includes a [`BaseStore`](https://api.js.langchain.com/classes/langchain_core.stores.BaseStore.html) interface,
which allows for storage of arbitrary data. However, LangChain components that require KV-storage accept a
more specific `BaseStore<string, Uint8Array>` instance that stores binary data (referred to as a `ByteStore`), and internally take care of
encoding and decoding data for their specific needs.
This means that as a user, you only need to think about one type of store rather than different ones for different types of data.
#### Interface
All [`BaseStores`](https://api.js.langchain.com/classes/langchain_core.stores.BaseStore.html) support the following interface. Note that the interface allows
for modifying **multiple** key-value pairs at once:
- `mget(keys: string[]): Promise<(undefined | Uint8Array)[]>`: gets the contents of multiple keys, returning `undefined` if a key does not exist
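The batch-oriented shape of this interface can be sketched with a minimal in-memory store. This is an illustration only, not the library's implementation (LangChain ships its own in-memory and persistent stores); the method names and signatures mirror the interface described here:

```typescript
// Minimal in-memory sketch of a BaseStore<string, Uint8Array> ("ByteStore").
// All methods operate on multiple key-value pairs at once.
class InMemoryByteStore {
  private store = new Map<string, Uint8Array>();

  // Get the contents of multiple keys; `undefined` marks missing keys.
  async mget(keys: string[]): Promise<(Uint8Array | undefined)[]> {
    return keys.map((key) => this.store.get(key));
  }

  // Set multiple key-value pairs at once.
  async mset(pairs: [string, Uint8Array][]): Promise<void> {
    for (const [key, value] of pairs) this.store.set(key, value);
  }

  // Delete multiple keys at once.
  async mdelete(keys: string[]): Promise<void> {
    for (const key of keys) this.store.delete(key);
  }
}

const store = new InMemoryByteStore();
await store.mset([["doc:1", new TextEncoder().encode("hello")]]);
const [hit, miss] = await store.mget(["doc:1", "doc:2"]);
// hit decodes to "hello"; miss is undefined
```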
You can subscribe to these events by using the `callbacks` argument available throughout the API.
#### Callback handlers
`CallbackHandlers` are objects that implement the [`CallbackHandler`](https://api.js.langchain.com/interfaces/langchain_core.callbacks_base.CallbackHandlerMethods.html) interface, which has a method for each event that can be subscribed to.
The `CallbackManager` will call the appropriate method on each handler when the event is triggered.
#### Passing callbacks
For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but
you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode
without the need to provide additional config.
The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.js.langchain.com/classes/langchain_core.messages.AIMessageChunk.html).
Because this method is part of [LangChain Expression Language](/docs/concepts/#langchain-expression-language),
you can handle formatting differences from different outputs using an [output parser](/docs/concepts/#output-parsers) to transform
each yielded chunk.
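The consumption pattern is a `for await` loop over the returned async iterable. In this sketch a stand-in async generator plays the role of a chat model's `.stream()` (the real method yields message chunks, not plain strings):

```typescript
// Sketch of the streaming pattern: `.stream()` returns an async iterable of
// chunks, consumed with `for await`. A stand-in generator plays the model.
async function* fakeModelStream(): AsyncGenerator<string> {
  for (const chunk of ["Why ", "did ", "the ", "bear ", "smile?"]) {
    yield chunk;
  }
}

let output = "";
for await (const chunk of fakeModelStream()) {
  // Each yielded chunk can be processed incrementally, e.g. concatenated
  // here or forwarded to a client as it arrives.
  output += chunk;
}
console.log(output); // Why did the bear smile?
```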
#### Callbacks
The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/docs/concepts/#callbacks) system. You can pass a
callback handler that handles the [`handleLLMNewToken`](https://api.js.langchain.com/interfaces/langchain_core.callbacks_base.CallbackHandlerMethods.html#handleLLMNewToken) event into LangChain components. When that component is invoked, any
[LLM](/docs/concepts/#llms) or [chat model](/docs/concepts/#chat-models) contained in the component calls
the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. an HTTP response.
You can also handle the [`handleLLMEnd`](https://api.js.langchain.com/interfaces/langchain_core.callbacks_base.CallbackHandlerMethods.html#handleLLMEnd) event to perform any necessary cleanup.
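The handler shape can be sketched without the library. Only the two method names mirror the real interface; the driver function below is a stand-in for a model invocation, not LangChain code:

```typescript
// Sketch of the callback pattern: a handler object implements the event
// methods it cares about, and the component calls them as tokens arrive.
interface TokenCallbacks {
  handleLLMNewToken?(token: string): void;
  handleLLMEnd?(): void;
}

const received: string[] = [];
const handler: TokenCallbacks = {
  handleLLMNewToken(token) {
    // e.g. pipe the token into an HTTP response here
    received.push(token);
  },
  handleLLMEnd() {
    // perform any necessary cleanup here
    received.push("<done>");
  },
};

// Stand-in for a model invocation that emits tokens one at a time.
function runFakeModel(tokens: string[], callbacks: TokenCallbacks): void {
  for (const token of tokens) callbacks.handleLLMNewToken?.(token);
  callbacks.handleLLMEnd?.();
}

runFakeModel(["Hello", ", ", "world"], handler);
console.log(received); // [ 'Hello', ', ', 'world', '<done>' ]
```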
You can see [this how-to section](/docs/how_to/#callbacks) for more specifics on using callbacks.
Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding.
There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/docs/integrations/retrievers/supabase-hybrid/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://api.js.langchain.com/interfaces/langchain_core.vectorstores.VectorStoreInterface.html#maxMarginalRelevanceSearch), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
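Maximal marginal relevance can be sketched as a greedy selection over cosine similarities: each step picks the document that best trades off relevance to the query against redundancy with documents already chosen. This is a simplified illustration, not the library's implementation:

```typescript
// Simplified maximal marginal relevance (MMR): greedily pick documents that
// are similar to the query but dissimilar to already-selected documents.
// `lambda` trades off relevance (1.0) against diversity (0.0).
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function mmr(query: number[], docs: number[][], k: number, lambda = 0.5): number[] {
  const selected: number[] = [];
  const candidates = docs.map((_, i) => i);
  while (selected.length < k && candidates.length > 0) {
    let best = -1, bestScore = -Infinity;
    for (const i of candidates) {
      const relevance = cosine(query, docs[i]);
      // Redundancy: maximum similarity to anything already selected.
      const redundancy = selected.length
        ? Math.max(...selected.map((j) => cosine(docs[i], docs[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) { bestScore = score; best = i; }
    }
    selected.push(best);
    candidates.splice(candidates.indexOf(best), 1);
  }
  return selected; // indices of chosen documents
}

// Two near-duplicate docs plus one distinct doc: MMR takes the most relevant
// near-duplicate first, then prefers the distinct doc over the redundant one.
const picks = mmr([0.9, 0.1], [[1, 0], [0.99, 0.01], [0.6, 0.8]], 2);
console.log(picks); // [ 1, 2 ]
```

Plain top-k similarity search would have returned both near-duplicates here; the diversity term is what swaps the second one out.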
`docs/core_docs/docs/how_to/assign.ipynb`
"\n",
":::\n",
"\n",
"An alternate way of [passing data through](/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://v02.api.js.langchain.com/classes/langchain_core.runnables.RunnablePassthrough.html#assign-2) static method takes an input value and adds the extra arguments passed to the assign function.\n",
"\n",
"This is useful in the common [LangChain Expression Language](/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step.\n",
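The assign pattern described in this file can be sketched as a plain function — a simplified stand-in for `RunnablePassthrough.assign()`, not the library implementation: existing keys of the chain state pass through unchanged, and newly computed keys are merged in.

```typescript
// Sketch of the "assign" pattern: keep the current chain state unchanged
// and add new keys computed from it.
type ChainState = Record<string, any>;

function assign(
  computed: Record<string, (state: ChainState) => any>
): (state: ChainState) => ChainState {
  return (state) => {
    const additions: ChainState = {};
    for (const [key, fn] of Object.entries(computed)) {
      additions[key] = fn(state);
    }
    // Existing values pass through; new keys are merged in.
    return { ...state, ...additions };
  };
}

const step = assign({ multiplied: (state) => state.num * 3 });
console.log(step({ num: 1 }));
// { num: 1, multiplied: 3 }
```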
`docs/core_docs/docs/how_to/binding.ipynb`
"\n",
":::\n",
"\n",
"Sometimes we want to invoke a [`Runnable`](https://v02.api.js.langchain.com/classes/langchain_core.runnables.Runnable.html) within a [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core.runnables.RunnableSequence.html) with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use the [`Runnable.bind()`](https://v02.api.js.langchain.com/classes/langchain_core.runnables.Runnable.html#bind) method to set these arguments ahead of time.\n",
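The idea behind `.bind()` is ordinary partial application: preset constant options ahead of time so later invocations only supply the runtime input. A self-contained sketch (stand-in names; not the library implementation):

```typescript
// Sketch of the "bind" pattern: preset constant options on a callable ahead
// of time, so later calls only supply the input.
type Options = Record<string, any>;

function bind<In, Out>(
  fn: (input: In, options: Options) => Out,
  preset: Options
): (input: In, options?: Options) => Out {
  // Preset options are merged with (and overridden by) call-time options.
  return (input, options = {}) => fn(input, { ...preset, ...options });
}

// A toy "model call" that reads a stop-sequence option.
const generate = (prompt: string, options: Options) =>
  `${prompt} [stop=${options.stop}]`;

const generateWithStop = bind(generate, { stop: "END" });
console.log(generateWithStop("Hi"));
// Hi [stop=END]
```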
`docs/core_docs/docs/how_to/callbacks_attach.ipynb`
"\n",
":::\n",
"\n",
"If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.withConfig()`](https://api.js.langchain.com/classes/langchain_core.runnables.Runnable.html#withConfig) method. This saves you the need to pass callbacks in each time you invoke the chain.\n",
"\n",
"Here's an example using LangChain's built-in [`ConsoleCallbackHandler`](https://api.js.langchain.com/classes/langchain_core.tracers_console.ConsoleCallbackHandler.html):"
`docs/core_docs/docs/how_to/callbacks_backgrounding.ipynb`
"\n",
"By default, LangChain.js callbacks are blocking. This means that execution will wait for the callback to either return or timeout before continuing. This is to help ensure that if you are running code in [serverless environments](https://en.wikipedia.org/wiki/Serverless_computing) such as [AWS Lambda](https://aws.amazon.com/pm/lambda/) or [Cloudflare Workers](https://workers.cloudflare.com/), these callbacks always finish before the execution context ends.\n",
"\n",
"However, this can add unnecessary latency if you are running in traditional stateful environments. If desired, you can set your callbacks to run in the background to avoid this additional latency by setting the `LANGCHAIN_CALLBACKS_BACKGROUND` environment variable to `\"true\"`. You can then import the global [`awaitAllCallbacks`](https://api.js.langchain.com/functions/langchain_core.callbacks_promises.awaitAllCallbacks.html) method to ensure all callbacks finish if necessary.\n",
"\n",
"To illustrate this, we'll create a [custom callback handler](/docs/how_to/custom_callbacks) that takes some time to resolve, and show the timing with and without `LANGCHAIN_CALLBACKS_BACKGROUND` set. Here it is without the variable set:"
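The backgrounding idea itself can be sketched without the library. Everything below is a stand-in with hypothetical names (only `awaitAllCallbacks` exists in LangChain): callbacks are fired without awaiting, their promises are collected, and a final barrier awaits them all before the execution context ends.

```typescript
// Sketch of background callbacks: instead of awaiting each callback inline,
// collect the promises and await them all at a single barrier at the end.
const pending: Promise<void>[] = [];

function fireCallback(work: () => Promise<void>): void {
  // Fire-and-forget: execution continues without waiting.
  pending.push(work());
}

async function awaitAll(): Promise<void> {
  await Promise.all(pending);
}

const log: string[] = [];
fireCallback(async () => {
  await new Promise((resolve) => setTimeout(resolve, 10));
  log.push("callback finished");
});
log.push("main finished");

await awaitAll(); // barrier: now safe to end a serverless invocation
console.log(log); // [ 'main finished', 'callback finished' ]
```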