
Commit bfebaee

all[patch]: Fix api ref urls (#6511)

* all[patch]: Fix api ref urls
* updated langchain issues
* updated more redirects
* more
* fixed langchain redirects

1 parent 3010690 commit bfebaee

File tree: 92 files changed, +679 -207 lines


docs/api_refs/vercel.json (+474 -2)

Large diff not rendered by default.
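The 474 added lines of `vercel.json` are not shown. For orientation only: given the URL pattern change visible throughout the rest of this commit (module names moving from underscore-delimited, e.g. `langchain_core_runnables`, to dot-delimited, e.g. `langchain_core.runnables`), a redirect entry in Vercel's standard `redirects` format might look like the hypothetical sketch below; the actual paths and count come from the unrendered diff:

```json
{
  "redirects": [
    {
      "source": "/classes/langchain_core_runnables.Runnable.html",
      "destination": "/classes/langchain_core.runnables.Runnable.html",
      "permanent": true
    }
  ]
}
```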

docs/core_docs/docs/concepts.mdx (+16 -16)
```diff
@@ -112,7 +112,7 @@ With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.sm

 <span data-heading-keywords="invoke,runnable"></span>

-To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) protocol.
+To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.js.langchain.com/classes/langchain_core.runnables.Runnable.html) protocol.
 Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.

 This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way.
```
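For context on the `Runnable` protocol these updated links document: every Runnable exposes the same `.invoke()`, `.batch()`, and `.stream()` methods, and Runnables compose with `.pipe()`. A minimal sketch, assuming `@langchain/core` is installed and an ESM context with top-level await (the wrapped functions are illustrative):

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Wrapping plain functions yields Runnables with the standard interface.
const greet = RunnableLambda.from((name: string) => `Hello, ${name}!`);
const shout = RunnableLambda.from((text: string) => text.toUpperCase());

// .pipe() composes Runnables into a RunnableSequence, itself a Runnable.
const chain = greet.pipe(shout);

console.log(await chain.invoke("World")); // HELLO, WORLD!
console.log(await chain.batch(["Alice", "Bob"])); // [ 'HELLO, ALICE!', 'HELLO, BOB!' ]
```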
```diff
@@ -394,14 +394,14 @@ LangChain has many different types of output parsers. This is a list of output p

 | Name | Supports Streaming | Input Type | Output Type | Description |
 | ---- | ------------------ | ------------------------- | --------------------------------- | ----------- |
-| [JSON](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<T>` | Returns a JSON object as specified. You can specify a Zod schema and it will return JSON for that model. |
-| [XML](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.XMLOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<XMLResult>` | Returns an object of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
-| [CSV](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.CommaSeparatedListOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Array[string]` | Returns an array of comma-separated values. |
-| [Structured](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html) | | `string` \| `BaseMessage` | `Promise<TypeOf<T>>` | Parse structured JSON from an LLM response. |
-| [HTTP](https://v02.api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) | ✅ | `string` | `Promise<Uint8Array>` | Parse an LLM response to then send over HTTP(s). Useful when invoking the LLM on the server/edge, and then sending the content/stream back to the client. |
-| [Bytes](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.BytesOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<Uint8Array>` | Parse an LLM response to then send over HTTP(s). Useful for streaming LLM responses from the server/edge to the client. |
-| [Datetime](https://v02.api.js.langchain.com/classes/langchain_output_parsers.DatetimeOutputParser.html) | | `string` | `Promise<Date>` | Parses response into a `Date`. |
-| [Regex](https://v02.api.js.langchain.com/classes/langchain_output_parsers.RegexParser.html) | | `string` | `Promise<Record<string, string>>` | Parses the given text using the regex pattern and returns an object with the parsed output. |
+| [JSON](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.JsonOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<T>` | Returns a JSON object as specified. You can specify a Zod schema and it will return JSON for that model. |
+| [XML](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.XMLOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<XMLResult>` | Returns an object of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
+| [CSV](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.CommaSeparatedListOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Array[string]` | Returns an array of comma-separated values. |
+| [Structured](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.StructuredOutputParser.html) | | `string` \| `BaseMessage` | `Promise<TypeOf<T>>` | Parse structured JSON from an LLM response. |
+| [HTTP](https://v02.api.js.langchain.com/classes/langchain.output_parsers.HttpResponseOutputParser.html) | ✅ | `string` | `Promise<Uint8Array>` | Parse an LLM response to then send over HTTP(s). Useful when invoking the LLM on the server/edge, and then sending the content/stream back to the client. |
+| [Bytes](https://v02.api.js.langchain.com/classes/langchain_core.output_parsers.BytesOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<Uint8Array>` | Parse an LLM response to then send over HTTP(s). Useful for streaming LLM responses from the server/edge to the client. |
+| [Datetime](https://v02.api.js.langchain.com/classes/langchain.output_parsers.DatetimeOutputParser.html) | | `string` | `Promise<Date>` | Parses response into a `Date`. |
+| [Regex](https://v02.api.js.langchain.com/classes/langchain.output_parsers.RegexParser.html) | | `string` | `Promise<Record<string, string>>` | Parses the given text using the regex pattern and returns an object with the parsed output. |

 For specifics on how to use output parsers, see the [relevant how-to guides here](/docs/how_to/#output-parsers).
```
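Since every parser in the table is itself a `Runnable`, it can be invoked directly or piped after a model. A minimal sketch with the CSV parser from the table, assuming `@langchain/core`:

```typescript
import { CommaSeparatedListOutputParser } from "@langchain/core/output_parsers";

// Input: string | BaseMessage; output: string[] (per the table above).
const parser = new CommaSeparatedListOutputParser();

const colors = await parser.invoke("red, green, blue");
console.log(colors); // [ 'red', 'green', 'blue' ]
```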

```diff
@@ -517,7 +517,7 @@ For specifics on how to use retrievers, see the [relevant how-to guides here](/d

 For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/), having some sort of key-value (KV) storage is helpful.

-LangChain includes a [`BaseStore`](https://api.js.langchain.com/classes/langchain_core_stores.BaseStore.html) interface,
+LangChain includes a [`BaseStore`](https://api.js.langchain.com/classes/langchain_core.stores.BaseStore.html) interface,
 which allows for storage of arbitrary data. However, LangChain components that require KV-storage accept a
 more specific `BaseStore<string, Uint8Array>` instance that stores binary data (referred to as a `ByteStore`), and internally take care of
 encoding and decoding data for their specific needs.
```
```diff
@@ -526,7 +526,7 @@ This means that as a user, you only need to think about one type of store rather

 #### Interface

-All [`BaseStores`](https://api.js.langchain.com/classes/langchain_core_stores.BaseStore.html) support the following interface. Note that the interface allows
+All [`BaseStores`](https://api.js.langchain.com/classes/langchain_core.stores.BaseStore.html) support the following interface. Note that the interface allows
 for modifying **multiple** key-value pairs at once:

 - `mget(keys: string[]): Promise<(undefined | Uint8Array)[]>`: get the contents of multiple keys, returning `undefined` if the key does not exist
```
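A sketch of that batch-oriented interface against the in-memory implementation; this assumes `InMemoryStore` is importable from `@langchain/core/stores` and stores `Uint8Array` values, matching the `ByteStore` shape described above:

```typescript
import { InMemoryStore } from "@langchain/core/stores";

// A BaseStore<string, Uint8Array>, i.e. a ByteStore.
const store = new InMemoryStore<Uint8Array>();

// The interface is batch-first: mset/mget take arrays of pairs/keys.
await store.mset([["doc:1", new TextEncoder().encode("hello")]]);
const [hit, miss] = await store.mget(["doc:1", "doc:2"]);

console.log(new TextDecoder().decode(hit!)); // hello
console.log(miss); // undefined -- missing keys resolve to undefined, not an error
```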
```diff
@@ -723,7 +723,7 @@ You can subscribe to these events by using the `callbacks` argument available th

 #### Callback handlers

-`CallbackHandlers` are objects that implement the [`CallbackHandler`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html) interface, which has a method for each event that can be subscribed to.
+`CallbackHandlers` are objects that implement the [`CallbackHandler`](https://api.js.langchain.com/interfaces/langchain_core.callbacks_base.CallbackHandlerMethods.html) interface, which has a method for each event that can be subscribed to.
 The `CallbackManager` will call the appropriate method on each handler when the event is triggered.

 #### Passing callbacks
```
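A sketch of such a handler: `CallbackHandlerMethods` declares every method as optional, so a plain object implementing only the events of interest suffices (the timing logic here is an illustrative choice, not part of this commit):

```typescript
import type { CallbackHandlerMethods } from "@langchain/core/callbacks/base";

// Only subscribed events need methods; the CallbackManager skips the rest.
const timingHandler: CallbackHandlerMethods = {
  handleChainStart() {
    console.time("chain");
  },
  handleChainEnd() {
    console.timeEnd("chain");
  },
};

// Passed at invocation time: chain.invoke(input, { callbacks: [timingHandler] })
```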
```diff
@@ -793,7 +793,7 @@ For models (or other components) that don't support streaming natively, this ite
 you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode
 without the need to provide additional config.

-The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.js.langchain.com/classes/langchain_core_messages.AIMessageChunk.html).
+The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.js.langchain.com/classes/langchain_core.messages.AIMessageChunk.html).
 Because this method is part of [LangChain Expression Language](/docs/concepts/#langchain-expression-language),
 you can handle formatting differences from different outputs using an [output parser](/docs/concepts/#output-parsers) to transform
 each yielded chunk.
```
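A sketch of that pattern for chat models (assuming `@langchain/openai` is installed and `OPENAI_API_KEY` is set; the model name is an arbitrary choice):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Each yielded chunk is an AIMessageChunk; chunks from one run can be
// merged with .concat() to rebuild the full message.
for await (const chunk of await model.stream("Why is the sky blue?")) {
  process.stdout.write(chunk.content as string);
}
```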
```diff
@@ -849,10 +849,10 @@ or [this guide](/docs/how_to/callbacks_custom_events) for how to stream custom e
 #### Callbacks

 The lowest-level way to stream outputs from LLMs in LangChain is via the [callbacks](/docs/concepts/#callbacks) system. You can pass a
-callback handler that handles the [`handleLLMNewToken`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleLLMNewToken) event into LangChain components. When that component is invoked, any
+callback handler that handles the [`handleLLMNewToken`](https://api.js.langchain.com/interfaces/langchain_core.callbacks_base.CallbackHandlerMethods.html#handleLLMNewToken) event into LangChain components. When that component is invoked, any
 [LLM](/docs/concepts/#llms) or [chat model](/docs/concepts/#chat-models) contained in the component calls
 the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. an HTTP response.
-You can also handle the [`handleLLMEnd`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleLLMEnd) event to perform any necessary cleanup.
+You can also handle the [`handleLLMEnd`](https://api.js.langchain.com/interfaces/langchain_core.callbacks_base.CallbackHandlerMethods.html#handleLLMEnd) event to perform any necessary cleanup.

 You can see [this how-to section](/docs/how_to/#callbacks) for more specifics on using callbacks.
```
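A sketch of this lowest-level approach, reusing the chat model from the previous snippet; the inline handler object is illustrative:

```typescript
// Stream tokens through callbacks rather than .stream().
await model.invoke("Tell me a joke", {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token); // or pipe into an HTTP response
      },
      handleLLMEnd() {
        console.log("\n[generation complete]"); // cleanup hook
      },
    },
  ],
});
```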

```diff
@@ -1242,7 +1242,7 @@ Two approaches can address this tension: (1) [Multi Vector](/docs/how_to/multi_v

 Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding.

-There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/docs/integrations/retrievers/supabase-hybrid/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://api.js.langchain.com/interfaces/langchain_core_vectorstores.VectorStoreInterface.html#maxMarginalRelevanceSearch), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
+There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/docs/integrations/retrievers/supabase-hybrid/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://api.js.langchain.com/interfaces/langchain_core.vectorstores.VectorStoreInterface.html#maxMarginalRelevanceSearch), which attempts to diversify the results of a search to avoid returning similar and redundant documents.

 | Name | When to use | Description |
 | ---- | ----------- | ----------- |
```
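A sketch of maximal marginal relevance search. This assumes a store that implements `maxMarginalRelevanceSearch`; `MemoryVectorStore` plus `OpenAIEmbeddings` (requiring `OPENAI_API_KEY`) are illustrative choices:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain is a framework", "LangChain.js targets Node and edge runtimes", "Cats are mammals"],
  [{}, {}, {}],
  new OpenAIEmbeddings()
);

// Fetch fetchK candidates by similarity, then keep the k results that best
// trade off relevance to the query against diversity among themselves.
const results = await vectorStore.maxMarginalRelevanceSearch("What is LangChain?", {
  k: 2,
  fetchK: 3,
});
console.log(results.map((doc) => doc.pageContent));
```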

docs/core_docs/docs/how_to/assign.ipynb (+1 -1)
```diff
@@ -27,7 +27,7 @@
 "\n",
 ":::\n",
 "\n",
-"An alternate way of [passing data through](/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html#assign-2) static method takes an input value and adds the extra arguments passed to the assign function.\n",
+"An alternate way of [passing data through](/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://v02.api.js.langchain.com/classes/langchain_core.runnables.RunnablePassthrough.html#assign-2) static method takes an input value and adds the extra arguments passed to the assign function.\n",
 "\n",
 "This is useful in the common [LangChain Expression Language](/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step.\n",
 "\n",
```

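A sketch of the `assign()` pattern this page documents, assuming `@langchain/core`:

```typescript
import { RunnablePassthrough } from "@langchain/core/runnables";

// Keep the incoming keys and add a new one computed from the input.
const chain = RunnablePassthrough.assign({
  doubled: (input: any) => input.num * 2,
});

console.log(await chain.invoke({ num: 3 })); // { num: 3, doubled: 6 }
```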
docs/core_docs/docs/how_to/binding.ipynb (+1 -1)
```diff
@@ -27,7 +27,7 @@
 "\n",
 ":::\n",
 "\n",
-"Sometimes we want to invoke a [`Runnable`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) within a [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use the [`Runnable.bind()`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#bind) method to set these arguments ahead of time.\n",
+"Sometimes we want to invoke a [`Runnable`](https://v02.api.js.langchain.com/classes/langchain_core.runnables.Runnable.html) within a [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core.runnables.RunnableSequence.html) with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use the [`Runnable.bind()`](https://v02.api.js.langchain.com/classes/langchain_core.runnables.Runnable.html#bind) method to set these arguments ahead of time.\n",
 "\n",
 "## Binding stop sequences\n",
 "\n",
```

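A sketch of `.bind()`, reusing the chat model from the streaming snippet above; the stop sequence is arbitrary:

```typescript
// Returns a new Runnable with stop: ["END"] attached to every call,
// without threading the argument through user input or a preceding step.
const modelWithStop = model.bind({ stop: ["END"] });

const result = await modelWithStop.invoke("Say hello, then write END");
console.log(result.content); // generation halts at the stop sequence
```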
docs/core_docs/docs/how_to/callbacks_attach.ipynb (+2 -2)
```diff
@@ -16,9 +16,9 @@
 "\n",
 ":::\n",
 "\n",
-"If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.withConfig()`](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#withConfig) method. This saves you the need to pass callbacks in each time you invoke the chain.\n",
+"If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.withConfig()`](https://api.js.langchain.com/classes/langchain_core.runnables.Runnable.html#withConfig) method. This saves you the need to pass callbacks in each time you invoke the chain.\n",
 "\n",
-"Here's an example using LangChain's built-in [`ConsoleCallbackHandler`](https://api.js.langchain.com/classes/langchain_core_tracers_console.ConsoleCallbackHandler.html):"
+"Here's an example using LangChain's built-in [`ConsoleCallbackHandler`](https://api.js.langchain.com/classes/langchain_core.tracers_console.ConsoleCallbackHandler.html):"
 ]
 },
 {
```
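A sketch of `.withConfig()` with the built-in console handler (assuming `@langchain/core`; `chain` could be any Runnable, e.g. the one composed in the first snippet above):

```typescript
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";

// Attach the handler once; every later invoke/stream/batch reuses it
// without passing callbacks on each call.
const tracedChain = chain.withConfig({
  callbacks: [new ConsoleCallbackHandler()],
});

await tracedChain.invoke("World"); // logs each step's start/end to the console
```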

docs/core_docs/docs/how_to/callbacks_backgrounding.ipynb (+1 -1)
```diff
@@ -16,7 +16,7 @@
 "\n",
 "By default, LangChain.js callbacks are blocking. This means that execution will wait for the callback to either return or timeout before continuing. This is to help ensure that if you are running code in [serverless environments](https://en.wikipedia.org/wiki/Serverless_computing) such as [AWS Lambda](https://aws.amazon.com/pm/lambda/) or [Cloudflare Workers](https://workers.cloudflare.com/), these callbacks always finish before the execution context ends.\n",
 "\n",
-"However, this can add unnecessary latency if you are running in traditional stateful environments. If desired, you can set your callbacks to run in the background to avoid this additional latency by setting the `LANGCHAIN_CALLBACKS_BACKGROUND` environment variable to `\"true\"`. You can then import the global [`awaitAllCallbacks`](https://api.js.langchain.com/functions/langchain_core_callbacks_promises.awaitAllCallbacks.html) method to ensure all callbacks finish if necessary.\n",
+"However, this can add unnecessary latency if you are running in traditional stateful environments. If desired, you can set your callbacks to run in the background to avoid this additional latency by setting the `LANGCHAIN_CALLBACKS_BACKGROUND` environment variable to `\"true\"`. You can then import the global [`awaitAllCallbacks`](https://api.js.langchain.com/functions/langchain_core.callbacks_promises.awaitAllCallbacks.html) method to ensure all callbacks finish if necessary.\n",
 "\n",
 "To illustrate this, we'll create a [custom callback handler](/docs/how_to/custom_callbacks) that takes some time to resolve, and show the timing with and without `LANGCHAIN_CALLBACKS_BACKGROUND` set. Here it is without the variable set:"
 ]
```
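A sketch of the flow described above (assuming `@langchain/core`; the environment variable is set outside the snippet):

```typescript
import { awaitAllCallbacks } from "@langchain/core/callbacks/promises";

// With LANGCHAIN_CALLBACKS_BACKGROUND="true", invocations no longer block on
// callback completion. Before a serverless context exits, flush what's pending:
await awaitAllCallbacks();
```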
