@@ -787,14 +887,16 @@ LangChain provides a standardized interface for tool calling that is consistent
The standard interface consists of:

-- `ChatModel.bindTools()`: a method for specifying which tools are available for a model to call.
+- `ChatModel.bindTools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools).
- `AIMessage.toolCalls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.

-There are two main use cases for function/tool calling:
+The following how-to guides are good practical resources for using function/tool calling:

- [How to return structured data from an LLM](/docs/how_to/structured_output/)
- [How to use a model to call tools](/docs/how_to/tool_calling/)

+For a full list of model providers that support tool calling, [see this table](/docs/integrations/chat/).
+
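To make the interface above concrete, here is a minimal sketch in LangChain.js. It assumes `@langchain/openai` is installed and `OPENAI_API_KEY` is set; the `add` tool and the model name are made-up examples, and recent versions expose the requested calls as `tool_calls` on the returned `AIMessage`.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A toy tool; the name, description, and schema are illustrative only.
const add = tool(async ({ a, b }) => String(a + b), {
  name: "add",
  description: "Adds two numbers together.",
  schema: z.object({ a: z.number(), b: z.number() }),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// bindTools() returns a new model-like runnable with the tools attached.
const modelWithTools = model.bindTools([add]);

const aiMessage = await modelWithTools.invoke("What is 2 + 3?");

// Each tool call records which tool the model wants and with what arguments.
for (const toolCall of aiMessage.tool_calls ?? []) {
  console.log(toolCall.name, toolCall.args);
}
```

Note that the model does not execute `add` itself; it only returns the structured request, which your code (or an agent) is responsible for running.
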
### Retrieval
LangChain provides several advanced retrieval types. A full list is below, along with the following information:
"It is often useful to have a model return output that matches some specific schema. One common use-case is extracting data from arbitrary text to insert into a traditional database or use with some other downstrem system. This guide will show you a few different strategies you can use to do this.\n",
@@ -11,36 +12,33 @@ All ChatModels implement the Runnable interface, which comes with default implem
-_Streaming_ support defaults to returning an `AsyncIterator` of a single value, the final result returned by the underlying ChatModel provider. This obviously doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but it ensures that code expecting an iterator of tokens works with any of our ChatModel integrations.
-_Batch_ support defaults to calling the underlying ChatModel in parallel for each input. The concurrency can be controlled with the `maxConcurrency` key in `RunnableConfig`.
-_Map_ support defaults to calling `.invoke` across all instances of the array on which it was called.
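
For reference, a minimal sketch of the default `.stream()` and `.batch()` entry points described in the removed lines above (assuming `@langchain/openai`; the prompts, model name, and `maxConcurrency` value are arbitrary):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Streaming: integrations without native token streaming yield a
// single chunk containing the final result instead of token chunks.
for await (const chunk of await model.stream("Why is the sky blue?")) {
  process.stdout.write(String(chunk.content));
}

// Batch: one underlying call per input, run in parallel, with
// maxConcurrency capping how many requests are in flight at once.
const results = await model.batch(
  ["Tell me a joke", "Name a color", "Count to three"],
  { maxConcurrency: 2 }
);
console.log(results.map((message) => message.content));
```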
Each ChatModel integration can optionally provide native implementations to truly enable invoke, streaming or batching requests.
Additionally, some chat models support other ways of guaranteeing structure in their outputs by allowing you to pass in a defined schema.
-[Function calling and parallel function calling](/docs/how_to/tool_calling) (tool calling) are two common ones, and those capabilities allow you to use the chat model as the LLM in certain types of agents.
+[Tool calling](/docs/how_to/tool_calling) is one such capability, and it allows you to use the chat model as the LLM in certain types of agents.
Some models in LangChain have also implemented a `withStructuredOutput()` method that unifies many of these different ways of constraining output to a schema.
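
For example, a minimal sketch of `withStructuredOutput()` with a zod schema (assuming a provider that implements it, such as `@langchain/openai`; the `joke` schema and model name are made up for illustration):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// An illustrative schema the output must conform to.
const joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Returns a runnable whose output is a parsed object matching the
// schema, rather than a raw chat message.
const structuredModel = model.withStructuredOutput(joke);

const result = await structuredModel.invoke("Tell me a joke about cats");
console.log(result.setup, "-", result.punchline);
```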
The table shows, for each integration, which features have been implemented with native support. Yellow circles (🟡) indicate partial support - for example, if the model supports tool calling but not tool messages for agents.
-| Model | Invoke | Stream | Batch | Function Calling | Tool Calling | `withStructuredOutput()` |