Commit 0a5988b

docs[minor]: LangGraph Migration Guide (#5487)
* [Docs] LangGraph Migration Guide
* fixup
* link
* Update
* Update and polish
* Add to how to index page

Co-authored-by: jacoblee93 <[email protected]>
1 parent ea22597 commit 0a5988b

File tree

7 files changed: +1715 −64 lines changed

deno.json

+4-2
@@ -3,7 +3,7 @@
 "langchain/": "npm:/langchain/",
 "@faker-js/faker": "npm:@faker-js/faker",
 "@langchain/anthropic": "npm:@langchain/anthropic",
-"@langchain/community/": "npm:/@langchain/community/",
+"@langchain/community/": "npm:/@langchain/community@0.2.2/",
 "@langchain/openai": "npm:@langchain/openai",
 "@langchain/cohere": "npm:@langchain/cohere",
 "@langchain/textsplitters": "npm:@langchain/textsplitters",
@@ -12,9 +12,11 @@
 "@langchain/core/": "npm:/@langchain/core/",
 "@langchain/pinecone": "npm:@langchain/pinecone",
 "@langchain/google-common": "npm:@langchain/google-common",
+"@langchain/langgraph": "npm:/@langchain/[email protected]",
+"@langchain/langgraph/": "npm:/@langchain/[email protected]/",
 "@microsoft/fetch-event-source": "npm:@microsoft/fetch-event-source",
 "@pinecone-database/pinecone": "npm:@pinecone-database/pinecone",
-"cheerio": "npm:/cheerio",
+"cheerio": "npm:cheerio",
 "chromadb": "npm:/chromadb",
 "dotenv/": "npm:/dotenv/",
 "zod": "npm:/zod",

docs/core_docs/.gitignore

+2
@@ -107,6 +107,8 @@ docs/how_to/output_parser_fixing.md
 docs/how_to/output_parser_fixing.mdx
 docs/how_to/multiple_queries.md
 docs/how_to/multiple_queries.mdx
+docs/how_to/migrate_agent.md
+docs/how_to/migrate_agent.mdx
 docs/how_to/logprobs.md
 docs/how_to/logprobs.mdx
 docs/how_to/lcel_cheatsheet.md

docs/core_docs/docs/how_to/agent_executor.ipynb

+62-59
@@ -100,7 +100,7 @@
 {
 "data": {
 "text/plain": [
-"\u001b[32m`[{\"title\":\"Weather in San Francisco\",\"url\":\"https://www.weatherapi.com/\",\"content\":\"{'location': {'n`\u001b[39m... 1111 more characters"
+"\u001b[32m`[{\"title\":\"Weather in San Francisco\",\"url\":\"https://www.weatherapi.com/\",\"content\":\"{'location': {'n`\u001b[39m... 1347 more characters"
 ]
 },
 "execution_count": 1,
@@ -109,6 +109,7 @@
 }
 ],
 "source": [
+"import \"cheerio\"; // This is required in notebooks to use the `CheerioWebBaseLoader`\n",
 "import { TavilySearchResults } from \"@langchain/community/tools/tavily_search\"\n",
 "\n",
 "const search = new TavilySearchResults({\n",
@@ -152,24 +153,24 @@
 }
 ],
 "source": [
-"import \"cheerio\"; // This is required in notebooks to use the `CheerioWebBaseLoader`\n",
-"import { CheerioWebBaseLoader } from \"langchain/document_loaders/web/cheerio\";\n",
+"import { CheerioWebBaseLoader } from \"@langchain/community/document_loaders/web/cheerio\";\n",
 "import { MemoryVectorStore } from \"langchain/vectorstores/memory\";\n",
 "import { OpenAIEmbeddings } from \"@langchain/openai\";\n",
 "import { RecursiveCharacterTextSplitter } from \"@langchain/textsplitters\";\n",
 "\n",
-"const loader = new CheerioWebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n",
-"const docs = await loader.load()\n",
-"const documents = await new RecursiveCharacterTextSplitter(\n",
-" {\n",
-" chunkSize: 1000,\n",
-" chunkOverlap: 200\n",
-" }\n",
-").splitDocuments(docs)\n",
-"const vectorStore = await MemoryVectorStore.fromDocuments(documents, new OpenAIEmbeddings())\n",
+"const loader = new CheerioWebBaseLoader(\"https://docs.smith.langchain.com/overview\");\n",
+"const docs = await loader.load();\n",
+"const splitter = new RecursiveCharacterTextSplitter(\n",
+" {\n",
+" chunkSize: 1000,\n",
+" chunkOverlap: 200\n",
+" }\n",
+");\n",
+"const documents = await splitter.splitDocuments(docs);\n",
+"const vectorStore = await MemoryVectorStore.fromDocuments(documents, new OpenAIEmbeddings());\n",
 "const retriever = vectorStore.asRetriever();\n",
 "\n",
-"(await retriever.invoke(\"how to upload a dataset\"))[0]"
+"(await retriever.invoke(\"how to upload a dataset\"))[0];"
 ]
 },
 {
@@ -258,6 +259,9 @@
 }
 ],
 "source": [
+"import { ChatOpenAI } from \"@langchain/openai\";\n",
+"const model = new ChatOpenAI({ model: \"gpt-4\", temperature: 0 })\n",
+"\n",
 "import { HumanMessage } from \"@langchain/core/messages\";\n",
 "\n",
 "const response = await model.invoke([new HumanMessage(\"hi!\")]);\n",
@@ -336,9 +340,9 @@
 " {\n",
 " \"name\": \"tavily_search_results_json\",\n",
 " \"args\": {\n",
-" \"input\": \"weather in San Francisco\"\n",
+" \"input\": \"current weather in San Francisco\"\n",
 " },\n",
-" \"id\": \"call_y0nn6mbVCV5paX6RrqqFUqdC\"\n",
+" \"id\": \"call_VcSjZAZkEOx9lcHNZNXAjXkm\"\n",
 " }\n",
 "]\n"
 ]
@@ -370,11 +374,7 @@
 "\n",
 "Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts/#agent_types/).\n",
 "\n",
-"We can first choose the prompt we want to use to guide the agent.\n",
-"\n",
-"If you want to see the contents of this prompt in the hub, you can go to:\n",
-"\n",
-"[https://smith.langchain.com/hub/hwchase17/openai-functions-agent](https://smith.langchain.com/hub/hwchase17/openai-functions-agent)"
+"We can first choose the prompt we want to use to guide the agent:"
 ]
 },
 {
@@ -394,19 +394,18 @@
 " prompt: PromptTemplate {\n",
 " lc_serializable: true,\n",
 " lc_kwargs: {\n",
-" template: \"You are a helpful assistant\",\n",
 " inputVariables: [],\n",
 " templateFormat: \"f-string\",\n",
-" partialVariables: {}\n",
+" template: \"You are a helpful assistant\"\n",
 " },\n",
 " lc_runnable: true,\n",
 " name: undefined,\n",
 " lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
 " inputVariables: [],\n",
 " outputParser: undefined,\n",
-" partialVariables: {},\n",
-" template: \"You are a helpful assistant\",\n",
+" partialVariables: undefined,\n",
 " templateFormat: \"f-string\",\n",
+" template: \"You are a helpful assistant\",\n",
 " validateTemplate: true\n",
 " }\n",
 " },\n",
@@ -418,27 +417,26 @@
 " prompt: PromptTemplate {\n",
 " lc_serializable: true,\n",
 " lc_kwargs: {\n",
-" template: \"You are a helpful assistant\",\n",
 " inputVariables: [],\n",
 " templateFormat: \"f-string\",\n",
-" partialVariables: {}\n",
+" template: \"You are a helpful assistant\"\n",
 " },\n",
 " lc_runnable: true,\n",
 " name: undefined,\n",
 " lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
 " inputVariables: [],\n",
 " outputParser: undefined,\n",
-" partialVariables: {},\n",
-" template: \"You are a helpful assistant\",\n",
+" partialVariables: undefined,\n",
 " templateFormat: \"f-string\",\n",
+" template: \"You are a helpful assistant\",\n",
 " validateTemplate: true\n",
 " },\n",
 " messageClass: undefined,\n",
 " chatMessageClass: undefined\n",
 " },\n",
 " MessagesPlaceholder {\n",
 " lc_serializable: true,\n",
-" lc_kwargs: { optional: true, variableName: \"chat_history\" },\n",
+" lc_kwargs: { variableName: \"chat_history\", optional: true },\n",
 " lc_runnable: true,\n",
 " name: undefined,\n",
 " lc_namespace: [ \"langchain_core\", \"prompts\", \"chat\" ],\n",
@@ -451,19 +449,18 @@
 " prompt: PromptTemplate {\n",
 " lc_serializable: true,\n",
 " lc_kwargs: {\n",
-" template: \"{input}\",\n",
 " inputVariables: [Array],\n",
 " templateFormat: \"f-string\",\n",
-" partialVariables: {}\n",
+" template: \"{input}\"\n",
 " },\n",
 " lc_runnable: true,\n",
 " name: undefined,\n",
 " lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
 " inputVariables: [ \"input\" ],\n",
 " outputParser: undefined,\n",
-" partialVariables: {},\n",
-" template: \"{input}\",\n",
+" partialVariables: undefined,\n",
 " templateFormat: \"f-string\",\n",
+" template: \"{input}\",\n",
 " validateTemplate: true\n",
 " }\n",
 " },\n",
@@ -475,43 +472,45 @@
 " prompt: PromptTemplate {\n",
 " lc_serializable: true,\n",
 " lc_kwargs: {\n",
-" template: \"{input}\",\n",
 " inputVariables: [ \"input\" ],\n",
 " templateFormat: \"f-string\",\n",
-" partialVariables: {}\n",
+" template: \"{input}\"\n",
 " },\n",
 " lc_runnable: true,\n",
 " name: undefined,\n",
 " lc_namespace: [ \"langchain_core\", \"prompts\", \"prompt\" ],\n",
 " inputVariables: [ \"input\" ],\n",
 " outputParser: undefined,\n",
-" partialVariables: {},\n",
-" template: \"{input}\",\n",
+" partialVariables: undefined,\n",
 " templateFormat: \"f-string\",\n",
+" template: \"{input}\",\n",
 " validateTemplate: true\n",
 " },\n",
 " messageClass: undefined,\n",
 " chatMessageClass: undefined\n",
 " },\n",
 " MessagesPlaceholder {\n",
 " lc_serializable: true,\n",
-" lc_kwargs: { optional: false, variableName: \"agent_scratchpad\" },\n",
+" lc_kwargs: { variableName: \"agent_scratchpad\", optional: true },\n",
 " lc_runnable: true,\n",
 " name: undefined,\n",
 " lc_namespace: [ \"langchain_core\", \"prompts\", \"chat\" ],\n",
 " variableName: \"agent_scratchpad\",\n",
-" optional: false\n",
+" optional: true\n",
 " }\n",
 "]\n"
 ]
 }
 ],
 "source": [
 "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
-"import { pull } from \"langchain/hub\";\n",
 "\n",
-"// Get the prompt to use - you can modify this!\n",
-"const prompt = await pull<ChatPromptTemplate>(\"hwchase17/openai-functions-agent\");\n",
+"const prompt = ChatPromptTemplate.fromMessages([\n",
+" [\"system\", \"You are a helpful assistant\"],\n",
+" [\"placeholder\", \"{chat_history}\"],\n",
+" [\"human\", \"{input}\"],\n",
+" [\"placeholder\", \"{agent_scratchpad}\"],\n",
+"]);\n",
 "\n",
 "console.log(prompt.promptMessages);"
 ]
@@ -617,7 +616,9 @@
 "text/plain": [
 "{\n",
 " input: \u001b[32m\"how can langsmith help with testing?\"\u001b[39m,\n",
-" output: \u001b[32m\"LangSmith can help with testing by providing a platform for building production-grade LLM applicatio\"\u001b[39m... 880 more characters\n",
+" output: \u001b[32m\"LangSmith can be a valuable tool for testing in several ways:\\n\"\u001b[39m +\n",
+" \u001b[32m\"\\n\"\u001b[39m +\n",
+" \u001b[32m\"1. **Logging Traces**: LangSmith prov\"\u001b[39m... 960 more characters\n",
 "}"
 ]
 },
@@ -651,7 +652,7 @@
 "text/plain": [
 "{\n",
 " input: \u001b[32m\"whats the weather in sf?\"\u001b[39m,\n",
-" output: \u001b[32m\"The current weather in San Francisco is partly cloudy with a temperature of 64.0°F (17.8°C). The win\"\u001b[39m... 112 more characters\n",
+" output: \u001b[32m\"The current weather in San Francisco, California is partly cloudy with a temperature of 12.2°C (54.0\"\u001b[39m... 176 more characters\n",
 "}"
 ]
 },
@@ -753,7 +754,7 @@
 " }\n",
 " ],\n",
 " input: \u001b[32m\"what's my name?\"\u001b[39m,\n",
-" output: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m\n",
+" output: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m\n",
 "}"
 ]
 },
@@ -785,8 +786,8 @@
 "\n",
 "Because we have multiple inputs, we need to specify two things:\n",
 "\n",
-"- `input_messages_key`: The input key to use to add to the conversation history.\n",
-"- `history_messages_key`: The key to add the loaded messages into.\n",
+"- `inputMessagesKey`: The input key to use to add to the conversation history.\n",
+"- `historyMessagesKey`: The key to add the loaded messages into.\n",
 "\n",
 "For more information on how to use this, see [this guide](/docs/how_to/message_history). "
 ]
@@ -819,22 +820,22 @@
 " AIMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
 " lc_kwargs: {\n",
-" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
+" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
 " tool_calls: [],\n",
 " invalid_tool_calls: [],\n",
 " additional_kwargs: {},\n",
 " response_metadata: {}\n",
 " },\n",
 " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
-" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
+" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
 " name: \u001b[90mundefined\u001b[39m,\n",
 " additional_kwargs: {},\n",
 " response_metadata: {},\n",
 " tool_calls: [],\n",
 " invalid_tool_calls: []\n",
 " }\n",
 " ],\n",
-" output: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m\n",
+" output: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m\n",
 "}"
 ]
 },
@@ -898,14 +899,14 @@
 " AIMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
 " lc_kwargs: {\n",
-" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
+" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
 " tool_calls: [],\n",
 " invalid_tool_calls: [],\n",
 " additional_kwargs: {},\n",
 " response_metadata: {}\n",
 " },\n",
 " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
-" content: \u001b[32m\"Hello Bob! How can I assist you today?\"\u001b[39m,\n",
+" content: \u001b[32m\"Hello, Bob! How can I assist you today?\"\u001b[39m,\n",
 " name: \u001b[90mundefined\u001b[39m,\n",
 " additional_kwargs: {},\n",
 " response_metadata: {},\n",
@@ -928,22 +929,22 @@
 " AIMessage {\n",
 " lc_serializable: \u001b[33mtrue\u001b[39m,\n",
 " lc_kwargs: {\n",
-" content: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m,\n",
+" content: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m,\n",
 " tool_calls: [],\n",
 " invalid_tool_calls: [],\n",
 " additional_kwargs: {},\n",
 " response_metadata: {}\n",
 " },\n",
 " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
-" content: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m,\n",
+" content: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m,\n",
 " name: \u001b[90mundefined\u001b[39m,\n",
 " additional_kwargs: {},\n",
 " response_metadata: {},\n",
 " tool_calls: [],\n",
 " invalid_tool_calls: []\n",
 " }\n",
 " ],\n",
-" output: \u001b[32m\"Your name is Bob! How can I help you, Bob?\"\u001b[39m\n",
+" output: \u001b[32m\"Your name is Bob. How can I assist you further?\"\u001b[39m\n",
 "}"
 ]
 },
@@ -954,8 +955,8 @@
 ],
 "source": [
 "await agentWithChatHistory.invoke(\n",
-" { input: \"what's my name?\" },\n",
-" { configurable: { sessionId: \"<foo>\" }},\n",
+" { input: \"what's my name?\" },\n",
+" { configurable: { sessionId: \"<foo>\" }},\n",
 ")"
 ]
 },
@@ -972,12 +973,14 @@
 "id": "c029798f",
 "metadata": {},
 "source": [
-"## Conclusion\n",
+"## Next steps\n",
 "\n",
 "That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
 "\n",
 ":::{.callout-important}\n",
-"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
+"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraph).\n",
+"\n",
+"You can also see [this guide to help migrate to LangGraph](/docs/how_to/migrate_agent).\n",
 ":::"
 ]
 }
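A key change in the notebook diff above is that the agent prompt is now constructed inline with `ChatPromptTemplate.fromMessages` instead of being pulled from the LangChain Hub, with `"placeholder"` entries for `{chat_history}` and `{agent_scratchpad}`. As a rough intuition for what those entries do, here is a plain-TypeScript sketch with no LangChain dependency: `renderPrompt` and `Msg` are hypothetical names for illustration, not the actual LangChain API. A `"placeholder"` entry expands to a (possibly empty) list of prior messages, while literal entries get simple `{var}` substitution.

```typescript
// Hypothetical sketch of prompt-template expansion; not the LangChain API.
type Msg = { role: string; content: string };
type TemplateEntry = [role: string, content: string];

function renderPrompt(
  template: TemplateEntry[],
  values: Record<string, string | Msg[]>
): Msg[] {
  const out: Msg[] = [];
  for (const [role, content] of template) {
    if (role === "placeholder") {
      // "{chat_history}" -> look up the variable and splice in zero or more messages
      const key = content.slice(1, -1);
      const msgs = (values[key] as Msg[] | undefined) ?? [];
      out.push(...msgs);
    } else {
      // literal message: substitute simple {var} slots
      const rendered = content.replace(/\{(\w+)\}/g, (_, k) =>
        String(values[k] ?? "")
      );
      out.push({ role, content: rendered });
    }
  }
  return out;
}

// The same shape as the inlined prompt in the diff above.
const template: TemplateEntry[] = [
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
];

const messages = renderPrompt(template, {
  input: "hi! I'm bob",
  chat_history: [],
});
console.log(messages);
// With empty history and scratchpad, only the system and human messages remain.
```

This is why an empty `chat_history` simply disappears from the rendered message list rather than producing an empty message, which is the behavior the `optional: true` placeholders in the prompt dump above reflect.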

docs/core_docs/docs/how_to/index.mdx

+1
@@ -163,6 +163,7 @@ For in depth how-to guides for agents, please check out [LangGraph](https://lang
 :::

 - [How to: use legacy LangChain Agents (AgentExecutor)](/docs/how_to/agent_executor)
+- [How to: migrate from legacy LangChain agents to LangGraph](/docs/how_to/migrate_agent)

 ### Callbacks