
Commit 387fe31

Jacob/docs (#5807)

* Add explanation to QA with sources page
* Fix anchor tag
* Standardize prereq format

1 parent: be5739e

File tree

10 files changed: +119 −55 lines


docs/core_docs/docs/concepts.mdx (+1 −1)

@@ -691,7 +691,7 @@ You can check out [this guide](/docs/how_to/streaming/#using-stream) for more de
 
 #### `.streamEvents()`
 
-<span data-heading-keywords="astream_events,stream_events,stream events"></span>
+<span data-heading-keywords="streamEvents,stream events"></span>
 
 While the `.stream()` method is intuitive, it can only return the final generated value of your chain. This is fine for single LLM calls,
 but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of

docs/core_docs/docs/how_to/qa_sources.ipynb (+32 −18)

@@ -69,12 +69,12 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import \"cheerio\";\n",
-    "import { CheerioWebBaseLoader } from \"langchain/document_loaders/web/cheerio\";\n",
+    "import { CheerioWebBaseLoader } from \"@langchain/community/document_loaders/web/cheerio\";\n",
     "import { RecursiveCharacterTextSplitter } from \"langchain/text_splitter\";\n",
     "import { MemoryVectorStore } from \"langchain/vectorstores/memory\"\n",
     "import { OpenAIEmbeddings, ChatOpenAI } from \"@langchain/openai\";\n",
@@ -119,7 +119,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": 2,
    "metadata": {},
    "outputs": [
    {
@@ -139,16 +139,16 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 11,
+   "execution_count": 3,
    "metadata": {},
    "outputs": [
    {
     "data": {
      "text/plain": [
-      "\u001b[32m\"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I\"\u001b[39m... 208 more characters"
+      "\u001b[32m\"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. T\"\u001b[39m... 287 more characters"
      ]
     },
-    "execution_count": 11,
+    "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
@@ -213,7 +213,7 @@
    "    }\n",
    "  }\n",
    "  ],\n",
-   "  answer: \u001b[32m\"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I\"\u001b[39m... 256 more characters\n",
+   "  answer: \u001b[32m\"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps fo\"\u001b[39m... 232 more characters\n",
    "}"
   ]
  },
@@ -223,20 +223,34 @@
   }
  ],
  "source": [
-  "import { RunnableMap, RunnablePassthrough, RunnableSequence } from \"@langchain/core/runnables\";\n",
+  "import {\n",
+  "  RunnableMap,\n",
+  "  RunnablePassthrough,\n",
+  "  RunnableSequence\n",
+  "} from \"@langchain/core/runnables\";\n",
  "import { formatDocumentsAsString } from \"langchain/util/document\";\n",
  "\n",
-  "const ragChainFromDocs = RunnableSequence.from([\n",
-  "  RunnablePassthrough.assign({ context: (input) => formatDocumentsAsString(input.context) }),\n",
-  "  prompt,\n",
-  "  llm,\n",
-  "  new StringOutputParser()\n",
-  "]);\n",
-  "\n",
-  "let ragChainWithSource = new RunnableMap({ steps: { context: retriever, question: new RunnablePassthrough() }})\n",
-  "ragChainWithSource = ragChainWithSource.assign({ answer: ragChainFromDocs });\n",
+  "const ragChainWithSources = RunnableMap.from({\n",
+  "  // Return raw documents here for now since we want to return them at\n",
+  "  // the end - we'll format in the next step of the chain\n",
+  "  context: retriever,\n",
+  "  question: new RunnablePassthrough(),\n",
+  "}).assign({\n",
+  "  answer: RunnableSequence.from([\n",
+  "    (input) => {\n",
+  "      return {\n",
+  "        // Now we format the documents as strings for the prompt\n",
+  "        context: formatDocumentsAsString(input.context),\n",
+  "        question: input.question\n",
+  "      };\n",
+  "    },\n",
+  "    prompt,\n",
+  "    llm,\n",
+  "    new StringOutputParser()\n",
+  "  ]),\n",
+  "})\n",
  "\n",
-  "await ragChainWithSource.invoke(\"What is Task Decomposition\")"
+  "await ragChainWithSources.invoke(\"What is Task Decomposition\")"
  ]
 },
 {
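The refactored cell above keeps the raw retrieved documents in the map output and only formats them inside the `answer` branch, so sources survive to the final result. Stripped of LangChain entirely, the shape of that "map, then assign" pattern can be sketched with plain functions (all names here are illustrative stand-ins, not part of the commit):

```typescript
// A LangChain-free stand-in for the "map, then assign" pattern.
type Doc = { pageContent: string };

// Toy retriever standing in for a real vector-store retriever.
const fakeRetriever = (question: string): Doc[] => [
  { pageContent: `Background for: ${question}` },
];

// Stand-in for formatDocumentsAsString: join page contents for a prompt.
const formatDocumentsAsString = (docs: Doc[]): string =>
  docs.map((d) => d.pageContent).join("\n\n");

function ragChainWithSources(question: string) {
  // Step 1 (the RunnableMap part): keep the *raw* documents so they can
  // be returned to the caller as sources.
  const mapped = { context: fakeRetriever(question), question };
  // Step 2 (the .assign part): only the answer branch formats the docs,
  // so the final output still carries the originals alongside the answer.
  return {
    ...mapped,
    answer: `Answered using: ${formatDocumentsAsString(mapped.context)}`,
  };
}

const result = ragChainWithSources("What is Task Decomposition");
console.log(result.context.length, result.answer);
```

The key design point mirrored from the diff: formatting happens late, inside the derived field, instead of early in the pipeline where it would destroy the source documents.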

docs/core_docs/docs/tutorials/agents.mdx (+10 −10)

@@ -4,23 +4,23 @@ sidebar_position: 4
 
 # Build an Agent
 
+:::info Prerequisites
+
+This guide assumes familiarity with the following concepts:
+
+- [Chat Models](/docs/concepts/#chat-models)
+- [Tools](/docs/concepts/#tools)
+- [Agents](/docs/concepts/#agents)
+
+:::
+
 By themselves, language models can't take actions - they just output text.
 A big use case for LangChain is creating **agents**.
 Agents are systems that use an LLM as a reasoning enginer to determine which actions to take and what the inputs to those actions should be.
 The results of those actions can then be fed back into the agent and it determine whether more actions are needed, or whether it is okay to finish.
 
 In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.
 
-## Concepts
-
-Concepts we will cover are:
-
-- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability
-- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent
-- Using a Search [Tool](/docs/concepts/#tools) to look up things online
-- Using [LangGraph Agents](/docs/concepts/#agents) which use an LLM to think about what to do and then execute upon that
-- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)
-
 ## Setup: LangSmith
 
 By definition, agents take a self-determined, input-dependent sequence of steps before returning a user-facing output. This makes debugging these systems particularly tricky, and observability particularly important.

docs/core_docs/docs/tutorials/chatbot.ipynb (+10 −13)

@@ -22,10 +22,19 @@
  "source": [
   "## Overview\n",
   "\n",
+  ":::info Prerequisites\n",
+  "\n",
+  "This guide assumes familiarity with the following concepts:\n",
+  "\n",
+  "- [Chat Models](/docs/concepts/#chat-models)\n",
+  "- [Prompt Templates](/docs/concepts/#prompt-templates)\n",
+  "- [Chat History](/docs/concepts/#chat-history)\n",
+  "\n",
+  ":::\n",
+  "\n",
   "We'll go over an example of how to design and implement an LLM-powered chatbot. \n",
   "This chatbot will be able to have a conversation and remember previous interactions.\n",
   "\n",
-  "\n",
   "Note that this chatbot that we build will only use the language model to have a conversation.\n",
   "There are several other related concepts that you may be looking for:\n",
   "\n",
@@ -34,18 +43,6 @@
   "\n",
   "This tutorial will cover the basics which will be helpful for those two more advanced topics, but feel free to skip directly to there should you choose.\n",
   "\n",
-  "\n",
-  "## Concepts\n",
-  "\n",
-  "Here are a few of the high-level components we'll be working with:\n",
-  "\n",
-  "- [`Chat Models`](/docs/concepts/#chat-models). The chatbot interface is based around messages rather than raw text, and therefore is best suited to Chat Models rather than text LLMs.\n",
-  "- [`Prompt Templates`](/docs/concepts/#prompt-templates), which simplify the process of assembling prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.\n",
-  "- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to followup questions. \n",
-  "- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
-  "\n",
-  "We'll cover how to fit the above components together to create a powerful conversational chatbot.\n",
-  "\n",
   "## Setup\n",
   "\n",
   "### Installation\n",

docs/core_docs/docs/tutorials/extraction.ipynb (+11 −8)

@@ -17,18 +17,21 @@
  "source": [
   "# Build an Extraction Chain\n",
   "\n",
-  "In this tutorial, we will build a chain to extract structured information from unstructured text. \n",
+  ":::info Prerequisites\n",
+  "\n",
+  "This guide assumes familiarity with the following concepts:\n",
+  "\n",
+  "- [Chat Models](/docs/concepts/#chat-models)\n",
+  "- [Tools](/docs/concepts/#tools)\n",
+  "- [Tool calling](/docs/concepts/#function-tool-calling)\n",
   "\n",
-  ":::{.callout-important}\n",
-  "This tutorial will only work with models that support **function/tool calling**\n",
   ":::\n",
   "\n",
-  "## Concepts\n",
+  "In this tutorial, we will build a chain to extract structured information from unstructured text. \n",
   "\n",
-  "Concepts we will cover are:\n",
-  "- Using [language models](/docs/concepts/#chat-models)\n",
-  "- Using [function/tool calling](/docs/concepts/#function-tool-calling)\n",
-  "- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n"
+  ":::{.callout-important}\n",
+  "This tutorial will only work with models that support **function/tool calling**\n",
+  ":::"
  ]
 },
 {

docs/core_docs/docs/tutorials/pdf_qa.ipynb (+12)

@@ -19,6 +19,18 @@
  "source": [
   "# Build a PDF ingestion and Question/Answering system\n",
   "\n",
+  ":::info Prerequisites\n",
+  "\n",
+  "This guide assumes familiarity with the following concepts:\n",
+  "\n",
+  "- [Document loaders](/docs/concepts/#document-loaders)\n",
+  "- [Chat models](/docs/concepts/#chat-models)\n",
+  "- [Embeddings](/docs/concepts/#embedding-models)\n",
+  "- [Vector stores](/docs/concepts/#vector-stores)\n",
+  "- [Retrieval-augmented generation](/docs/tutorials/rag/)\n",
+  "\n",
+  ":::\n",
+  "\n",
   "PDF files often hold crucial unstructured data unavailable from other sources. They can be quite lengthy, and unlike plain text files, cannot generally be fed directly into the prompt of a language model.\n",
   "\n",
   "In this tutorial, you'll create a system that can answer questions about PDF files. More specifically, you'll use a [Document Loader](/docs/concepts/#document-loaders) to load text in a format usable by an LLM, then build a retrieval-augmented generation (RAG) pipeline to answer questions, including citations from the source material.\n",

docs/core_docs/docs/tutorials/qa_chat_history.ipynb (+14)

@@ -6,6 +6,20 @@
  "source": [
   "# Conversational RAG\n",
   "\n",
+  ":::info Prerequisites\n",
+  "\n",
+  "This guide assumes familiarity with the following concepts:\n",
+  "\n",
+  "- [Chat history](/docs/concepts/#chat-history)\n",
+  "- [Chat models](/docs/concepts/#chat-models)\n",
+  "- [Embeddings](/docs/concepts/#embedding-models)\n",
+  "- [Vector stores](/docs/concepts/#vector-stores)\n",
+  "- [Retrieval-augmented generation](/docs/tutorials/rag/)\n",
+  "- [Tools](/docs/concepts/#tools)\n",
+  "- [Agents](/docs/concepts/#agents)\n",
+  "\n",
+  ":::\n",
+  "\n",
   "In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of \"memory\" of past questions and answers, and some logic for incorporating those into its current thinking.\n",
   "\n",
   "In this guide we focus on **adding logic for incorporating historical messages.** Further details on chat history management is [covered here](/docs/how_to/message_history).\n",

docs/core_docs/docs/tutorials/query_analysis.ipynb (+12 −3)

@@ -5,9 +5,6 @@
  "id": "df7d42b9-58a6-434c-a2d7-0b61142f6d3e",
  "metadata": {},
  "source": [
-  "---\n",
-  "sidebar_position: 0\n",
-  "---\n",
   "```{=mdx}\n",
   "import CodeBlock from \"@theme/CodeBlock\";\n",
   "```"
@@ -20,6 +17,18 @@
  "source": [
   "# Build a Query Analysis System\n",
   "\n",
+  ":::info Prerequisites\n",
+  "\n",
+  "This guide assumes familiarity with the following concepts:\n",
+  "\n",
+  "- [Document loaders](/docs/concepts/#document-loaders)\n",
+  "- [Chat models](/docs/concepts/#chat-models)\n",
+  "- [Embeddings](/docs/concepts/#embedding-models)\n",
+  "- [Vector stores](/docs/concepts/#vector-stores)\n",
+  "- [Retrieval](/docs/concepts/#retrieval)\n",
+  "\n",
+  ":::\n",
+  "\n",
   "This page will show how to use query analysis in a basic end-to-end example. This will cover creating a simple search engine, showing a failure mode that occurs when passing a raw user question to that search, and then an example of how query analysis can help address that issue. There are MANY different query analysis techniques and this end-to-end example will not show all of them.\n",
   "\n",
   "For the purpose of this example, we will do retrieval over the LangChain YouTube videos."

docs/core_docs/docs/tutorials/rag.ipynb (+6 −2)

@@ -15,6 +15,9 @@
   "LangSmith will become increasingly helpful as our application grows in\n",
   "complexity.\n",
   "\n",
+  "If you're already familiar with basic retrieval, you might also be interested in\n",
+  "this [high-level overview of different retrieval techinques](/docs/concepts/#retrieval).\n",
+  "\n",
   "## What is RAG?\n",
   "\n",
   "RAG is a technique for augmenting LLM knowledge with additional data.\n",
@@ -35,7 +38,7 @@
   "The most common full sequence from raw data to answer looks like:\n",
   "\n",
   "### Indexing\n",
-  "1. **Load**: First we need to load our data. This is done with [DocumentLoaders](/docs/concepts/#document-loaders).\n",
+  "1. **Load**: First we need to load our data. This is done with [Document Loaders](/docs/concepts/#document-loaders).\n",
   "2. **Split**: [Text splitters](/docs/concepts/#text-splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.\n",
   "3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/docs/concepts/#vectorstores) and [Embeddings](/docs/concepts/#embedding-models) model.\n",
   "\n",
@@ -842,7 +845,8 @@
   "\n",
   "- [Return sources](/docs/how_to/qa_sources/): Learn how to return source documents\n",
   "- [Streaming](/docs/how_to/qa_streaming/): Learn how to stream outputs and intermediate steps\n",
-  "- [Add chat history](/docs/how_to/qa_chat_history_how_to/): Learn how to add chat history to your app"
+  "- [Add chat history](/docs/how_to/qa_chat_history_how_to/): Learn how to add chat history to your app\n",
+  "- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques"
  ]
 }
 ],
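The Load/Split/Store indexing sequence described in the context lines of this diff can be sketched without any LangChain APIs. Everything below (the naive fixed-size splitter, the Map-backed "store") is an illustrative stand-in, not code from the commit:

```typescript
// Illustrative load -> split -> store pipeline (no LangChain APIs).

// Load: stand-in for a Document Loader producing raw text.
const load = (): string =>
  "RAG is a technique for augmenting LLM knowledge with additional data. ".repeat(10);

// Split: naive fixed-size chunking with overlap, standing in for a real
// text splitter. Overlap preserves context across chunk boundaries.
function split(text: string, chunkSize = 100, overlap = 20): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize - overlap) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

// Store: a toy index keyed by chunk number; a real vector store would
// embed each chunk and index the embeddings for similarity search.
function store(chunks: string[]): Map<number, string> {
  return new Map(chunks.map((c, i) => [i, c]));
}

const indexed = store(split(load()));
console.log(indexed.size);
```

The split step matters because, as the diff's context line notes, large chunks are harder to search over and may not fit in a model's context window.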

docs/core_docs/docs/tutorials/sql_qa.mdx (+11)

@@ -1,5 +1,16 @@
 # Build a Question/Answering system over SQL data
 
+:::info Prerequisites
+
+This guide assumes familiarity with the following concepts:
+
+- [Chaining runnables](/docs/how_to/sequence/)
+- [Chat models](/docs/concepts/#chat-models)
+- [Tools](/docs/concepts/#tools)
+- [Agents](/docs/concepts/#agents)
+
+:::
+
 In this guide we'll go over the basic ways to create a Q&A chain and agent over a SQL database.
 These systems will allow us to ask a question about the data in a SQL database and get back a natural language answer.
 The main difference between the two is that our agent can query the database in a loop as many time as it needs to answer the question.
