Commit e7c87e7 (parent 545152c)

docs[minor]: Add generative UI docs (#5528)

* docs[minor]: Add generative UI docs
* chore: lint files
* broken link

5 files changed: +391 −0 lines changed

docs/core_docs/docs/concepts.mdx (+11)

@@ -689,3 +689,14 @@ Table columns:

| Code | [many languages](/docs/how_to/code_splitter/) | Code (Python, JS) specific characters | | Splits text based on characters specific to coding languages. 15 different languages are available to choose from. |
| Token | [many classes](/docs/how_to/split_by_token/) | Tokens | | Splits text on tokens. There exist a few different ways to measure tokens. |
| Character | [CharacterTextSplitter](/docs/how_to/character_text_splitter/) | A user-defined character | | Splits text based on a user-defined character. One of the simpler methods. |

### Generative UI

LangChain.js provides a few templates and examples showing off generative UI,
and other ways of streaming data from the server to the client, specifically in React/Next.js.

You can find the template for generative UI in the official [LangChain.js Next.js template](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/generative_ui/README.md).

For streaming agentic responses and intermediate steps, you can find the [template and documentation here](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/README.md).

Finally, streaming tool calls and structured output can be found [here](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/tools/README.md).
@@ -0,0 +1,83 @@

# How to build an LLM generated UI

This guide walks through some high-level concepts and code snippets for building generative UIs using LangChain.js. To see the full code for generative UI, [click here to visit our official LangChain Next.js template](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/generative_ui/README.md).

The sample implements a tool calling agent, which outputs an interactive UI element while streaming intermediate outputs of tool calls to the client.

We introduce two utilities that wrap the AI SDK to make it easier to yield React elements inside runnables and tool calls: [`createRunnableUI`](https://github.com/langchain-ai/langchain-nextjs-template/blob/7f764d558682214d50b064f4293667123a31e6fe/app/generative_ui/utils/server.tsx#L89)
and [`streamRunnableUI`](https://github.com/langchain-ai/langchain-nextjs-template/blob/7f764d558682214d50b064f4293667123a31e6fe/app/generative_ui/utils/server.tsx#L126).

- `streamRunnableUI` executes the provided Runnable via the `streamEvents` method and sends every `stream` event to the client via the React Server Components stream.
- `createRunnableUI` wraps the `createStreamableUI` function from the AI SDK to properly hook into the Runnable event stream.

The usage is then as follows:

```tsx ai/chain.tsx
"use server";

// (imports omitted -- see the full template linked above)

const tool = new DynamicStructuredTool({
  // ...
  func: async (input, config) => {
    // create a new streamable UI and wire it up to the streamEvents
    const stream = createRunnableUI(config);
    stream.update(<div>Searching...</div>);

    // `images` is a search helper defined elsewhere in the template
    const result = await images(input);

    // update the UI element with the rendered results
    stream.done(
      <Images
        images={result.images_results
          .map((image) => image.thumbnail)
          .slice(0, input.limit)}
      />
    );

    return `[Returned ${result.images_results.length} images]`;
  },
});

// add LLM, prompt, etc...

const tools = [tool];

export const agentExecutor = new AgentExecutor({
  agent: createToolCallingAgent({ llm, tools, prompt }),
  tools,
});
```

```tsx agent.tsx
async function agent(inputs: { input: string }) {
  "use server";
  return streamRunnableUI(agentExecutor, inputs);
}

export const EndpointsContext = exposeEndpoints({ agent });
```

To ensure all of the client components are included in the bundle, we need to wrap all of the Server Actions in the `exposeEndpoints` method. These endpoints will be accessible from the client via the Context API, as seen in the `useActions` hook.

```tsx
"use client";

import { useState } from "react";
// `useActions` comes from the template's client-side utilities
import type { EndpointsContext } from "./agent";

export default function Page() {
  const actions = useActions<typeof EndpointsContext>();
  const [node, setNode] = useState();

  return (
    <div>
      {node}

      <button
        onClick={async () => {
          setNode(await actions.agent({ input: "cats" }));
        }}
      >
        Get images of cats
      </button>
    </div>
  );
}
```
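If you're curious what `exposeEndpoints` accomplishes conceptually: it bundles a record of server actions into one value, so the client can retrieve any action with its exact type preserved. The following is a hypothetical, simplified sketch of that idea (not the template's actual implementation, which additionally wires the actions into a React context consumed by `useActions`):

```typescript
// Hypothetical simplified sketch of the idea behind exposeEndpoints:
// bundle a typed record of server actions so the client side can look
// them up with full type information intact.
type ServerAction = (...args: any[]) => Promise<unknown>;

function exposeEndpointsSketch<T extends Record<string, ServerAction>>(
  actions: T
): { actions: T } {
  return { actions };
}

// Usage: TypeScript now knows the exact signature of each action,
// e.g. that `agent` takes { input: string }.
const endpoints = exposeEndpointsSketch({
  agent: async (inputs: { input: string }) => `ran agent on ${inputs.input}`,
});
```

This is why the client component above can write `useActions<typeof EndpointsContext>()` and get a fully typed `actions.agent` back.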

docs/core_docs/docs/how_to/index.mdx (+6)

@@ -184,6 +184,12 @@ All of LangChain components can easily be extended to support your own versions.

- [How to: define a custom tool](/docs/how_to/custom_tools)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)

### Generative UI

- [How to: build an LLM generated UI](/docs/how_to/generative_ui)
- [How to: stream agentic data to the client](/docs/how_to/stream_agent_client)
- [How to: stream structured output to the client](/docs/how_to/stream_tool_client)

## Use cases

These guides cover use-case specific details.
@@ -0,0 +1,159 @@

# How to stream agent data to the client

This guide will walk you through how we stream agent data to the client using [React Server Components](https://react.dev/reference/rsc/server-components).
The code in this doc is taken from the template's `page.tsx` and `action.ts` files. To view the full, uninterrupted code, click [here for the actions file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/action.ts)
and [here for the client file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/page.tsx).

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [LangChain Expression Language](/docs/concepts#langchain-expression-language)
- [Chat models](/docs/concepts#chat-models)
- [Tool calling](/docs/concepts#functiontool-calling)
- [Agents](/docs/concepts#agents)

:::

## Setup

First, install the necessary LangChain & AI SDK packages:

```bash npm2yarn
npm install langchain @langchain/core @langchain/community ai
```

In this demo we'll be using the `TavilySearchResults` tool, which requires an API key. You can get one [here](https://app.tavily.com/), or you can swap it out for another tool of your choice, like
[`WikipediaQueryRun`](/docs/integrations/tools/wikipedia), which doesn't require an API key.

If you choose to use `TavilySearchResults`, set your API key like so:

```bash
export TAVILY_API_KEY=your_api_key
```

## Get started

The first step is to create a new RSC file and add the imports we'll use for running our agent. In this demo, we'll name it `action.ts`:

```typescript action.ts
"use server";

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { createStreamableValue } from "ai/rsc";
```

Next, we'll define a `runAgent` function. This function takes a single `string` input and contains all the logic for our agent and for streaming data back to the client:

```typescript action.ts
export async function runAgent(input: string) {
  "use server";
}
```

Next, inside our function we'll define our chat model of choice:

```typescript action.ts
const llm = new ChatOpenAI({
  model: "gpt-4o-2024-05-13",
  temperature: 0,
});
```

Next, we'll use the `createStreamableValue` helper function provided by the `ai` package to create a streamable value:

```typescript action.ts
const stream = createStreamableValue();
```

This will be very important later on when we start streaming data back to the client.
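Conceptually, a streamable value is a handle with `update`/`done` on the producer side and an iterable view on the consumer side. Here is a rough, self-contained stand-in for that idea (this is NOT the `ai` package's real implementation, just a sketch of the mechanism):

```typescript
// Rough conceptual stand-in for a streamable value: the producer
// pushes values with update() and closes the stream with done();
// the consumer drains the values in order via an async iterator.
function createStreamableValueSketch<T>() {
  const queue: T[] = [];
  let closed = false;
  let notify: (() => void) | null = null;

  const wake = () => {
    if (notify) {
      notify();
      notify = null;
    }
  };

  return {
    update(value: T) {
      queue.push(value);
      wake();
    },
    done() {
      closed = true;
      wake();
    },
    // consumer side: yields every pushed value, finishing after done()
    async *[Symbol.asyncIterator]() {
      while (true) {
        while (queue.length > 0) {
          yield queue.shift() as T;
        }
        if (closed) return;
        // wait until the producer pushes more data or closes
        await new Promise<void>((resolve) => {
          notify = resolve;
        });
      }
    },
  };
}

// usage: push values, close, then drain on the consumer side
const sketch = createStreamableValueSketch<string>();
sketch.update("step 1");
sketch.update("step 2");
sketch.done();

(async () => {
  for await (const value of sketch) {
    console.log(value); // "step 1", then "step 2"
  }
})();
```

The real `createStreamableValue` does the same kind of hand-off, except the consumer lives in the browser and reads the values through `readStreamableValue`, as we'll see below.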
Next, let's define an async IIFE that contains the agent logic:

```typescript action.ts
(async () => {
  const tools = [new TavilySearchResults({ maxResults: 1 })];

  const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-tools-agent");

  const agent = createToolCallingAgent({
    llm,
    tools,
    prompt,
  });

  const agentExecutor = new AgentExecutor({
    agent,
    tools,
  });
```

Here we're doing a few things:

First, we define our list of tools (in this case, a single tool) and pull our prompt from the LangChain prompt hub.

After that, we pass our LLM, tools, and prompt to the `createToolCallingAgent` function, which constructs and returns a runnable agent.
This is then passed into the `AgentExecutor` class, which handles the execution and streaming of our agent.

Finally, we call `.streamEvents` and pass each streamed event into the `stream` variable we defined above:

```typescript action.ts
  const streamingEvents = agentExecutor.streamEvents(
    { input },
    { version: "v1" },
  );

  for await (const item of streamingEvents) {
    stream.update(JSON.parse(JSON.stringify(item, null, 2)));
  }

  stream.done();
})();
```

As you can see above, we do something a little unusual by stringifying and then re-parsing each event. This works around a bug in the RSC streaming code; as long as you round-trip the data through `JSON.stringify`/`JSON.parse` as shown, you shouldn't run into it.
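A side effect of this round trip is that only plain, serializable data survives it: `JSON.stringify` drops functions and other non-serializable fields, which is exactly what you want for values crossing the server/client boundary. A small illustration (the event shape here is simplified, not a real `StreamEvent`):

```typescript
// JSON.stringify drops function-valued properties, so the round trip
// leaves only plain, serializable data -- the kind of value that can
// safely be streamed from the server to the client.
const fakeStreamEvent = {
  event: "on_tool_end",
  name: "tavily_search_results_json",
  data: { output: "search results here" },
  // stand-in for a non-serializable internal field:
  callback: () => "not serializable",
};

const plain = JSON.parse(JSON.stringify(fakeStreamEvent));

console.log(plain.event); // "on_tool_end"
console.log("callback" in plain); // false -- the function was dropped
```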
Finally, at the bottom of the function, return the stream's value:

```typescript action.ts
  return { streamData: stream.value };
```

Once we've implemented our server action, we can add a couple of lines of code in our client file to request and stream this data.

First, add the necessary imports:

```typescript page.tsx
"use client";

import { useState } from "react";
import { readStreamableValue } from "ai/rsc";
import type { StreamEvent } from "@langchain/core/tracers/log_stream";
import { runAgent } from "./action";
```

Then, inside our `Page` function, calling the `runAgent` function is straightforward:

```typescript page.tsx
export default function Page() {
  const [input, setInput] = useState("");
  const [data, setData] = useState<StreamEvent[]>([]);

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();

    const { streamData } = await runAgent(input);
    for await (const item of readStreamableValue(streamData)) {
      setData((prev) => [...prev, item]);
    }
  }
}
```

That's it! You've successfully built an agent that streams data back to the client. You can now run your application and see the data streaming in real-time.
