# How to stream agent data to the client

This guide will walk you through how we stream agent data to the client using [React Server Components](https://react.dev/reference/rsc/server-components) inside this directory.
The code in this doc is taken from the `page.tsx` and `action.ts` files in this directory. To view the full, uninterrupted code, click [here for the actions file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/action.ts)
and [here for the client file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/page.tsx).

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [LangChain Expression Language](/docs/concepts#langchain-expression-language)
- [Chat models](/docs/concepts#chat-models)
- [Tool calling](/docs/concepts#functiontool-calling)
- [Agents](/docs/concepts#agents)

:::

## Setup

First, install the necessary LangChain & AI SDK packages:

```bash npm2yarn
npm install langchain @langchain/core @langchain/community ai
```

In this demo we'll be using the `TavilySearchResults` tool, which requires an API key. You can get one [here](https://app.tavily.com/), or you can swap it out for another tool of your choice, like
[`WikipediaQueryRun`](/docs/integrations/tools/wikipedia) which doesn't require an API key.

If you choose to use `TavilySearchResults`, set your API key like so:

```bash
export TAVILY_API_KEY=your_api_key
```

## Get started

The first step is to create a new RSC file and add the imports we'll use to run our agent. In this demo, we'll name it `action.ts`:

```typescript action.ts
"use server";

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { createStreamableValue } from "ai/rsc";
```

Next, we'll define a `runAgent` function. This function takes a single `string` input and contains all the logic for our agent, including streaming data back to the client:

```typescript action.ts
export async function runAgent(input: string) {
  "use server";
}
```

Next, inside our function we'll define our chat model of choice:

```typescript action.ts
const llm = new ChatOpenAI({
  model: "gpt-4o-2024-05-13",
  temperature: 0,
});
```

Next, we'll use the `createStreamableValue` helper function provided by the `ai` package to create a streamable value:

```typescript action.ts
const stream = createStreamableValue();
```

This will be very important later on when we start streaming data back to the client.
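To build intuition for what a streamable value gives us, here is a tiny stand-in — purely illustrative, not the actual `ai/rsc` implementation: the server action pushes values in with `update()` and closes the channel with `done()`, and the client reads them out on the other side.

```typescript
// Illustrative stand-in for a streamable value -- NOT the real ai/rsc
// implementation. The server side calls update() and done(); the
// client side reads the collected values.
function createChannel<T>() {
  const values: T[] = [];
  let closed = false;
  return {
    update: (value: T) => {
      values.push(value);
    },
    done: () => {
      closed = true;
    },
    // The real readStreamableValue is an async iterator; for this
    // sketch we simply expose everything once the channel is closed.
    drain: () => (closed ? [...values] : []),
  };
}

const channel = createChannel<string>();
channel.update("first event");
channel.update("second event");
channel.done();
// channel.drain() now yields both events in order.
```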

Next, let's define the async self-invoking function that contains our agent logic:

```typescript action.ts
  (async () => {
    const tools = [new TavilySearchResults({ maxResults: 1 })];

    const prompt = await pull<ChatPromptTemplate>(
      "hwchase17/openai-tools-agent",
    );

    const agent = createToolCallingAgent({
      llm,
      tools,
      prompt,
    });

    const agentExecutor = new AgentExecutor({
      agent,
      tools,
    });
```

Here you can see we're doing a few things:

First, we define our list of tools (in this case, just a single tool) and pull our prompt from the LangChain prompt hub.

After that, we pass our LLM, tools, and prompt to the `createToolCallingAgent` function, which constructs and returns a runnable agent.
This is then passed into the `AgentExecutor` class, which handles the execution and streaming of our agent.

Finally, we call `.streamEvents` and pass the streamed data back to the `stream` variable we defined above:

```typescript action.ts
  const streamingEvents = agentExecutor.streamEvents(
    { input },
    { version: "v1" },
  );

  for await (const item of streamingEvents) {
    stream.update(JSON.parse(JSON.stringify(item, null, 2)));
  }

  stream.done();
  })();
```

As you can see above, we're doing something a little unusual by stringifying and then re-parsing our data. This works around a bug in the RSC streaming code: if you round-trip the data through `JSON.stringify` and `JSON.parse` as shown, you shouldn't run into it.
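A quick way to see what the round-trip does: `JSON.stringify` drops values that can't be serialized across the server/client boundary, such as functions and `undefined` fields, leaving a plain serializable object. The event shape below is mock data for illustration, not a real LangChain event:

```typescript
// Mock event object -- illustrative sample data, not a real StreamEvent.
const event = {
  event: "on_llm_stream",
  data: { chunk: "Hello" },
  callback: () => {}, // functions are dropped by JSON.stringify
  missing: undefined, // undefined fields are dropped too
};

// Round-trip through JSON, as the action does before stream.update():
const plain = JSON.parse(JSON.stringify(event));
// plain is { event: "on_llm_stream", data: { chunk: "Hello" } }
```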

Finally, at the bottom of the function, return the stream value:

```typescript action.ts
return { streamData: stream.value };
```

Once we've implemented our server action, we can add a couple of lines of code in our client file to request and stream this data.

First, add the necessary imports:

```typescript page.tsx
"use client";

import { useState } from "react";
import type { StreamEvent } from "@langchain/core/tracers/log_stream";
import { readStreamableValue } from "ai/rsc";
import { runAgent } from "./action";
```

Then inside our `Page` function, calling the `runAgent` function is straightforward:

```typescript page.tsx
export default function Page() {
  const [input, setInput] = useState("");
  const [data, setData] = useState<StreamEvent[]>([]);

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();

    const { streamData } = await runAgent(input);
    for await (const item of readStreamableValue(streamData)) {
      setData((prev) => [...prev, item]);
    }
  }
}
```
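The raw stream interleaves many event types (chain start/end, tool calls, token chunks), so in practice you'll often want to filter before rendering. Here's a sketch over mock event objects — the shapes mimic LangChain's stream events, but the values are hypothetical sample data:

```typescript
// Mock events shaped like LangChain stream events -- sample data only.
type MockEvent = { event: string; data: Record<string, unknown> };

const events: MockEvent[] = [
  { event: "on_chain_start", data: {} },
  { event: "on_llm_stream", data: { chunk: "Hello" } },
  { event: "on_llm_stream", data: { chunk: " world" } },
  { event: "on_chain_end", data: {} },
];

// Keep only token chunks, e.g. to render partial model output:
const tokens = events
  .filter((e) => e.event === "on_llm_stream")
  .map((e) => e.data.chunk as string);
// tokens is ["Hello", " world"]
```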

That's it! You've successfully built an agent that streams data back to the client. You can now run your application and see the data streaming in real-time.