---
title: "Tiny Agents: an MCP-powered agent in 50 lines of code"
thumbnail: /blog/assets/tiny-agents/thumbnail.jpg
authors:
- user: julien-c
---

# Tiny Agents: an MCP-powered agent in 50 lines of code

Over the past few weeks, I've been diving into MCP ([Model Context Protocol](https://modelcontextprotocol.io/)) to understand what the hype around it was all about.

But while doing that, I had my second realization:

> [!TIP]
> **Once you have an MCP Client, an Agent is literally just a while loop on top of it.**

In this short article, I will walk you through how I implemented it in TypeScript (JS), how you can adopt MCP too, and how it's going to make Agentic AI way simpler going forward.

> [!NOTE]
> Note that in the backend (at the inference engine level), the tools are simply passed to the model in a specially-formatted `chat_template`, like any other message, and then parsed out of the response (using model-specific special tokens) to expose them as tool calls.
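
To make this concrete, here is a hedged sketch of the OpenAI-compatible tool schema that chat-completion APIs accept; the `get_weather` tool and its parameters are invented for illustration, not taken from the article:

```typescript
// Hypothetical example tool (not from the article): a weather lookup.
// This JSON is what the client sends alongside the messages; the
// inference engine renders it into the model's chat template and later
// parses tool calls back out of the model's response.
const weatherTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get the current weather in a given city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. Paris" },
      },
      required: ["city"],
    },
  },
};

console.log(weatherTool.function.name); // prints "get_weather"
```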

## Implementing an MCP client on top of InferenceClient

Now that we know what a tool is in recent LLMs, let us implement the actual MCP client.
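
Before diving in, it helps to see the one essential translation the client performs. The sketch below is illustrative rather than the article's actual code: MCP servers describe each tool as `{ name, description, inputSchema }` (returned by a `tools/list` request), and the client maps these into the chat-completion `tools` format shown earlier. Here the list is hard-coded instead of fetched from a server.

```typescript
// Shape of a tool as an MCP server describes it (per the MCP spec).
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

// Shape of a tool as a chat-completion API expects it.
interface ChatCompletionTool {
  type: "function";
  function: {
    name: string;
    description?: string;
    parameters: Record<string, unknown>;
  };
}

// The core translation: an MCP tool's JSON Schema input becomes the
// function's `parameters` field, unchanged.
function toChatCompletionTools(mcpTools: McpTool[]): ChatCompletionTool[] {
  return mcpTools.map((t) => ({
    type: "function",
    function: {
      name: t.name,
      description: t.description,
      parameters: t.inputSchema,
    },
  }));
}

// Hard-coded stand-in for what a real `tools/list` response would contain.
const tools = toChatCompletionTools([
  { name: "read_file", description: "Read a file", inputSchema: { type: "object" } },
]);
console.log(tools[0].function.name); // prints "read_file"
```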
## Our 50-lines-of-code Agent 🤯

Now that we have an MCP client capable of connecting to arbitrary MCP servers to get lists of tools, and capable of injecting them into and parsing them out of the LLM inference, well... what is an Agent?

> Once you have an inference client with a set of tools, then an Agent is just a while loop on top of it.

In more detail, an Agent is simply a combination of:
- a system prompt
- an LLM Inference client
- an MCP client to hook a set of Tools into it from a bunch of MCP servers
- some basic control flow (see below for the while loop)
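
The combination above can be sketched as a single loop. This is a hedged, dependency-free sketch, not the article's actual 50 lines: `callLlm` and `callTool` are hypothetical stand-ins for the InferenceClient chat completion and the MCP client's tool execution.

```typescript
// Minimal message shape; real chat-completion types are richer.
type Message = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  toolCalls?: { name: string; args: string }[];
};

// The Agent: system prompt + LLM client + tools + a while loop.
// `callLlm` stands in for the inference call, `callTool` for MCP tool
// execution; both are injected so the sketch stays self-contained.
async function runAgent(
  userPrompt: string,
  callLlm: (messages: Message[]) => Promise<Message>,
  callTool: (name: string, args: string) => Promise<string>,
  maxTurns = 10,
): Promise<string> {
  const messages: Message[] = [
    { role: "system", content: "You are a helpful agent with tools." },
    { role: "user", content: userPrompt },
  ];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callLlm(messages);
    messages.push(reply);
    // No tool calls: the model produced its final answer, exit the loop.
    if (!reply.toolCalls || reply.toolCalls.length === 0) {
      return reply.content;
    }
    // Otherwise run each tool and feed its result back as a tool message.
    for (const call of reply.toolCalls) {
      const result = await callTool(call.name, call.args);
      messages.push({ role: "tool", content: result });
    }
  }
  return "stopped: max turns reached";
}
```

In the real thing, the tool call is routed through the MCP client to whichever server registered that tool, and the only subtle part is the stopping condition: no more tool calls (or a turn budget) ends the loop.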