
Commit b7e426b

Update tiny-agents.md (#2830)
* Title → “an MCP-powered agent”
* Intro (Inference Client section) → “an MCP client”
* TIP block → “an MCP client”
* Quote following TIP → “an MCP client”
* Implementation section (“Now that we have…”) → “an MCP client”
* Agent component list → “an LLM inference client”
1 parent aa98104 commit b7e426b

File tree

1 file changed

+7 -7 lines changed


tiny-agents.md

+7 -7
```diff
@@ -1,11 +1,11 @@
 ---
-title: "Tiny Agents: a MCP-powered agent in 50 lines of code"
+title: "Tiny Agents: an MCP-powered agent in 50 lines of code"
 thumbnail: /blog/assets/tiny-agents/thumbnail.jpg
 authors:
 - user: julien-c
 ---
 
-# Tiny Agents: a MCP-powered agent in 50 lines of code
+# Tiny Agents: an MCP-powered agent in 50 lines of code
 
 Over the past few weeks, I've been diving into MCP ([Model Context Protocol](https://modelcontextprotocol.io/)) to understand what the hype around it was all about.
 
```
```diff
@@ -16,7 +16,7 @@ It is fairly simple to extend an Inference Client – at HF, we have two officia
 But while doing that, came my second realization:
 
 > [!TIP]
-> **Once you have a MCP Client, an Agent is literally just a while loop on top of it.**
+> **Once you have an MCP Client, an Agent is literally just a while loop on top of it.**
 
 In this short article, I will walk you through how I implemented it in Typescript (JS), how you can adopt MCP too and how it's going to make Agentic AI way simpler going forward.
 
```
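To see what the TIP in that hunk means in practice, here is a minimal sketch of the while loop in TypeScript. The `chatCompletion` and `callTool` helpers are hypothetical stand-ins (declared, not implemented) and not the post's actual code:

```ts
// Sketch of "an Agent is just a while loop on top of an MCP Client".
type ToolCall = { id: string; name: string; arguments: string };
type Message = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  tool_call_id?: string;
  tool_calls?: ToolCall[];
};

declare function chatCompletion(messages: Message[]): Promise<Message>; // one LLM turn, tools already injected
declare function callTool(call: ToolCall): Promise<string>; // forwards the call to the matching MCP server

async function runAgent(userPrompt: string): Promise<string> {
  const messages: Message[] = [
    { role: "system", content: "You are a helpful agent." },
    { role: "user", content: userPrompt },
  ];
  while (true) {
    const reply = await chatCompletion(messages);
    messages.push(reply);
    // No tool calls left means the model has produced its final answer.
    if (!reply.tool_calls?.length) return reply.content;
    for (const call of reply.tool_calls) {
      const result = await callTool(call);
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
  }
}
```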
```diff
@@ -127,7 +127,7 @@ As a developer, you run the tools and feed their result back into the LLM to con
 > [!NOTE]
 > Note that in the backend (at the inference engine level), the tools are simply passed to the model in a specially-formatted `chat_template`, like any other message, and then parsed out of the response (using model-specific special tokens) to expose them as tool calls.
 
-## Implementing a MCP client on top of InferenceClient
+## Implementing an MCP client on top of InferenceClient
 
 Now that we know what a tool is in recent LLMs, let us implement the actual MCP client.
 
```
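As context for that hunk, here is a rough sketch of what connecting to an MCP server and collecting its tools can look like with the official TypeScript SDK (`@modelcontextprotocol/sdk`). The filesystem-server command and the OpenAI-style tool mapping are illustrative assumptions, not the post's exact implementation:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn one MCP server over stdio (the filesystem server is just an example).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["@modelcontextprotocol/server-filesystem", "/tmp"],
});
const mcp = new Client({ name: "tiny-agent-sketch", version: "0.0.1" });
await mcp.connect(transport);

// List the server's tools and map them into the OpenAI-style `tools` array
// that chat-completion endpoints accept.
const { tools } = await mcp.listTools();
const chatTools = tools.map((tool) => ({
  type: "function" as const,
  function: {
    name: tool.name,
    description: tool.description,
    parameters: tool.inputSchema,
  },
}));
```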
```diff
@@ -244,14 +244,14 @@ Finally you will add the resulting tool message to your `messages` array and ba
 
 ## Our 50-lines-of-code Agent 🤯
 
-Now that we have a MCP client capable of connecting to arbitrary MCP servers to get lists of tools and capable of injecting them and parsing them from the LLM inference, well... what is an Agent?
+Now that we have an MCP client capable of connecting to arbitrary MCP servers to get lists of tools and capable of injecting them and parsing them from the LLM inference, well... what is an Agent?
 
 > Once you have an inference client with a set of tools, then an Agent is just a while loop on top of it.
 
 In more detail, an Agent is simply a combination of:
 - a system prompt
-- a LLM Inference client
-- a MCP client to hook a set of Tools into it from a bunch of MCP servers
+- an LLM Inference client
+- an MCP client to hook a set of Tools into it from a bunch of MCP servers
 - some basic control flow (see below for the while loop)
 
 > [!TIP]
```
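Read together, the four bullets in that hunk assemble into something like the skeleton below. This is a sketch reusing hypothetical MCP helpers from the earlier snippets, not the post's actual 50 lines, and the model id is only an example:

```ts
import { InferenceClient } from "@huggingface/inference";

// Hypothetical stand-ins for the MCP side (see the earlier sketch).
declare function getChatTools(): Promise<any[]>; // tools gathered from MCP servers
declare function runToolCall(call: any): Promise<string>; // dispatch to the right MCP server

const llm = new InferenceClient(process.env.HF_TOKEN); // an LLM Inference client
const tools = await getChatTools(); // Tools hooked in through an MCP client
const messages: any[] = [
  { role: "system", content: "You are a helpful agent." }, // a system prompt
  { role: "user", content: "Summarize the files in /tmp" },
];

while (true) { // some basic control flow
  const res = await llm.chatCompletion({ model: "Qwen/Qwen2.5-72B-Instruct", messages, tools });
  const msg = res.choices[0].message;
  messages.push(msg);
  if (!msg.tool_calls?.length) break; // final answer reached
  for (const call of msg.tool_calls) {
    messages.push({ role: "tool", tool_call_id: call.id, content: await runToolCall(call) });
  }
}
```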