feat(api)!: messages is generally available (#287)
This is a breaking change as we've removed the `beta` namespace from the messages API. To migrate, you'll just need to remove all `.beta` references; everything else is the same!
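In practice the migration is just dropping the namespace; a rough sketch (the prompt text here is only illustrative):

```ts
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

// Before: const stream = anthropic.beta.messages.stream({ ... });
// After: the same call without `.beta`; the arguments are unchanged.
const stream = anthropic.messages
  .stream({
    model: 'claude-2.1',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Hello, Claude' }],
  })
  .on('text', (text) => {
    console.log(text);
  });
```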
````diff
@@ -104,7 +106,7 @@ import Anthropic from '@anthropic-ai/sdk';
 const anthropic = new Anthropic();
 
 async function main() {
-  const stream = anthropic.beta.messages
+  const stream = anthropic.messages
     .stream({
       model: 'claude-2.1',
       max_tokens: 1024,
````
````diff
@@ -126,9 +128,9 @@ async function main() {
 main();
 ```
 
-Streaming with `client.beta.messages.stream(...)` exposes [various helpers for your convenience](helpers.md) including event handlers and accumulation.
+Streaming with `client.messages.stream(...)` exposes [various helpers for your convenience](helpers.md) including event handlers and accumulation.
 
-Alternatively, you can use `client.beta.messages.create({ ..., stream: true })` which only returns an async iterable of the events in the stream and thus uses less memory (it does not build up a final message object for you).
+Alternatively, you can use `client.messages.create({ ..., stream: true })` which only returns an async iterable of the events in the stream and thus uses less memory (it does not build up a final message object for you).
 
 ## Handling errors
````
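For reference, the `stream: true` form mentioned in that hunk looks roughly like this (a sketch, not part of the diff; it assumes the same `anthropic` client setup as elsewhere in the README):

```ts
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

// Returns an async iterable of raw stream events; no final message object is accumulated.
const stream = await anthropic.messages.create({
  model: 'claude-2.1',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello, Claude' }],
  stream: true,
});

for await (const event of stream) {
  // Events include 'message_start', 'content_block_delta', 'message_stop', etc.
  console.log(event.type);
}
```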
````diff
@@ -139,10 +141,10 @@ a subclass of `APIError` will be thrown:
 <!-- prettier-ignore -->
 ```ts
 async function main() {
-  const completion = await anthropic.completions
+  const message = await anthropic.messages
     .create({
-      prompt: `${Anthropic.HUMAN_PROMPT} Your prompt here${Anthropic.AI_PROMPT}`,
````
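The new error-handling flow is roughly the following (a sketch of the pattern, not the literal diff content):

```ts
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function main() {
  try {
    await anthropic.messages.create({
      model: 'claude-2.1',
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Your prompt here' }],
    });
  } catch (err) {
    if (err instanceof Anthropic.APIError) {
      // Non-2xx responses are surfaced as APIError subclasses.
      console.log(err.status);  // e.g. 400
      console.log(err.name);    // e.g. BadRequestError
      console.log(err.headers); // response headers
    } else {
      throw err;
    }
  }
}

main();
```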
````diff
@@ -188,16 +190,9 @@ const anthropic = new Anthropic({
 });
 
 // Or, configure per-request:
-await anthropic.completions.create(
-  {
-    prompt: `${Anthropic.HUMAN_PROMPT} Can you help me effectively ask for a raise at work?${Anthropic.AI_PROMPT}`,
-    max_tokens_to_sample: 300,
-    model: 'claude-2.1',
-  },
-  {
-    maxRetries: 5,
-  },
-);
+await anthropic.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Can you help me effectively ask for a raise at work?' }], model: 'claude-2.1' }, {
+  maxRetries: 5,
+});
 ```
 
 ### Timeouts
````
````diff
@@ -212,16 +207,9 @@ const anthropic = new Anthropic({
 });
 
 // Override per-request:
-await anthropic.completions.create(
-  {
-    prompt: `${Anthropic.HUMAN_PROMPT} Where can I get a good coffee in my neighbourhood?${Anthropic.AI_PROMPT}`,
-    max_tokens_to_sample: 300,
-    model: 'claude-2.1',
-  },
-  {
-    timeout: 5 * 1000,
-  },
-);
+await anthropic.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Where can I get a good coffee in my neighbourhood?' }], model: 'claude-2.1' }, {
+  timeout: 5 * 1000,
+});
 ```
 
 On timeout, an `APIConnectionTimeoutError` is thrown.
````
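Catching that per-request timeout might look like this (a sketch, not part of the diff; it assumes the error class is exposed on the `Anthropic` export like the other SDK error types):

```ts
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

try {
  await anthropic.messages.create(
    { max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-2.1' },
    { timeout: 5 * 1000 },
  );
} catch (err) {
  if (err instanceof Anthropic.APIConnectionTimeoutError) {
    // The request exceeded the 5 second per-request timeout.
    console.log('Request timed out; consider retrying.');
  } else {
    throw err;
  }
}
```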
````diff
@@ -241,11 +229,11 @@ import Anthropic from '@anthropic-ai/sdk';
+const { data: message, response: raw } = await anthropic.messages
+  .create({
+    max_tokens: 1024,
+    messages: [{ role: 'user', content: 'Where can I get a good coffee in my neighbourhood?' }],
+    model: 'claude-2.1',
+  })
+  .withResponse();
+console.log(raw.headers.get('X-My-Header'));
+console.log(message.content);
 ```
 
 ## Customizing the fetch client
````
````diff
@@ -325,7 +319,6 @@ If you would like to disable or customize this behavior, for example to use the
 <!-- prettier-ignore -->
 ```ts
 import http from 'http';
-import Anthropic from '@anthropic-ai/sdk';
 import HttpsProxyAgent from 'https-proxy-agent';
 
 // Configure the default for all requests:
````
````diff
@@ -334,17 +327,10 @@ const anthropic = new Anthropic({
 });
 
 // Override per-request:
-await anthropic.completions.create(
-  {
-    prompt: `${Anthropic.HUMAN_PROMPT} How does a court case get to the Supreme Court?${Anthropic.AI_PROMPT}`,
-    max_tokens_to_sample: 300,
-    model: 'claude-2.1',
-  },
-  {
-    baseURL: 'http://localhost:8080/test-api',
-    httpAgent: new http.Agent({ keepAlive: false }),
-  },
-);
+await anthropic.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Where can I get a good coffee in my neighbourhood?' }], model: 'claude-2.1' }, {
````