Commit 7c53ded

feat(api): reference claude-2 in examples (#50)
Co-authored-by: Stainless Bot <[email protected]>
1 parent 8dfff26 commit 7c53ded

File tree

2 files changed: 17 additions, 95 deletions

src/resources/completions.ts

Lines changed: 14 additions & 92 deletions
@@ -66,51 +66,11 @@ export namespace CompletionCreateParams {
      * The model that will complete your prompt.
      *
      * As we improve Claude, we develop new versions of it that you can query. This
-     * controls which version of Claude answers your request. Right now we are offering
-     * two model families: Claude and Claude Instant.
-     *
-     * Specifiying any of the following models will automatically switch to you the
-     * newest compatible models as they are released:
-     *
-     * - `"claude-1"`: Our largest model, ideal for a wide range of more complex tasks.
-     * - `"claude-1-100k"`: An enhanced version of `claude-1` with a 100,000 token
-     *   (roughly 75,000 word) context window. Ideal for summarizing, analyzing, and
-     *   querying long documents and conversations for nuanced understanding of complex
-     *   topics and relationships across very long spans of text.
-     * - `"claude-instant-1"`: A smaller model with far lower latency, sampling at
-     *   roughly 40 words/sec! Its output quality is somewhat lower than the latest
-     *   `claude-1` model, particularly for complex tasks. However, it is much less
-     *   expensive and blazing fast. We believe that this model provides more than
-     *   adequate performance on a range of tasks including text classification,
-     *   summarization, and lightweight chat applications, as well as search result
-     *   summarization.
-     * - `"claude-instant-1-100k"`: An enhanced version of `claude-instant-1` with a
-     *   100,000 token context window that retains its performance. Well-suited for
-     *   high throughput use cases needing both speed and additional context, allowing
-     *   deeper understanding from extended conversations and documents.
-     *
-     * You can also select specific sub-versions of the above models:
-     *
-     * - `"claude-1.3"`: Compared to `claude-1.2`, it's more robust against red-team
-     *   inputs, better at precise instruction-following, better at code, and better
-     *   and non-English dialogue and writing.
-     * - `"claude-1.3-100k"`: An enhanced version of `claude-1.3` with a 100,000 token
-     *   (roughly 75,000 word) context window.
-     * - `"claude-1.2"`: An improved version of `claude-1`. It is slightly improved at
-     *   general helpfulness, instruction following, coding, and other tasks. It is
-     *   also considerably better with non-English languages. This model also has the
-     *   ability to role play (in harmless ways) more consistently, and it defaults to
-     *   writing somewhat longer and more thorough responses.
-     * - `"claude-1.0"`: An earlier version of `claude-1`.
-     * - `"claude-instant-1.1"`: Our latest version of `claude-instant-1`. It is better
-     *   than `claude-instant-1.0` at a wide variety of tasks including writing,
-     *   coding, and instruction following. It performs better on academic benchmarks,
-     *   including math, reading comprehension, and coding tests. It is also more
-     *   robust against red-teaming inputs.
-     * - `"claude-instant-1.1-100k"`: An enhanced version of `claude-instant-1.1` with
-     *   a 100,000 token context window that retains its lightning fast 40 word/sec
-     *   performance.
-     * - `"claude-instant-1.0"`: An earlier version of `claude-instant-1`.
+     * parameter controls which version of Claude answers your request. Right now we
+     * are offering two model families: Claude, and Claude Instant. You can use them by
+     * setting `model` to `"claude-2"` or `"claude-instant-1"`, respectively. See
+     * [models](https://docs.anthropic.com/claude/reference/selecting-a-model) for
+     * additional details.
      */
     model: string;

@@ -124,7 +84,8 @@ export namespace CompletionCreateParams {
      * const prompt = `\n\nHuman: ${userQuestion}\n\nAssistant:`;
      * ```
      *
-     * See our [comments on prompts](https://console.anthropic.com/docs/prompt-design)
+     * See our
+     * [comments on prompts](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
      * for more context.
      */
     prompt: string;
@@ -208,51 +169,11 @@ export namespace CompletionCreateParams {
      * The model that will complete your prompt.
      *
      * As we improve Claude, we develop new versions of it that you can query. This
-     * controls which version of Claude answers your request. Right now we are offering
-     * two model families: Claude and Claude Instant.
-     *
-     * Specifiying any of the following models will automatically switch to you the
-     * newest compatible models as they are released:
-     *
-     * - `"claude-1"`: Our largest model, ideal for a wide range of more complex tasks.
-     * - `"claude-1-100k"`: An enhanced version of `claude-1` with a 100,000 token
-     *   (roughly 75,000 word) context window. Ideal for summarizing, analyzing, and
-     *   querying long documents and conversations for nuanced understanding of complex
-     *   topics and relationships across very long spans of text.
-     * - `"claude-instant-1"`: A smaller model with far lower latency, sampling at
-     *   roughly 40 words/sec! Its output quality is somewhat lower than the latest
-     *   `claude-1` model, particularly for complex tasks. However, it is much less
-     *   expensive and blazing fast. We believe that this model provides more than
-     *   adequate performance on a range of tasks including text classification,
-     *   summarization, and lightweight chat applications, as well as search result
-     *   summarization.
-     * - `"claude-instant-1-100k"`: An enhanced version of `claude-instant-1` with a
-     *   100,000 token context window that retains its performance. Well-suited for
-     *   high throughput use cases needing both speed and additional context, allowing
-     *   deeper understanding from extended conversations and documents.
-     *
-     * You can also select specific sub-versions of the above models:
-     *
-     * - `"claude-1.3"`: Compared to `claude-1.2`, it's more robust against red-team
-     *   inputs, better at precise instruction-following, better at code, and better
-     *   and non-English dialogue and writing.
-     * - `"claude-1.3-100k"`: An enhanced version of `claude-1.3` with a 100,000 token
-     *   (roughly 75,000 word) context window.
-     * - `"claude-1.2"`: An improved version of `claude-1`. It is slightly improved at
-     *   general helpfulness, instruction following, coding, and other tasks. It is
-     *   also considerably better with non-English languages. This model also has the
-     *   ability to role play (in harmless ways) more consistently, and it defaults to
-     *   writing somewhat longer and more thorough responses.
-     * - `"claude-1.0"`: An earlier version of `claude-1`.
-     * - `"claude-instant-1.1"`: Our latest version of `claude-instant-1`. It is better
-     *   than `claude-instant-1.0` at a wide variety of tasks including writing,
-     *   coding, and instruction following. It performs better on academic benchmarks,
-     *   including math, reading comprehension, and coding tests. It is also more
-     *   robust against red-teaming inputs.
-     * - `"claude-instant-1.1-100k"`: An enhanced version of `claude-instant-1.1` with
-     *   a 100,000 token context window that retains its lightning fast 40 word/sec
-     *   performance.
-     * - `"claude-instant-1.0"`: An earlier version of `claude-instant-1`.
+     * parameter controls which version of Claude answers your request. Right now we
+     * are offering two model families: Claude, and Claude Instant. You can use them by
+     * setting `model` to `"claude-2"` or `"claude-instant-1"`, respectively. See
+     * [models](https://docs.anthropic.com/claude/reference/selecting-a-model) for
+     * additional details.
      */
     model: string;

@@ -266,7 +187,8 @@ export namespace CompletionCreateParams {
      * const prompt = `\n\nHuman: ${userQuestion}\n\nAssistant:`;
      * ```
      *
-     * See our [comments on prompts](https://console.anthropic.com/docs/prompt-design)
+     * See our
+     * [comments on prompts](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
      * for more context.
      */
     prompt: string;
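
The updated doc comment above boils down to: build a `\n\nHuman: ...\n\nAssistant:` prompt and set `model` to `"claude-2"` or `"claude-instant-1"`. A minimal sketch of that shape, where `buildPrompt` is a hypothetical helper (not part of the SDK) and nothing calls the API:

```typescript
// Illustrative only: mirrors the params shape documented above.
// `buildPrompt` is a hypothetical helper, not an SDK export.
function buildPrompt(userQuestion: string): string {
  // The completions endpoint expects the "\n\nHuman: ...\n\nAssistant:" format.
  return `\n\nHuman: ${userQuestion}\n\nAssistant:`;
}

const params = {
  model: 'claude-2', // or 'claude-instant-1' for the lower-latency family
  max_tokens_to_sample: 256,
  prompt: buildPrompt('Hello, world!'),
};
```

An object like `params` is what would then be passed to `anthropic.completions.create(...)`, as the tests in this commit do.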

tests/api-resources/completions.test.ts

Lines changed: 3 additions & 3 deletions
@@ -12,20 +12,20 @@ describe('resource completions', () => {
   test('create: only required params', async () => {
     const response = await anthropic.completions.create({
       max_tokens_to_sample: 256,
-      model: 'claude-1',
+      model: 'claude-2',
       prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
     });
   });

   test('create: required and optional params', async () => {
     const response = await anthropic.completions.create({
       max_tokens_to_sample: 256,
-      model: 'claude-1',
+      model: 'claude-2',
       prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
       metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
       stop_sequences: ['string', 'string', 'string'],
       stream: false,
-      temperature: 0.7,
+      temperature: 1,
       top_k: 5,
       top_p: 0.7,
     });
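
For readers following along, the full params shape the updated test exercises can be sketched as a plain interface. This is an assumed, illustrative typing (the SDK defines its own `CompletionCreateParams`); the values simply mirror the test:

```typescript
// Assumed shape for illustration; the SDK ships its own param types.
interface CompletionParamsSketch {
  model: string;
  max_tokens_to_sample: number;
  prompt: string;
  metadata?: { user_id: string };
  stop_sequences?: string[];
  stream?: boolean;
  temperature?: number; // the updated test uses 1
  top_k?: number;
  top_p?: number;
}

const fullParams: CompletionParamsSketch = {
  model: 'claude-2',
  max_tokens_to_sample: 256,
  prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
  metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
  stop_sequences: ['string', 'string', 'string'],
  stream: false,
  temperature: 1,
  top_k: 5,
  top_p: 0.7,
};
```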
