
Commit f2b4f66

feat(api): webhook and deep research support
1 parent aed2587 commit f2b4f66

File tree

15 files changed: +1188 −52 lines


.stats.yml

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ef4ecb19eb61e24c49d77fef769ee243e5279bc0bdbaee8d0f8dba4da8722559.yml
-openapi_spec_hash: 1b8a9767c9f04e6865b06c41948cdc24
-config_hash: cae2d1f187b5b9f8dfa00daa807da42a
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-cca460eaf5cc13e9d6e5293eb97aac53d66dc1385c691f74b768c97d165b6e8b.yml
+openapi_spec_hash: 9ec43d443b3dd58ca5aa87eb0a7eb49f
+config_hash: e74d6791681e3af1b548748ff47a22c2

README.md

Lines changed: 77 additions & 0 deletions
@@ -124,6 +124,83 @@ await client.files.create({
 });
 ```
 
+## Webhook Verification
+
+Verifying webhook signatures is _optional but encouraged_.
+
+### Parsing webhook payloads
+
+For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will throw an error if the signature is invalid.
+
+Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.
+
+```ts
+import { headers } from 'next/headers';
+import OpenAI from 'openai';
+
+const client = new OpenAI({
+  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, // env var used by default; explicit here.
+});
+
+export async function webhook(request: Request) {
+  const headersList = headers();
+  const body = await request.text();
+
+  try {
+    const event = client.webhooks.unwrap(body, headersList);
+
+    switch (event.type) {
+      case 'response.completed':
+        console.log('Response completed:', event.data);
+        break;
+      case 'response.failed':
+        console.log('Response failed:', event.data);
+        break;
+      default:
+        console.log('Unhandled event type:', event.type);
+    }
+
+    return Response.json({ message: 'ok' });
+  } catch (error) {
+    console.error('Invalid webhook signature:', error);
+    return new Response('Invalid signature', { status: 400 });
+  }
+}
+```
+
+### Verifying webhook payloads directly
+
+In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verifySignature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will throw an error if the signature is invalid.
+
+Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.
+
+```ts
+import { headers } from 'next/headers';
+import OpenAI from 'openai';
+
+const client = new OpenAI({
+  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, // env var used by default; explicit here.
+});
+
+export async function webhook(request: Request) {
+  const headersList = headers();
+  const body = await request.text();
+
+  try {
+    client.webhooks.verifySignature(body, headersList);
+
+    // Parse the body after verification
+    const event = JSON.parse(body);
+    console.log('Verified event:', event);
+
+    return Response.json({ message: 'ok' });
+  } catch (error) {
+    console.error('Invalid webhook signature:', error);
+    return new Response('Invalid signature', { status: 400 });
+  }
+}
+```
+
 ## Handling errors
 
 When the library is unable to connect to the API,

api.md

Lines changed: 26 additions & 0 deletions
@@ -351,6 +351,31 @@ Methods:
 - <code>client.vectorStores.files.<a href="./src/resources/vector-stores/files.ts">upload</a>(vectorStoreId, file, options?) -> Promise&lt;VectorStoreFile&gt;</code>
 - <code>client.vectorStores.files.<a href="./src/resources/vector-stores/files.ts">uploadAndPoll</a>(vectorStoreId, file, options?) -> Promise&lt;VectorStoreFile&gt;</code>
 
+# Webhooks
+
+Types:
+
+- <code><a href="./src/resources/webhooks.ts">BatchCancelledWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">BatchCompletedWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">BatchExpiredWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">BatchFailedWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">EvalRunCanceledWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">EvalRunFailedWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">EvalRunSucceededWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">FineTuningJobCancelledWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">FineTuningJobFailedWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">FineTuningJobSucceededWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">ResponseCancelledWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">ResponseCompletedWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">ResponseFailedWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">ResponseIncompleteWebhookEvent</a></code>
+- <code><a href="./src/resources/webhooks.ts">UnwrapWebhookEvent</a></code>
+
+Methods:
+
+- <code>client.webhooks.<a href="./src/resources/webhooks.ts">unwrap</a>(payload, headers, secret?, tolerance?) -> UnwrapWebhookEvent</code>
+- <code>client.webhooks.<a href="./src/resources/webhooks.ts">verifySignature</a>(payload, headers, secret?, tolerance?) -> void</code>
+
 # Beta
 
 ## Realtime
@@ -701,6 +726,7 @@ Types:
 - <code><a href="./src/resources/responses/responses.ts">ResponseWebSearchCallSearchingEvent</a></code>
 - <code><a href="./src/resources/responses/responses.ts">Tool</a></code>
 - <code><a href="./src/resources/responses/responses.ts">ToolChoiceFunction</a></code>
+- <code><a href="./src/resources/responses/responses.ts">ToolChoiceMcp</a></code>
 - <code><a href="./src/resources/responses/responses.ts">ToolChoiceOptions</a></code>
 - <code><a href="./src/resources/responses/responses.ts">ToolChoiceTypes</a></code>
 - <code><a href="./src/resources/responses/responses.ts">WebSearchTool</a></code>
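
A minimal sketch of how this new surface might be exercised. The explicit secret argument and the `tolerance` value are illustrative placeholders only; both parameters are optional per the signatures above, and this assumes a Fetch-style `Headers` object as in the README examples:

```ts
import OpenAI from 'openai';

const client = new OpenAI();

// Sketch only: `secret` and `tolerance` are optional overrides per the
// signatures above; the values passed here are placeholders.
export function handleWebhook(rawBody: string, headers: Headers) {
  const event = client.webhooks.unwrap(rawBody, headers, process.env.OPENAI_WEBHOOK_SECRET, 300);

  // `event` is a typed UnwrapWebhookEvent; narrow on `event.type` as needed.
  console.log('Received event:', event.type);
  return event;
}
```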

src/client.ts

Lines changed: 17 additions & 0 deletions
@@ -74,6 +74,7 @@ import {
   ModerationTextInput,
   Moderations,
 } from './resources/moderations';
+import { Webhooks } from './resources/webhooks';
 import { Audio, AudioModel, AudioResponseFormat } from './resources/audio/audio';
 import { Beta } from './resources/beta/beta';
 import { Chat } from './resources/chat/chat';
@@ -196,6 +197,11 @@ export interface ClientOptions {
    */
   project?: string | null | undefined;
 
+  /**
+   * Defaults to process.env['OPENAI_WEBHOOK_SECRET'].
+   */
+  webhookSecret?: string | null | undefined;
+
   /**
    * Override the default base URL for the API, e.g., "https://api.example.com/v2/"
    *
@@ -276,6 +282,7 @@ export class OpenAI {
   apiKey: string;
   organization: string | null;
   project: string | null;
+  webhookSecret: string | null;
 
   baseURL: string;
   maxRetries: number;
@@ -295,6 +302,7 @@ export class OpenAI {
    * @param {string | undefined} [opts.apiKey=process.env['OPENAI_API_KEY'] ?? undefined]
    * @param {string | null | undefined} [opts.organization=process.env['OPENAI_ORG_ID'] ?? null]
    * @param {string | null | undefined} [opts.project=process.env['OPENAI_PROJECT_ID'] ?? null]
+   * @param {string | null | undefined} [opts.webhookSecret=process.env['OPENAI_WEBHOOK_SECRET'] ?? null]
    * @param {string} [opts.baseURL=process.env['OPENAI_BASE_URL'] ?? https://api.openai.com/v1] - Override the default base URL for the API.
    * @param {number} [opts.timeout=10 minutes] - The maximum amount of time (in milliseconds) the client will wait for a response before timing out.
    * @param {MergedRequestInit} [opts.fetchOptions] - Additional `RequestInit` options to be passed to `fetch` calls.
@@ -309,6 +317,7 @@
     apiKey = readEnv('OPENAI_API_KEY'),
     organization = readEnv('OPENAI_ORG_ID') ?? null,
     project = readEnv('OPENAI_PROJECT_ID') ?? null,
+    webhookSecret = readEnv('OPENAI_WEBHOOK_SECRET') ?? null,
     ...opts
   }: ClientOptions = {}) {
     if (apiKey === undefined) {
@@ -321,6 +330,7 @@
       apiKey,
       organization,
       project,
+      webhookSecret,
       ...opts,
       baseURL: baseURL || `https://api.openai.com/v1`,
     };
@@ -351,6 +361,7 @@
     this.apiKey = apiKey;
     this.organization = organization;
     this.project = project;
+    this.webhookSecret = webhookSecret;
   }
 
   /**
@@ -369,6 +380,7 @@
       apiKey: this.apiKey,
       organization: this.organization,
       project: this.project,
+      webhookSecret: this.webhookSecret,
       ...options,
     });
   }
@@ -900,6 +912,7 @@
   static InternalServerError = Errors.InternalServerError;
   static PermissionDeniedError = Errors.PermissionDeniedError;
   static UnprocessableEntityError = Errors.UnprocessableEntityError;
+  static InvalidWebhookSignatureError = Errors.InvalidWebhookSignatureError;
 
   static toFile = Uploads.toFile;
 
@@ -914,6 +927,7 @@
   fineTuning: API.FineTuning = new API.FineTuning(this);
   graders: API.Graders = new API.Graders(this);
   vectorStores: API.VectorStores = new API.VectorStores(this);
+  webhooks: API.Webhooks = new API.Webhooks(this);
   beta: API.Beta = new API.Beta(this);
   batches: API.Batches = new API.Batches(this);
   uploads: API.Uploads = new API.Uploads(this);
@@ -932,6 +946,7 @@ OpenAI.Models = Models;
 OpenAI.FineTuning = FineTuning;
 OpenAI.Graders = Graders;
 OpenAI.VectorStores = VectorStores;
+OpenAI.Webhooks = Webhooks;
 OpenAI.Beta = Beta;
 OpenAI.Batches = Batches;
 OpenAI.Uploads = UploadsAPIUploads;
@@ -1070,6 +1085,8 @@ export declare namespace OpenAI {
     type VectorStoreSearchParams as VectorStoreSearchParams,
   };
 
+  export { Webhooks as Webhooks };
+
   export { Beta as Beta };
 
   export {
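
As a quick illustration of the new client option, a sketch that simply makes the documented environment-variable defaults explicit:

```ts
import OpenAI from 'openai';

// Both fields fall back to their environment variables when omitted
// (OPENAI_API_KEY and OPENAI_WEBHOOK_SECRET respectively).
const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'],
  webhookSecret: process.env['OPENAI_WEBHOOK_SECRET'],
});

// The secret is exposed on the instance, like `organization` and `project`.
console.log(client.webhookSecret);
```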

src/core/error.ts

Lines changed: 6 additions & 0 deletions
@@ -152,3 +152,9 @@ export class ContentFilterFinishReasonError extends OpenAIError {
     super(`Could not parse response content as the request was rejected by the content filter`);
   }
 }
+
+export class InvalidWebhookSignatureError extends Error {
+  constructor(message: string) {
+    super(message);
+  }
+}

src/index.ts

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ export {
   InternalServerError,
   PermissionDeniedError,
   UnprocessableEntityError,
+  InvalidWebhookSignatureError,
 } from './core/error';
 
 export { AzureOpenAI } from './azure';
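
Because the new error class is re-exported from the package root, webhook handlers can discriminate it from other failures. A minimal sketch, assuming a Fetch-style `Headers` object as in the README examples:

```ts
import OpenAI, { InvalidWebhookSignatureError } from 'openai';

const client = new OpenAI();

function isAuthentic(body: string, headers: Headers): boolean {
  try {
    client.webhooks.verifySignature(body, headers);
    return true;
  } catch (err) {
    if (err instanceof InvalidWebhookSignatureError) {
      return false; // signature mismatch: reject the payload
    }
    throw err; // anything else is unexpected; rethrow
  }
}
```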

src/resources/chat/completions/completions.ts

Lines changed: 47 additions & 45 deletions
@@ -273,25 +273,25 @@ export interface ChatCompletion {
   object: 'chat.completion';
 
   /**
-   * Specifies the latency tier to use for processing the request. This parameter is
-   * relevant for customers subscribed to the scale tier service:
+   * Specifies the processing type used for serving the request.
    *
-   * - If set to 'auto', and the Project is Scale tier enabled, the system will
-   *   utilize scale tier credits until they are exhausted.
-   * - If set to 'auto', and the Project is not Scale tier enabled, the request will
-   *   be processed using the default service tier with a lower uptime SLA and no
-   *   latency guarantee.
-   * - If set to 'default', the request will be processed using the default service
-   *   tier with a lower uptime SLA and no latency guarantee.
-   * - If set to 'flex', the request will be processed with the Flex Processing
-   *   service tier.
-   *   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
+   * - If set to 'auto', then the request will be processed with the service tier
+   *   configured in the Project settings. Unless otherwise configured, the Project
+   *   will use 'default'.
+   * - If set to 'default', then the request will be processed with the standard
+   *   pricing and performance for the selected model.
+   * - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
+   *   'priority', then the request will be processed with the corresponding service
+   *   tier. [Contact sales](https://openai.com/contact-sales) to learn more about
+   *   Priority processing.
    * - When not set, the default behavior is 'auto'.
    *
-   * When this parameter is set, the response body will include the `service_tier`
-   * utilized.
+   * When the `service_tier` parameter is set, the response body will include the
+   * `service_tier` value based on the processing mode actually used to serve the
+   * request. This response value may be different from the value set in the
+   * parameter.
    */
-  service_tier?: 'auto' | 'default' | 'flex' | 'scale' | null;
+  service_tier?: 'auto' | 'default' | 'flex' | 'scale' | 'priority' | null;
 
   /**
    * This fingerprint represents the backend configuration that the model runs with.
@@ -524,25 +524,25 @@ export interface ChatCompletionChunk {
   object: 'chat.completion.chunk';
 
   /**
-   * Specifies the latency tier to use for processing the request. This parameter is
-   * relevant for customers subscribed to the scale tier service:
+   * Specifies the processing type used for serving the request.
    *
-   * - If set to 'auto', and the Project is Scale tier enabled, the system will
-   *   utilize scale tier credits until they are exhausted.
-   * - If set to 'auto', and the Project is not Scale tier enabled, the request will
-   *   be processed using the default service tier with a lower uptime SLA and no
-   *   latency guarantee.
-   * - If set to 'default', the request will be processed using the default service
-   *   tier with a lower uptime SLA and no latency guarantee.
-   * - If set to 'flex', the request will be processed with the Flex Processing
-   *   service tier.
-   *   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
+   * - If set to 'auto', then the request will be processed with the service tier
+   *   configured in the Project settings. Unless otherwise configured, the Project
+   *   will use 'default'.
+   * - If set to 'default', then the request will be processed with the standard
+   *   pricing and performance for the selected model.
+   * - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
+   *   'priority', then the request will be processed with the corresponding service
+   *   tier. [Contact sales](https://openai.com/contact-sales) to learn more about
+   *   Priority processing.
    * - When not set, the default behavior is 'auto'.
    *
-   * When this parameter is set, the response body will include the `service_tier`
-   * utilized.
+   * When the `service_tier` parameter is set, the response body will include the
+   * `service_tier` value based on the processing mode actually used to serve the
+   * request. This response value may be different from the value set in the
+   * parameter.
    */
-  service_tier?: 'auto' | 'default' | 'flex' | 'scale' | null;
+  service_tier?: 'auto' | 'default' | 'flex' | 'scale' | 'priority' | null;
 
   /**
    * This fingerprint represents the backend configuration that the model runs with.
@@ -1446,25 +1446,25 @@ export interface ChatCompletionCreateParamsBase {
   seed?: number | null;
 
   /**
-   * Specifies the latency tier to use for processing the request. This parameter is
-   * relevant for customers subscribed to the scale tier service:
+   * Specifies the processing type used for serving the request.
    *
-   * - If set to 'auto', and the Project is Scale tier enabled, the system will
-   *   utilize scale tier credits until they are exhausted.
-   * - If set to 'auto', and the Project is not Scale tier enabled, the request will
-   *   be processed using the default service tier with a lower uptime SLA and no
-   *   latency guarantee.
-   * - If set to 'default', the request will be processed using the default service
-   *   tier with a lower uptime SLA and no latency guarantee.
-   * - If set to 'flex', the request will be processed with the Flex Processing
-   *   service tier.
-   *   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
+   * - If set to 'auto', then the request will be processed with the service tier
+   *   configured in the Project settings. Unless otherwise configured, the Project
+   *   will use 'default'.
+   * - If set to 'default', then the request will be processed with the standard
+   *   pricing and performance for the selected model.
+   * - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
+   *   'priority', then the request will be processed with the corresponding service
+   *   tier. [Contact sales](https://openai.com/contact-sales) to learn more about
+   *   Priority processing.
    * - When not set, the default behavior is 'auto'.
    *
-   * When this parameter is set, the response body will include the `service_tier`
-   * utilized.
+   * When the `service_tier` parameter is set, the response body will include the
+   * `service_tier` value based on the processing mode actually used to serve the
+   * request. This response value may be different from the value set in the
+   * parameter.
    */
-  service_tier?: 'auto' | 'default' | 'flex' | 'scale' | null;
+  service_tier?: 'auto' | 'default' | 'flex' | 'scale' | 'priority' | null;
 
   /**
    * Not supported with latest reasoning models `o3` and `o4-mini`.
@@ -1478,6 +1478,8 @@ ChatCompletionCreateParamsBase
    * Whether or not to store the output of this chat completion request for use in
    * our [model distillation](https://platform.openai.com/docs/guides/distillation)
    * or [evals](https://platform.openai.com/docs/guides/evals) products.
+   *
+   * Supports text and image inputs. Note: image inputs over 10MB will be dropped.
    */
   store?: boolean | null;
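
A short sketch of a request opting into the new tier; the model name is illustrative, and per the updated docs the `service_tier` echoed in the response reflects the tier actually used, which may differ from the requested value:

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o', // illustrative model name
    messages: [{ role: 'user', content: 'Say hello.' }],
    service_tier: 'priority', // newly accepted value in this commit
  });

  // The response reports the tier actually used to serve the request,
  // which may differ from the requested value.
  console.log(completion.service_tier);
}

main();
```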
