272 | 272 |   },
273 | 273 |   {
274 | 274 |    "cell_type": "markdown",
275 |     | -  "id": "013b6300",
    | 275 | +  "id": "20b60ccb",
276 | 276 |    "metadata": {},
277 | 277 |    "source": [
278 |     | -   "You can also pass other `ClientOptions` parameters accepted by the official SDK here.\n",
    | 278 | +   "The `configuration` field also accepts other `ClientOptions` parameters accepted by the official SDK.\n",
279 | 279 |    "\n",
280 | 280 |    "If you are hosting on Azure OpenAI, see the [dedicated page instead](/docs/integrations/chat/azure).\n",
281 | 281 |    "\n",
    | 282 | +   "## Custom headers\n",
    | 283 | +   "\n",
    | 284 | +   "You can specify custom headers in the same `configuration` field:"
    | 285 | +  ]
    | 286 | + },
    | 287 | + {
    | 288 | +  "cell_type": "code",
    | 289 | +  "execution_count": null,
    | 290 | +  "id": "cd612609",
    | 291 | +  "metadata": {},
    | 292 | +  "outputs": [],
    | 293 | +  "source": [
    | 294 | +   "import { ChatOpenAI } from \"@langchain/openai\";\n",
    | 295 | +   "\n",
    | 296 | +   "const llmWithCustomHeaders = new ChatOpenAI({\n",
    | 297 | +   "  temperature: 0.9,\n",
    | 298 | +   "  configuration: {\n",
    | 299 | +   "    defaultHeaders: {\n",
    | 300 | +   "      \"Authorization\": `Bearer SOME_CUSTOM_VALUE`,\n",
    | 301 | +   "    },\n",
    | 302 | +   "  },\n",
    | 303 | +   "});\n",
    | 304 | +   "\n",
    | 305 | +   "await llmWithCustomHeaders.invoke(\"Hi there!\");"
    | 306 | +  ]
    | 307 | + },
    | 308 | + {
    | 309 | +  "cell_type": "markdown",
    | 310 | +  "id": "013b6300",
    | 311 | +  "metadata": {},
    | 312 | +  "source": [
282 | 313 |    "## Calling fine-tuned models\n",
283 | 314 |    "\n",
284 | 315 |    "You can call fine-tuned OpenAI models by passing in your corresponding `modelName` parameter.\n",
|
|
411 | 442 |   },
412 | 443 |   {
413 | 444 |    "cell_type": "markdown",
414 |     | -  "id": "bc5ecebd",
    | 445 | +  "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
415 | 446 |    "metadata": {},
416 | 447 |    "source": [
417 | 448 |     "## Tool calling\n",
420 | 451 |     "\n",
421 | 452 |     "- [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel/)\n",
422 | 453 |     "- [How to: force a tool call](/docs/how_to/tool_choice/)\n",
423 |     | -   "- [How to: bind model-specific tool formats to a model](/docs/how_to/tool_calling#binding-model-specific-formats-advanced)."
424 |     | -  ]
425 |     | - },
426 |     | - {
427 |     | -  "cell_type": "markdown",
428 |     | -  "id": "3392390e",
429 |     | -  "metadata": {},
430 |     | -  "source": [
431 |     | -   "### ``strict: true``\n",
432 |     | -   "\n",
433 |     | -   "```{=mdx}\n",
    | 454 | +   "- [How to: bind model-specific tool formats to a model](/docs/how_to/tool_calling#binding-model-specific-formats-advanced).\n",
434 | 455 |     "\n",
435 |     | -   ":::info Requires ``@langchain/openai >= 0.2.6``\n",
436 |     | -   "\n",
437 |     | -   "As of Aug 6, 2024, OpenAI supports a `strict` argument when calling tools that will enforce that the tool argument schema is respected by the model. See more here: https://platform.openai.com/docs/guides/function-calling\n",
438 |     | -   "\n",
439 |     | -   "**Note**: If ``strict: true`` the tool definition will also be validated, and a subset of JSON schema are accepted. Crucially, schema cannot have optional args (those with default values). Read the full docs on what types of schema are supported here: https://platform.openai.com/docs/guides/structured-outputs/supported-schemas. \n",
440 |     | -   ":::\n",
441 |     | -   "\n",
442 |     | -   "\n",
443 |     | -   "```"
444 |     | -  ]
445 |     | - },
446 |     | - {
447 |     | -  "cell_type": "code",
448 |     | -  "execution_count": 1,
449 |     | -  "id": "90f0d465",
450 |     | -  "metadata": {},
451 |     | -  "outputs": [
452 |     | -   {
453 |     | -    "name": "stdout",
454 |     | -    "output_type": "stream",
455 |     | -    "text": [
456 |     | -     "[\n",
457 |     | -     "  {\n",
458 |     | -     "    name: 'get_current_weather',\n",
459 |     | -     "    args: { location: 'Hanoi' },\n",
460 |     | -     "    type: 'tool_call',\n",
461 |     | -     "    id: 'call_aB85ybkLCoccpzqHquuJGH3d'\n",
462 |     | -     "  }\n",
463 |     | -     "]\n"
464 |     | -    ]
465 |     | -   }
466 |     | -  ],
467 |     | -  "source": [
468 |     | -   "import { ChatOpenAI } from \"@langchain/openai\";\n",
469 |     | -   "import { tool } from \"@langchain/core/tools\";\n",
470 |     | -   "import { z } from \"zod\";\n",
471 |     | -   "\n",
472 |     | -   "const weatherTool = tool((_) => \"no-op\", {\n",
473 |     | -   "  name: \"get_current_weather\",\n",
474 |     | -   "  description: \"Get the current weather\",\n",
475 |     | -   "  schema: z.object({\n",
476 |     | -   "    location: z.string(),\n",
477 |     | -   "  }),\n",
478 |     | -   "})\n",
479 |     | -   "\n",
480 |     | -   "const llmWithStrictTrue = new ChatOpenAI({\n",
481 |     | -   "  model: \"gpt-4o\",\n",
482 |     | -   "}).bindTools([weatherTool], {\n",
483 |     | -   "  strict: true,\n",
484 |     | -   "  tool_choice: weatherTool.name,\n",
485 |     | -   "});\n",
486 |     | -   "\n",
487 |     | -   "// Although the question is not about the weather, it will call the tool with the correct arguments\n",
488 |     | -   "// because we passed `tool_choice` and `strict: true`.\n",
489 |     | -   "const strictTrueResult = await llmWithStrictTrue.invoke(\"What is 127862 times 12898 divided by 2?\");\n",
490 |     | -   "\n",
491 |     | -   "console.dir(strictTrueResult.tool_calls, { depth: null });"
492 |     | -  ]
493 |     | - },
494 |     | - {
495 |     | -  "cell_type": "markdown",
496 |     | -  "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
497 |     | -  "metadata": {},
498 |     | -  "source": [
499 | 456 |     "## API reference\n",
500 | 457 |     "\n",
501 | 458 |     "For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html"
|
|