@@ -340,7 +340,7 @@ def create_and_run(
 
           response_format: Specifies the format that the model must output. Compatible with
               [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-              all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+              all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 
               Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
               message the model generates is valid JSON.
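The JSON-mode setting described in this hunk is just a small dict on the request. A minimal sketch of building the payload — the assistant ID is a hypothetical placeholder, and no API call is made:

```python
# Illustrative payload construction only; nothing is sent to the API,
# and "asst_abc123" is a made-up placeholder ID.
json_mode = {"type": "json_object"}  # forces the model to emit valid JSON

params = {
    "assistant_id": "asst_abc123",
    # JSON mode is documented for GPT-4 Turbo and GPT-3.5 Turbo models
    # since gpt-3.5-turbo-1106.
    "model": "gpt-3.5-turbo-1106",
    "response_format": json_mode,
}
print(params["response_format"])  # {'type': 'json_object'}
```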
@@ -366,7 +366,7 @@ def create_and_run(
           tool_choice: Controls which (if any) tool is called by the model. `none` means the model will
               not call any tools and instead generates a message. `auto` is the default value
               and means the model can pick between generating a message or calling a tool.
-              Specifying a particular tool like `{"type": "TOOL_TYPE"}` or
+              Specifying a particular tool like `{"type": "file_search"}` or
               `{"type": "function", "function": {"name": "my_function"}}` forces the model to
               call that tool.
 
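The forms of `tool_choice` this hunk documents can be sketched as plain values (payload shapes only; nothing is sent to the API):

```python
# The documented forms of `tool_choice`, as payload shapes only:
choice_none = "none"  # the model will not call any tools
choice_auto = "auto"  # default: model may answer or call a tool
choice_forced_type = {"type": "file_search"}  # force a specific tool type
choice_forced_fn = {  # force one named function
    "type": "function",
    "function": {"name": "my_function"},
}
print(choice_forced_fn["function"]["name"])  # my_function
```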
@@ -382,6 +382,11 @@ def create_and_run(
               model considers the results of the tokens with top_p probability mass. So 0.1
               means only the tokens comprising the top 10% probability mass are considered.
 
+              We generally recommend altering this or temperature but not both.
+
+          truncation_strategy: Controls how a thread will be truncated prior to the run. Use this to
+              control the initial context window of the run.
+
           extra_headers: Send extra headers
 
           extra_query: Add additional query parameters to the request
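As a sketch of the shape the new `truncation_strategy` parameter takes — the `auto` and `last_messages` variants below are my recollection of the Assistants API and should be treated as an assumption, since the diff itself does not show them:

```python
# Assumed shapes for `truncation_strategy` (not shown in this diff):
truncate_auto = {"type": "auto"}  # let the API decide what to drop
truncate_last = {  # keep only the N most recent messages in context
    "type": "last_messages",
    "last_messages": 10,
}
```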
@@ -481,7 +486,7 @@ def create_and_run(
 
           response_format: Specifies the format that the model must output. Compatible with
               [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-              all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+              all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 
               Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
               message the model generates is valid JSON.
@@ -503,7 +508,7 @@ def create_and_run(
           tool_choice: Controls which (if any) tool is called by the model. `none` means the model will
               not call any tools and instead generates a message. `auto` is the default value
               and means the model can pick between generating a message or calling a tool.
-              Specifying a particular tool like `{"type": "TOOL_TYPE"}` or
+              Specifying a particular tool like `{"type": "file_search"}` or
               `{"type": "function", "function": {"name": "my_function"}}` forces the model to
               call that tool.
 
@@ -519,6 +524,11 @@ def create_and_run(
               model considers the results of the tokens with top_p probability mass. So 0.1
               means only the tokens comprising the top 10% probability mass are considered.
 
+              We generally recommend altering this or temperature but not both.
+
+          truncation_strategy: Controls how a thread will be truncated prior to the run. Use this to
+              control the initial context window of the run.
+
           extra_headers: Send extra headers
 
           extra_query: Add additional query parameters to the request
@@ -618,7 +628,7 @@ def create_and_run(
 
           response_format: Specifies the format that the model must output. Compatible with
               [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-              all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+              all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 
               Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
               message the model generates is valid JSON.
@@ -640,7 +650,7 @@ def create_and_run(
           tool_choice: Controls which (if any) tool is called by the model. `none` means the model will
               not call any tools and instead generates a message. `auto` is the default value
               and means the model can pick between generating a message or calling a tool.
-              Specifying a particular tool like `{"type": "TOOL_TYPE"}` or
+              Specifying a particular tool like `{"type": "file_search"}` or
               `{"type": "function", "function": {"name": "my_function"}}` forces the model to
               call that tool.
 
@@ -656,6 +666,11 @@ def create_and_run(
               model considers the results of the tokens with top_p probability mass. So 0.1
               means only the tokens comprising the top 10% probability mass are considered.
 
+              We generally recommend altering this or temperature but not both.
+
+          truncation_strategy: Controls how a thread will be truncated prior to the run. Use this to
+              control the initial context window of the run.
+
           extra_headers: Send extra headers
 
           extra_query: Add additional query parameters to the request
@@ -1296,7 +1311,7 @@ async def create_and_run(
 
           response_format: Specifies the format that the model must output. Compatible with
               [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-              all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+              all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 
               Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
               message the model generates is valid JSON.
@@ -1322,7 +1337,7 @@ async def create_and_run(
           tool_choice: Controls which (if any) tool is called by the model. `none` means the model will
               not call any tools and instead generates a message. `auto` is the default value
               and means the model can pick between generating a message or calling a tool.
-              Specifying a particular tool like `{"type": "TOOL_TYPE"}` or
+              Specifying a particular tool like `{"type": "file_search"}` or
               `{"type": "function", "function": {"name": "my_function"}}` forces the model to
               call that tool.
 
@@ -1338,6 +1353,11 @@ async def create_and_run(
               model considers the results of the tokens with top_p probability mass. So 0.1
               means only the tokens comprising the top 10% probability mass are considered.
 
+              We generally recommend altering this or temperature but not both.
+
+          truncation_strategy: Controls how a thread will be truncated prior to the run. Use this to
+              control the initial context window of the run.
+
           extra_headers: Send extra headers
 
           extra_query: Add additional query parameters to the request
@@ -1437,7 +1457,7 @@ async def create_and_run(
 
           response_format: Specifies the format that the model must output. Compatible with
               [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-              all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+              all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 
               Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
               message the model generates is valid JSON.
@@ -1459,7 +1479,7 @@ async def create_and_run(
           tool_choice: Controls which (if any) tool is called by the model. `none` means the model will
               not call any tools and instead generates a message. `auto` is the default value
               and means the model can pick between generating a message or calling a tool.
-              Specifying a particular tool like `{"type": "TOOL_TYPE"}` or
+              Specifying a particular tool like `{"type": "file_search"}` or
               `{"type": "function", "function": {"name": "my_function"}}` forces the model to
               call that tool.
 
@@ -1475,6 +1495,11 @@ async def create_and_run(
               model considers the results of the tokens with top_p probability mass. So 0.1
               means only the tokens comprising the top 10% probability mass are considered.
 
+              We generally recommend altering this or temperature but not both.
+
+          truncation_strategy: Controls how a thread will be truncated prior to the run. Use this to
+              control the initial context window of the run.
+
           extra_headers: Send extra headers
 
           extra_query: Add additional query parameters to the request
@@ -1574,7 +1599,7 @@ async def create_and_run(
 
           response_format: Specifies the format that the model must output. Compatible with
               [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-              all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+              all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 
               Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
               message the model generates is valid JSON.
@@ -1596,7 +1621,7 @@ async def create_and_run(
           tool_choice: Controls which (if any) tool is called by the model. `none` means the model will
               not call any tools and instead generates a message. `auto` is the default value
               and means the model can pick between generating a message or calling a tool.
-              Specifying a particular tool like `{"type": "TOOL_TYPE"}` or
+              Specifying a particular tool like `{"type": "file_search"}` or
               `{"type": "function", "function": {"name": "my_function"}}` forces the model to
               call that tool.
 
@@ -1612,6 +1637,11 @@ async def create_and_run(
               model considers the results of the tokens with top_p probability mass. So 0.1
               means only the tokens comprising the top 10% probability mass are considered.
 
+              We generally recommend altering this or temperature but not both.
+
+          truncation_strategy: Controls how a thread will be truncated prior to the run. Use this to
+              control the initial context window of the run.
+
           extra_headers: Send extra headers
 
           extra_query: Add additional query parameters to the request