
Commit 97d93de

Add int8_float16 as possible quantization value for CTranslate2 (#2116)

Parent: e2628a2

File tree: 1 file changed, +1 −1 lines


onmt/bin/release_model.py (1 addition, 1 deletion)

@@ -15,7 +15,7 @@ def main():
             default="pytorch",
             help="The format of the released model")
     parser.add_argument("--quantization", "-q",
-                        choices=["int8", "int16", "float16"],
+                        choices=["int8", "int16", "float16", "int8_float16"],
                         default=None,
                         help="Quantization type for CT2 model.")
     opt = parser.parse_args()
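The commit simply extends the set of values accepted by the `--quantization` flag of the release script with `int8_float16` (in CTranslate2, a mode that stores weights as 8-bit integers while computing in FP16). A minimal, self-contained sketch of the updated option as it parses after this change; the rest of the parser setup in `main()` is assumed and omitted here:

```python
import argparse

# Minimal reproduction of the updated --quantization option from
# onmt/bin/release_model.py after this commit; other arguments of the
# release script are not shown.
parser = argparse.ArgumentParser(description="Release an OpenNMT-py model")
parser.add_argument("--quantization", "-q",
                    choices=["int8", "int16", "float16", "int8_float16"],
                    default=None,
                    help="Quantization type for CT2 model.")

# The new value is now accepted:
opt = parser.parse_args(["-q", "int8_float16"])
print(opt.quantization)  # -> int8_float16
```

Before this change, passing `-q int8_float16` would have made `argparse` exit with an "invalid choice" error, since the value was not in the `choices` list.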
