
Commit 37cad1f

fixes

1 parent: 764f298

File tree

2 files changed, +2 -2 lines changed

README.md (+1 -1)

@@ -46,7 +46,7 @@ devices on NVIDIA GPUs, x86 and ARM CPU, or AWS Inferentia. Triton Inference
 Server delivers optimized performance for many query types, including real time,
 batched, ensembles and audio/video streaming. Triton inference Server is part of
 [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
-an software platform that accelerates the data science pipeline and streamlines
+a software platform that accelerates the data science pipeline and streamlines
 the development and deployment of production AI.
 
 Major features include:

docs/index.md (+1 -1)

@@ -68,7 +68,7 @@ CPU, or AWS Inferentia. Triton Inference Server delivers optimized performance
 for many query types, including real time, batched, ensembles and audio/video
 streaming. Triton inference Server is part of
 [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
-an software platform that accelerates the data science pipeline and streamlines
+a software platform that accelerates the data science pipeline and streamlines
 the development and deployment of production AI.
 
 Major features include:
