
Commit 781cab1

Address pre-commit error

Parent: 9eb3938

(The removed and added lines in the diffs below read identically; the change appears to be whitespace-only, consistent with the pre-commit fix named in the commit message.)
3 files changed: +41 -41 lines changed

README.md (+8 -8)

@@ -38,15 +38,15 @@ and corresponds to the 23.07 container release on
 
 ----
 Triton Inference Server is an open source inference serving software that
-streamlines AI inferencing. Triton enables teams to deploy any AI model from
-multiple deep learning and machine learning frameworks, including TensorRT,
-TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton
-Inference Server supports inference across cloud, data center, edge and embedded
-devices on NVIDIA GPUs, x86 and ARM CPU, or AWS Inferentia. Triton Inference
-Server delivers optimized performance for many query types, including real time,
+streamlines AI inferencing. Triton enables teams to deploy any AI model from
+multiple deep learning and machine learning frameworks, including TensorRT,
+TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton
+Inference Server supports inference across cloud, data center, edge and embedded
+devices on NVIDIA GPUs, x86 and ARM CPU, or AWS Inferentia. Triton Inference
+Server delivers optimized performance for many query types, including real time,
 batched, ensembles and audio/video streaming. Triton Inference Server is part of
-[NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
-a software platform that accelerates the data science pipeline and streamlines
+[NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
+a software platform that accelerates the data science pipeline and streamlines
 the development and deployment of production AI.
 
 Major features include:
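The paragraph restored above describes Triton at a high level: one server, many frameworks, several query types. As a concrete illustration of the "real time" case, here is a minimal client sketch using NVIDIA's `tritonclient` Python package (not part of this commit); the server address, the model name `my_model`, and the tensor names `INPUT0`/`OUTPUT0` are illustrative placeholders that must match your model's configuration.

```python
# Minimal sketch: one synchronous, real-time inference request over HTTP.
# Assumes `pip install tritonclient[http]` and a Triton server already
# running on localhost:8000 with a model named "my_model" whose config
# declares a [1, 16] FP32 input INPUT0 and an output OUTPUT0 (all
# placeholder names, for illustration only).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request input; shape and dtype must match the model config.
input0 = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input0.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# Name the output tensor we want returned.
output0 = httpclient.InferRequestedOutput("OUTPUT0")

result = client.infer(model_name="my_model", inputs=[input0], outputs=[output0])
print(result.as_numpy("OUTPUT0"))
```

The gRPC client in `tritonclient.grpc` exposes the same `InferInput`/`InferRequestedOutput` building blocks, so batched and streaming requests start from the same client surface.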

docs/index.md (+5 -5)

@@ -60,15 +60,15 @@ Triton Inference Server is an open source inference serving software that stream
 
 # Triton Inference Server
 
-Triton Inference Server enables teams to deploy any AI model from multiple deep
-learning and machine learning frameworks, including TensorRT, TensorFlow,
+Triton Inference Server enables teams to deploy any AI model from multiple deep
+learning and machine learning frameworks, including TensorRT, TensorFlow,
 PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference
 across cloud, data center, edge and embedded devices on NVIDIA GPUs, x86 and ARM
 CPU, or AWS Inferentia. Triton Inference Server delivers optimized performance
-for many query types, including real time, batched, ensembles and audio/video
+for many query types, including real time, batched, ensembles and audio/video
 streaming. Triton Inference Server is part of
-[NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
-a software platform that accelerates the data science pipeline and streamlines
+[NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
+a software platform that accelerates the data science pipeline and streamlines
 the development and deployment of production AI.
 
 Major features include:

docs/user_guide/faq.md (+28 -28)

@@ -165,41 +165,41 @@ the backtrace to better help us resolve the problem.
 
 ## What are the benefits of using [Triton Inference Server](https://developer.nvidia.com/triton-inference-server) as part of the [NVIDIA AI Enterprise Software Suite](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/)?
 
-NVIDIA AI Enterprise enables enterprises to implement full AI workflows by
+NVIDIA AI Enterprise enables enterprises to implement full AI workflows by
 delivering an entire end-to-end AI platform. Four key benefits:
 
 ### Enterprise-Grade Support, Security & API Stability:
 
-Business-critical AI projects stay on track with NVIDIA Enterprise Support,
-available globally to assist both IT teams with deploying and managing the
-lifecycle of AI applications and the developer teams with building AI
-applications. Support includes maintenance updates, dependable SLAs and
-response times. Regular security reviews and priority notifications mitigate
-the potential risk of unmanaged open source software and ensure compliance with
-corporate standards. Finally, long-term support and regression testing ensure
+Business-critical AI projects stay on track with NVIDIA Enterprise Support,
+available globally to assist both IT teams with deploying and managing the
+lifecycle of AI applications and the developer teams with building AI
+applications. Support includes maintenance updates, dependable SLAs and
+response times. Regular security reviews and priority notifications mitigate
+the potential risk of unmanaged open source software and ensure compliance with
+corporate standards. Finally, long-term support and regression testing ensure
 API stability between releases.
 
-### Speed time to production with AI Workflows & Pretrained Models:
-To reduce the complexity of developing common AI applications, NVIDIA AI
-Enterprise includes
-[AI workflows](https://www.nvidia.com/en-us/launchpad/ai/workflows/), which are
-reference applications for specific business outcomes such as Intelligent
-Virtual Assistants and Digital Fingerprinting for real-time cybersecurity threat
-detection. AI workflow reference applications may include
-[AI frameworks](https://docs.nvidia.com/deeplearning/frameworks/index.html) and
-[pretrained models](https://developer.nvidia.com/ai-models),
-[Helm Charts](https://catalog.ngc.nvidia.com/helm-charts),
-[Jupyter Notebooks](https://developer.nvidia.com/run-jupyter-notebooks) and
+### Speed time to production with AI Workflows & Pretrained Models:
+To reduce the complexity of developing common AI applications, NVIDIA AI
+Enterprise includes
+[AI workflows](https://www.nvidia.com/en-us/launchpad/ai/workflows/), which are
+reference applications for specific business outcomes such as Intelligent
+Virtual Assistants and Digital Fingerprinting for real-time cybersecurity threat
+detection. AI workflow reference applications may include
+[AI frameworks](https://docs.nvidia.com/deeplearning/frameworks/index.html) and
+[pretrained models](https://developer.nvidia.com/ai-models),
+[Helm Charts](https://catalog.ngc.nvidia.com/helm-charts),
+[Jupyter Notebooks](https://developer.nvidia.com/run-jupyter-notebooks) and
 [documentation](https://docs.nvidia.com/ai-enterprise/index.html#overview).
 
-### Performance for Efficiency and Cost Savings:
-Using accelerated compute for AI workloads such as data processing with
-[NVIDIA RAPIDS Accelerator](https://developer.nvidia.com/rapids) for Apache
-Spark and inference with Triton Inference Server delivers better performance,
-which also improves efficiency and reduces operation and infrastructure costs,
+### Performance for Efficiency and Cost Savings:
+Using accelerated compute for AI workloads such as data processing with
+[NVIDIA RAPIDS Accelerator](https://developer.nvidia.com/rapids) for Apache
+Spark and inference with Triton Inference Server delivers better performance,
+which also improves efficiency and reduces operation and infrastructure costs,
 including savings from reduced time and energy consumption.
 
-### Optimized and Certified to Deploy Everywhere:
-Cloud, Data Center, Edge: Optimized and certified to ensure reliable performance
-whether it’s running your AI in the public cloud, virtualized data centers, or
-on DGX systems.
+### Optimized and Certified to Deploy Everywhere:
+Cloud, Data Center, Edge: Optimized and certified to ensure reliable performance
+whether it’s running your AI in the public cloud, virtualized data centers, or
+on DGX systems.
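The "Performance for Efficiency and Cost Savings" answer above pairs GPU data processing (RAPIDS Accelerator for Apache Spark) with GPU inference (Triton). For the data-processing half, a minimal PySpark sketch follows; it assumes the RAPIDS Accelerator jar is already on the Spark classpath (for example via `--jars`) and that a compatible NVIDIA GPU is available, neither of which is shown here.

```python
# Minimal sketch: enable the RAPIDS Accelerator so that supported
# SQL/DataFrame operations run on the GPU. The plugin class and config
# key below come from the RAPIDS Accelerator documentation; the jar
# itself must already be on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-accelerated-etl")                      # placeholder name
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  # RAPIDS plugin
    .config("spark.rapids.sql.enabled", "true")             # GPU SQL execution
    .getOrCreate()
)

# Ordinary DataFrame code; supported operators are offloaded transparently.
df = spark.range(0, 1_000_000).selectExpr("id", "id % 7 AS bucket")
df.groupBy("bucket").count().show()
```

No Triton-specific code change is implied here; the point of the FAQ answer is that both stages of the pipeline can run on the same accelerated infrastructure.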
