docs/index.md (+11 -2)
@@ -58,9 +58,18 @@ Triton Inference Server is an open source inference serving software that stream
<iframe width="560" height="315" src="https://www.youtube.com/embed/NQDtfSi5QF4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

-# Triton
+# Triton Inference Server

-Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center,edge and embedded devices on NVIDIA GPUs, x86 and ARM CPU, or AWS Inferentia. Triton delivers optimized performance for many query types, including real time, batched, ensembles and audio/video streaming.
+Triton Inference Server enables teams to deploy any AI model from multiple deep
+learning and machine learning frameworks, including TensorRT, TensorFlow,
+PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference
+across cloud, data center, edge and embedded devices on NVIDIA GPUs, x86 and ARM
+CPU, or AWS Inferentia. Triton Inference Server delivers optimized performance
+for many query types, including real time, batched, ensembles and audio/video
+streaming. Triton Inference Server is part of
+[NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
+a software platform that accelerates the data science pipeline and streamlines
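
To make the deployment flow described above concrete, here is a minimal client-side sketch using the `tritonclient` Python package against a locally running server. The model name `simple_model`, the tensor names `INPUT0`/`OUTPUT0`, and the input shape are hypothetical placeholders, not part of this diff.

```python
# Minimal sketch: send one HTTP inference request to a running Triton server.
# Assumes tritonclient[http] is installed and a server is listening on
# localhost:8000, serving a hypothetical model "simple_model" whose single
# FP32 input "INPUT0" has shape [1, 4] and whose output is named "OUTPUT0".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request input and attach the request data.
data = np.random.rand(1, 4).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Run inference and read back the output tensor as a NumPy array.
result = client.infer(model_name="simple_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```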

docs/user_guide/faq.md (+41)
@@ -162,3 +162,44 @@ looking at the gdb trace for the segfault.
When opening a GitHub issue for the segfault with Triton, please include
the backtrace to better help us resolve the problem.
+
+## What are the benefits of using [Triton Inference Server](https://developer.nvidia.com/triton-inference-server) as part of the [NVIDIA AI Enterprise Software Suite](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/)?
+
+NVIDIA AI Enterprise enables enterprises to implement full AI workflows by
+delivering an entire end-to-end AI platform. It offers four key benefits:
+
+### Enterprise-Grade Support, Security & API Stability:
+
+Business-critical AI projects stay on track with NVIDIA Enterprise Support,
+available globally to assist both IT teams with deploying and managing the
+lifecycle of AI applications and the developer teams with building AI
+applications. Support includes maintenance updates, dependable SLAs and
+response times. Regular security reviews and priority notifications mitigate
+the potential risk of unmanaged open source software and ensure compliance
+with corporate standards. Finally, long-term support and regression testing
+ensure API stability between releases.
+
+### Speed Time to Production with AI Workflows & Pretrained Models:
+To reduce the complexity of developing common AI applications, NVIDIA AI
+Enterprise includes
+[AI workflows](https://www.nvidia.com/en-us/launchpad/ai/workflows/), which are
+reference applications for specific business outcomes such as Intelligent
+Virtual Assistants and Digital Fingerprinting for real-time cybersecurity threat
+detection. AI workflow reference applications may include
+[AI frameworks](https://docs.nvidia.com/deeplearning/frameworks/index.html) and