NVIDIA/TensorRT-LLM
TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.
Status: Active
Stars: 13.2k
Forks: 2.2k
Contributors: 354
Language: Python
Score updated Mar 26, 2026