June 24, 2021
This article is a high-level introduction to an efficient workflow for optimizing the runtime performance of machine learning systems running on the GPU. Using traces from Nsight Systems to show real production scenarios, I introduce a set of common utilization patterns and outline effective approaches to improve performance.
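As a taste of the workflow, a common first step is marking up the hot loop with NVTX ranges so each stage appears as a named span in the Nsight Systems timeline. The sketch below is illustrative only: the model and the range names are placeholders, not code from the article.

```python
# Minimal sketch: NVTX annotations so work shows up as named spans in an
# Nsight Systems trace (the model and range names here are placeholders).
import torch

model = torch.nn.Linear(1024, 1024).cuda().eval()
batch = torch.randn(64, 1024, device="cuda")

with torch.no_grad():
    for step in range(10):
        torch.cuda.nvtx.range_push(f"inference_step_{step}")  # open a named range
        out = model(batch)
        torch.cuda.nvtx.range_pop()                            # close it
torch.cuda.synchronize()  # make sure all GPU work is captured before exiting
```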
October 17, 2020
In this article we take the performance of the SSD300 model even further, leaving Python behind and moving towards true production deployment technologies: TorchScript, TensorRT and DeepStream. We also identify and understand several limitations in Nvidia’s DeepStream framework, and then remove them by modifying how the nvinfer element works.
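The usual entry point to this pipeline is exporting the model to TorchScript so it can be loaded outside of Python. A minimal sketch of that step is below; it uses a torchvision ResNet-50 purely as a stand-in for the SSD300 model discussed in the article.

```python
# Minimal sketch: export a model to TorchScript by tracing it with a
# representative input. ResNet-50 is a stand-in, not the SSD300 itself.
import torch
import torchvision

model = torchvision.models.resnet50().eval().cuda()
example = torch.randn(1, 3, 300, 300, device="cuda")  # SSD300-style 300x300 input

scripted = torch.jit.trace(model, example)  # record the ops executed on this input
scripted.save("model_trace.pt")             # loadable from C++ via libtorch
```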
September 30, 2020
Making code run fast on GPUs requires a very different approach from making code run fast on CPUs, because the hardware architecture is fundamentally different. Machine learning engineers of all kinds should care about squeezing performance from their models and hardware — not just for production purposes, but also for research and training. In research as in development, a fast iteration loop leads to faster improvement. This article is a practical deep dive into making a specific deep learning model (Nvidia’s SSD300) run fast on a powerful GPU server, but the general principles apply to all GPU programming.
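A fast iteration loop starts with measuring correctly on the GPU. The sketch below shows one common way to time throughput with CUDA events after a warm-up phase, using mixed precision; the model, batch size, and iteration counts are arbitrary placeholders, not the article's setup.

```python
# Minimal sketch: measuring GPU inference throughput with CUDA events.
# The model, batch size, and iteration counts are placeholders.
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1), torch.nn.ReLU()
).cuda().eval()
batch = torch.randn(32, 3, 300, 300, device="cuda")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

with torch.no_grad(), torch.cuda.amp.autocast():
    for _ in range(10):    # warm-up so one-time costs don't skew the measurement
        model(batch)
    start.record()
    for _ in range(100):   # timed iterations
        model(batch)
    end.record()

torch.cuda.synchronize()   # wait for the GPU before reading the timers
seconds = start.elapsed_time(end) / 1e3  # elapsed_time returns milliseconds
print(f"{batch.shape[0] * 100 / seconds:.1f} images/s")
```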