Since its founding in 1993, NVIDIA has transformed the world of computer graphics. The company introduced the first GPU in 1999, sparking the growth of the PC gaming market, redefining modern computer graphics, and laying the foundation for modern parallel computing.
The company is no stranger to innovation, and over the years it has continued to reimagine what’s possible in computing. Seven years ago, a new opportunity came to the fore with Alex Krizhevsky’s 2012 submission to Stanford’s ImageNet challenge and the creation of AlexNet, the first deep neural network powered by GPUs. The experiment demonstrated a new way to approach AI that far surpassed past successes built on CPU-based solutions. It also showed that the GPU’s strength at parallelizing workloads such as 3D rendering was equally well suited to the highly parallel computations inside deep neural networks.
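As a rough illustration of that match (a sketch for this article, not drawn from NVIDIA’s materials), the snippet below uses PyTorch to run the same large matrix multiplication, the kind of operation that dominates both 3D rendering and neural-network layers, on the CPU and, when a CUDA-capable GPU is present, on the GPU:

```python
# Illustrative sketch: the same dense matrix math that underpins 3D rendering
# and neural-network layers parallelizes across thousands of GPU cores.
# Assumes PyTorch is installed; the GPU path runs only if CUDA is available.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.perf_counter()
a @ b                                  # matrix multiply on the CPU
cpu_seconds = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()  # copy operands to GPU memory
    torch.cuda.synchronize()           # wait for the transfers to finish
    start = time.perf_counter()
    a_gpu @ b_gpu                      # matrix multiply on the GPU
    torch.cuda.synchronize()           # wait for the kernel to finish
    gpu_seconds = time.perf_counter() - start
    print(f"CPU: {cpu_seconds:.3f}s  GPU: {gpu_seconds:.3f}s")
else:
    print(f"CPU: {cpu_seconds:.3f}s  (no CUDA-capable GPU detected)")
```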
“That was the year that the idea of deep learning, as opposed to more traditional kinds of machine learning, really took off,” says Dave Salvator, Senior Product Marketing Manager, Accelerated Compute, NVIDIA. “The pace of innovation and growth since then has been simply amazing, and it all started with our Compute Unified Device Architecture (CUDA) software.”
Creating an AI Acceleration Platform
NVIDIA’s CUDA software had been around for over a decade, but now the company had a new opportunity to help shape the future of AI computing. NVIDIA realized that while AI as a discipline had existed for decades, its GPU technology could help usher in a new age of AI, one where its transformative potential is no longer hype but reality, with the power to reshape entire industries.
As NVIDIA began to invest in deepening the AI capabilities of its products, it recognized that AI represented a new approach to software development and that it would need to provide not only hardware but also software, process, and infrastructure guidance, along with support from the broader AI ecosystem.
Alex Tsado, NVIDIA Go-to-Market Lead for the Microsoft account, explains, “We made a commitment to innovate at both a hardware and software level and take a platform approach to our technology. To us, building an accelerator is just table stakes—unless you have a great software stack, that accelerator is not going to accomplish much by itself.”
For this reason, as NVIDIA advanced its products, it committed to accelerating all major frameworks, from TensorFlow to PyTorch and beyond, making its GPUs far more efficient at processing AI workloads. That effort has included accelerating both training and inference and containerizing those software optimizations in the NVIDIA GPU Cloud (NGC) software registry, giving developers a free, easy-to-manage, end-to-end stack for getting results from AI quickly with little to no tuning.
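To make that concrete, here is a minimal, illustrative sketch, not taken from NVIDIA’s own stack, of how a developer might run one GPU-accelerated training step and one inference step from a framework such as PyTorch (assumed to be installed, for example from an NGC container):

```python
# Minimal sketch: a single training step and an inference step in PyTorch.
# The heavy matrix math runs on the GPU when one is available and falls back
# to the CPU otherwise. Assumes the `torch` package is installed.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy two-layer network standing in for a real model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training step: inputs, targets, and model all live on the same device.
inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()

# Inference step: gradients are disabled, mirroring a deployment-style call.
with torch.no_grad():
    predictions = model(inputs).argmax(dim=1)

print(f"device={device}, loss={loss.item():.4f}")
```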
Today, the company has more software engineers than hardware engineers on board, a testament to its commitment to offering clients a holistic platform solution for AI.