Nvidia released a tool today that’s designed to help developers and data scientists build and test machine learning systems on their personal computers before moving to production on a more powerful machine.
The Nvidia GPU Cloud provides software containers designed to give developers the fastest execution environment for training machine learning systems on the chipmaker’s silicon. Those containers were already available for use with the machine learning-oriented DGX-1 and DGX Station computers, along with cloud instances powered by Nvidia Volta chips that run on Amazon Web Services.
But now, customers can use them on consumer hardware — in this case, Nvidia’s Titan series of GPUs. Those high-end consumer chips won’t provide as much firepower as a massive, machine learning-oriented machine, but they’re less costly and more readily available.
Because the Nvidia GPU Cloud software is all kept inside software containers, it’s possible for developers to take the systems that they’ve trained on a personal machine and more easily deploy them on one of Nvidia’s larger-scale AI machines, or in the cloud.
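In practice, that workflow looks like pulling a prebuilt framework container from Nvidia’s registry and running it locally with GPU access — a minimal sketch, assuming the era’s nvidia-docker tooling is installed and using a hypothetical TensorFlow image tag from the `nvcr.io` registry (the exact tag depends on the release you pull):

```shell
# Log in to the Nvidia GPU Cloud registry (requires a free NGC API key)
docker login nvcr.io

# Pull a prebuilt, GPU-optimized framework container
# (image tag shown here is illustrative; check the NGC catalog for current tags)
docker pull nvcr.io/nvidia/tensorflow:17.12

# Run it on the local Titan GPU via the nvidia-docker runtime,
# mounting a local project directory into the container
nvidia-docker run -it --rm -v $HOME/my-project:/workspace \
    nvcr.io/nvidia/tensorflow:17.12
```

Because the same container image runs unchanged on a DGX system or a Volta-powered cloud instance, the training code inside it needs no modification when moving from a desktop to production hardware.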
All told, it’s a move by the chipmaker that’s supposed to help people get off the ground with machine learning systems more easily, and iterate faster on systems that could help them solve business problems and drive the field of AI forward.
While large-scale tech giants have no trouble throwing dozens, if not hundreds, of GPUs at a single machine learning problem, developers and researchers will frequently begin testing their systems on smaller personal machines without as much firepower. This announcement should give them a speed boost compared to what they were capable of previously.
The news comes as part of the Conference on Neural Information Processing Systems (NIPS), which is taking place this week in Long Beach, California. That show brings together some of the brightest minds in AI to share key developments from their research.