At the recent GTC technology conference, Nvidia announced NIM (Nvidia Inference Microservices), a set of microservices that is part of the NVIDIA AI Enterprise software platform. NIM is designed to simplify and accelerate the rollout of artificial intelligence technologies into production, allowing developers to significantly reduce the time it takes to deploy and run AI models.

As Nvidia emphasizes, building such AI software usually takes a long time, often several months, and that is assuming the company already has qualified AI specialists on staff. NIM aims to make this task easier: it offers an ecosystem of pre-built AI containers, powered by Nvidia, that bundle models with a robust set of microservices for serving them end to end.
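In practice, each such container exposes a standard inference endpoint that an application can call over HTTP. The sketch below illustrates the idea, assuming a container is already running locally on port 8000 and serves an OpenAI-compatible chat completions API; the URL and model name are illustrative placeholders, not values confirmed by the announcement.

```python
import requests

# Hypothetical local NIM endpoint; adjust the host and port to your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    # Placeholder model identifier; the real name depends on the container.
    "model": "meta/llama3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize what a microservice is."}
    ],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()

# OpenAI-compatible responses nest the reply under choices[0].message.
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface mirrors the widely used chat completions schema, existing client code can often be pointed at a container like this by changing only the base URL.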

The NIM platform supports models from Nvidia, AI21 Labs, Adept, Cohere, Getty Images, and Shutterstock, as well as open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, and Stability AI. Additionally, Nvidia is working with Amazon, Google, and Microsoft to integrate NIM microservices into services such as Amazon SageMaker, Google Kubernetes Engine, and Azure AI. The list of supported platforms is expected to expand over time.

Manuvir Das, head of enterprise computing at Nvidia, noted that Nvidia GPUs are the optimal solution for running AI models, especially now that the NIM platform gives developers everything they need to build applications effectively. According to him, the company takes care of the technology stack, freeing creators to focus on development.

Nvidia enables faster inference through its Triton Inference Server, TensorRT, and TensorRT-LLM. The platform also provides access to microservices such as Riva (for speech and translation models), cuOpt (for route optimization), and Earth-2 (for climate and weather modeling). The company plans to expand this functionality, including adding the Nvidia RAG LLM Operator to simplify the creation of chatbots that process a user's own data.
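Retrieval-augmented generation, the pattern that operator targets, follows a simple recipe: retrieve the documents most relevant to a user's question and pass them to the model as context. Below is a minimal, generic sketch of that recipe; the embed() and generate() helpers are hypothetical stand-ins for an embedding service and an LLM endpoint, not actual Nvidia APIs.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding call; a real system would query an
    # embedding microservice. Here: a deterministic toy vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)

def generate(prompt: str) -> str:
    # Hypothetical LLM call; stands in for a chat-completions request.
    return f"[model answer based on a prompt of {len(prompt)} characters]"

documents = [
    "NIM packages models as containers with standard APIs.",
    "TensorRT-LLM optimizes transformer inference on Nvidia GPUs.",
    "cuOpt solves large-scale route-optimization problems.",
]

def rag_answer(question: str, k: int = 2) -> str:
    # Rank documents by cosine similarity to the question embedding.
    q = embed(question)
    doc_vecs = [embed(doc) for doc in documents]
    scores = [
        float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
        for d in doc_vecs
    ]
    top = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:k]

    # Stuff the retrieved passages into the prompt as context.
    context = "\n".join(documents[i] for i in top)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(rag_answer("How does NIM deploy models?"))
```

The appeal of packaging this as an operator is that the retrieval, prompting, and serving pieces are wired together for the developer rather than assembled by hand as above.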