This week’s Nvidia GTC 2025 conference showcased a series of innovative product launches, including several developments in collaboration with major cloud providers like Microsoft Azure and Google Cloud.
Nvidia announced this week at the GTC event in San Jose, California, that it will bring its Blackwell Ultra GPUs and the Nvidia RTX PRO 6000 Blackwell Server Edition to Microsoft Azure.
In a statement, Ian Buck, Nvidia’s vice president of hyperscale and HPC, remarked, “Our collaboration with Azure and the launch of the Nvidia Blackwell platform signify a major advancement.”
Microsoft announced that it will introduce Blackwell Ultra GPU-based virtual machines (VMs) in late 2025, which it says will deliver outstanding performance and efficiency for the next generation of agentic and generative AI workloads.
At GTC 2025, Microsoft also announced that the Azure ND GB200 V6 series, accelerated by Nvidia GB200 NVL72 systems and Nvidia Quantum InfiniBand networking, is now generally available.
Microsoft stated that this new addition to the Azure AI Infrastructure portfolio, along with existing VMs that utilize Nvidia H200 and Nvidia H100 GPUs, underscores its dedication to optimizing infrastructure for the next generation of complex AI tasks such as planning, reasoning, and real-time adaptation.
Nvidia Microservices Are Now Part of Azure AI Foundry
Nvidia and Microsoft unveiled Nvidia Inference Microservices (NIM) as part of Azure AI Foundry, a move the companies position as a step forward for agentic AI.
Nvidia’s NIM microservices are now available on Azure AI Foundry, providing optimized containers for dozens of widely used foundation models and enabling developers to rapidly deploy generative AI applications and agents.
Highlighted features include optimized model throughput on Nvidia accelerated computing platforms, prebuilt microservices that can be deployed anywhere, and improved accuracy for specific use cases.
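In practice, NIM containers expose an OpenAI-compatible API, so a deployed microservice can be queried with standard client libraries. The sketch below is a minimal illustration assuming such a deployment; the endpoint URL, API key variable, and model identifier are placeholders, not values taken from the announcement.

```python
# Minimal sketch: querying a NIM microservice through its OpenAI-compatible
# chat-completions endpoint. The base_url, API key variable, and model name
# are placeholders -- substitute the values from your own Azure AI Foundry
# (or other) NIM deployment.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://<your-nim-endpoint>/v1",  # hypothetical deployment URL
    api_key=os.environ["NIM_API_KEY"],          # key issued for the deployment
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",         # example NIM model identifier
    messages=[
        {"role": "user", "content": "Summarize the Nvidia GTC 2025 Azure announcements."}
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)
```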
Google and Nvidia Enhance Gemma 3 and Gemini
Google and Nvidia have collaborated to optimize Gemma, Google’s family of lightweight open models, to run on Nvidia GPUs. The companies said that Google’s introduction of Gemma 3 represents a major advance for open innovation.
Nvidia has also made Gemma easier for developers to access by offering it as a highly optimized Nvidia NIM microservice.
The engineering partnership will also extend to optimizing Gemini-based workloads on Nvidia accelerated computing through Vertex AI.
Google and Nvidia are also driving innovation in robotic AI: Nvidia is partnering with Google DeepMind to create Newton, a physics engine designed to simulate robotic movements in real-world environments.
Nvidia stated that Newton is intended to help robots become more “expressive” and to assist them in “learning how to handle complex tasks with greater precision.” The physics engine is meant to help developers simulate how robots interact with the physical world.
Nvidia added that Newton will be compatible with Google DeepMind’s ecosystem of robotic development tools, including MuJoCo, DeepMind’s physics engine for simulating multi-joint robot movements. The goal of using foundation models in robotics is to sharply reduce application development time and add flexibility, with AI that can adapt readily to new tasks.
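For a sense of what a physics engine like MuJoCo does, the sketch below steps a toy two-joint arm using MuJoCo’s open-source Python bindings (installed with pip install mujoco). The model definition is purely illustrative and has no connection to Newton or to anything released at GTC.

```python
# Minimal sketch: simulating a two-joint arm with MuJoCo's Python bindings.
# The MJCF model below is a toy example for illustration only.
import mujoco

ARM_XML = """
<mujoco>
  <option timestep="0.002" gravity="0 0 -9.81"/>
  <worldbody>
    <body name="upper_link" pos="0 0 1">
      <joint name="shoulder" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.3" size="0.03"/>
      <body name="lower_link" pos="0 0 -0.3">
        <joint name="elbow" type="hinge" axis="0 1 0"/>
        <geom type="capsule" fromto="0 0 0 0 0 -0.3" size="0.03"/>
      </body>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(ARM_XML)
data = mujoco.MjData(model)

# Nudge the shoulder joint, then step the physics forward for one second.
data.qpos[0] = 0.5  # initial shoulder angle in radians
for _ in range(500):  # 500 steps x 0.002 s timestep = 1 s of simulated time
    mujoco.mj_step(model, data)

print("joint angles after 1 s:", data.qpos)
```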
Nvidia intends to launch an open-source version of Newton in late 2025.
Finally, Google Cloud will be one of the early adopters of the Nvidia GB300 NVL72 rack-scale solution and the Nvidia RTX PRO 6000 Blackwell Server Edition GPU.