Nvidia reveals AI foundation models running on RTX AI PCs
Nvidia today announced foundation models running locally on Nvidia RTX AI PCs that power digital humans, content creation, productivity and development.
GeForce has long been a vital platform for AI developers. The first GPU-accelerated deep learning network, AlexNet, was trained on a GeForce GTX 580 in 2012, and last year more than 30% of published AI research papers cited the use of GeForce RTX. Jensen Huang, CEO of Nvidia, made the announcement during his CES 2025 opening keynote.
Now, with generative AI and RTX AI computing, anyone can be a developer. A new wave of low-code and no-code tools, such as AnythingLLM, ComfyUI, Langflow, and LM Studio, enable enthusiasts to use AI models in complex workflows through simple graphical user interfaces.
NIM microservices associated with these GUIs will facilitate access and deployment of the latest generative AI models. Nvidia AI Blueprints, built on top of NIM microservices, provide easy-to-use, preconfigured reference workflows for digital humans, content creation, and more.
To meet the growing demand from AI developers and enthusiasts, every leading PC manufacturer and system builder is releasing NIM-ready RTX AI PCs.
“AI is advancing at the speed of light, from perceptual AI to generative AI and now agent AI,” Huang said. “NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI.”
NIM microservices will also be available with Nvidia Project Digits, a personal AI supercomputer that gives AI researchers, data scientists and students around the world access to the power of Nvidia Grace Blackwell. Project Digits features the new Nvidia GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.
Making AI NIMble

Foundation models, neural networks trained on vast amounts of raw data, are the building blocks of generative AI.
Nvidia will release a set of NIM microservices for RTX AI PCs from top model developers such as Black Forest Labs, Meta, Mistral and Stability AI. Use cases span large language models (LLMs), vision language models, image generation, speech, embedding models for retrieval-augmented generation (RAG), PDF extraction, and computer vision.
“Making FLUX an Nvidia NIM microservice increases the speed at which AI can be deployed and experienced by more users, while delivering incredible performance,” said Robin Rombach, CEO of Black Forest Labs, in a statement.
Nvidia today also announced the Llama Nemotron family of open models that deliver high accuracy across a wide range of agentic tasks. The Llama Nemotron Nano model will be available as a NIM microservice for RTX AI PCs and workstations, and handles agentic AI tasks such as instruction following, function calling, chat, coding and math. NIM microservices include the key components for running AI on PCs and are optimized for deployment on Nvidia GPUs, whether in RTX PCs and workstations or in the cloud.
Developers and enthusiasts will be able to quickly download, configure, and run these NIM microservices on Windows 11 PCs running Windows Subsystem for Linux (WSL).
“AI is driving innovation on Windows 11 PCs at a rapid pace, and Windows Subsystem for Linux (WSL) offers a great cross-platform AI development environment in Windows 11 along with Windows Copilot Runtime,” said Pavan Davuluri, corporate vice president of Windows at Microsoft, in a statement. “Nvidia NIM microservices optimized for Windows PCs give developers and enthusiasts ready-to-integrate AI models for their Windows applications, further accelerating the deployment of AI capabilities for Windows users.”
NIM microservices running on RTX AI PCs will be compatible with top AI and agent development frameworks, including AI Toolkit for VSCode, AnythingLLM, ComfyUI, CrewAI, Flowise AI, LangChain, Langflow and LM Studio. Developers can connect applications and workflows built on these frameworks to AI models running on NIM microservices through standard endpoints, enabling them to leverage the latest technologies with a unified interface across cloud, data center, workstation and PC.
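Those "standard endpoints" generally follow the OpenAI-compatible chat-completions convention, so a client can talk to a locally running model much as it would a cloud API. The sketch below, using only Python's standard library, shows what such a call could look like; the host, port and model name are illustrative assumptions, not values confirmed by Nvidia, so check the specific microservice's documentation before using them.

```python
import json
from urllib import request

# Assumed local endpoint for a NIM microservice exposing an
# OpenAI-compatible API. Host, port and path are illustrative.
ENDPOINT = "http://localhost:8000/v1/chat/completions"


def build_payload(prompt: str, model: str = "meta/llama-3.1-8b-instruct") -> bytes:
    """Serialize an OpenAI-style chat-completion request body.

    The model identifier here is a placeholder; the actual name is
    reported by the microservice itself.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return json.dumps(body).encode("utf-8")


def ask(prompt: str) -> str:
    """POST the prompt to the local microservice and return the reply text."""
    req = request.Request(
        ENDPOINT,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    return reply["choices"][0]["message"]["content"]
```

Because the request and response shapes match the widely used chat-completions schema, the same client code can point at a cloud endpoint or a local RTX PC by changing only the URL.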
Enthusiasts will also be able to experience a range of NIM microservices through an upcoming release of the Nvidia ChatRTX technology demo.
Putting a face on agentic AI

To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, Nvidia today previewed Project R2X, a vision-enabled computer avatar that can put information at the user’s fingertips, help with desktop applications and video conferencing conversations, read and summarize documents, and more.
The avatar is rendered using Nvidia RTX Neural Faces, a new generative AI algorithm that complements traditional rasterization with fully generated pixels. The face is then animated by a new diffusion-based Nvidia Audio2Face-3D model that enhances lip and tongue movement. R2X can connect to cloud AI services, such as OpenAI’s GPT-4o and xAI’s Grok, and to NIM microservices and AI Blueprints, such as PDF retrievers or alternative LLMs, through developer frameworks such as CrewAI, Flowise AI and Langflow.
AI Blueprints are coming to PCs

NIM microservices are also available to PC users through AI Blueprints — reference AI workflows that can be run locally on RTX PCs. With these blueprints, developers can create podcasts from PDF documents, generate stunning visuals driven by 3D scenes, and more.
The PDF to Podcast blueprint extracts text, images and tables from a PDF to create a podcast script that users can edit. It can also generate full audio from the script using voices available in the blueprint or based on a sample of a user’s voice. In addition, users can have a real-time conversation with the AI podcast host to learn more.
The blueprint uses NIM microservices such as Mistral-Nemo-12B-Instruct for language, Nvidia Riva for text-to-speech and automatic speech recognition, and the NeMo Retriever collection of microservices for PDF extraction.
The AI Blueprint for 3D-guided generative AI gives artists finer control over image generation. While AI can generate impressive images from simple text prompts, controlling image composition with words alone can be difficult. With this blueprint, creators can arrange simple 3D objects in a 3D application such as Blender to guide AI image generation.
Artists can create 3D assets by hand or generate them with AI, place them in the scene and set the 3D viewport camera. A prepackaged workflow powered by the FLUX NIM microservice then uses the current composition to generate high-quality images that match the 3D scene.
Nvidia NIM microservices and AI Blueprints will be available starting in February. NIM-ready RTX AI PCs will be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcon, Origin PC, PCS and Scan.