AI Supercomputer on Your Desk

Nvidia introduced Project Digits, a $3,000 home supercomputer for mid-sized AI models.

GB10 Superchip architecture with Blackwell GPU and Grace CPU.

Nvidia’s new desktop computer is built specifically to run large AI models.

What’s new: Project Digits is a personal supercomputer intended to help developers fine-tune and run large models locally. Small enough to hold in one hand, it will be available in May starting at $3,000.

How it works: Project Digits is designed to run models of up to 200 billion parameters — roughly five times the size that fits comfortably on typical consumer hardware — provided they’re quantized to 4-bit precision (a back-of-the-envelope sketch of the memory arithmetic follows the list below). Two units can be connected to run models such as Meta’s Llama 3.1 405B. Complete specifications are not yet available.

  • Project Digits runs Nvidia’s DGX operating system, a flavor of Ubuntu Linux.
  • The system is based on a GB10 system-on-a-chip that combines the Nvidia Blackwell GPU architecture (which serves as the basis for its latest B100 GPUs) and Grace CPU architecture (designed to manage AI workloads in data centers), connected via high-bandwidth NVLink interconnect.
  • It comes with 128 GB of unified memory and 4 terabytes of solid-state storage.
  • The system connects to Nvidia’s DGX Cloud service to enable developers to deploy models from a local machine to cloud infrastructure.
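
Here’s the memory arithmetic behind those figures, as a minimal sketch in Python. The weights-only formula follows directly from parameter count and precision; the 10 percent overhead for activations, KV cache, and runtime buffers is our own illustrative assumption, not an Nvidia figure.

```python
def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Gigabytes needed to hold the model weights alone."""
    return params_billions * (bits_per_param / 8)  # 1B params at 8 bits = 1 GB

# Assumed multiplier for activations, KV cache, and runtime buffers (our guess).
OVERHEAD = 1.10

for name, params_b in [("200B model", 200), ("Llama 3.1 405B", 405)]:
    fp16 = weight_memory_gb(params_b, 16) * OVERHEAD
    int4 = weight_memory_gb(params_b, 4) * OVERHEAD
    print(f"{name}: ~{fp16:.0f} GB at FP16, ~{int4:.0f} GB at 4-bit")

# 200B at 4-bit: ~110 GB, within one unit's 128 GB of unified memory.
# 405B at 4-bit: ~223 GB, hence two linked units (256 GB combined).
```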

Behind the news: In a blitz of announcements at the Consumer Electronics Show (CES), Nvidia also launched Cosmos, a platform for developing robotics, autonomous vehicles, and other physical AI systems. Cosmos includes pretrained language and vision models that range from 4 billion to 14 billion parameters for generating synthetic training data for robots or building policy models that translate a robot’s state into its next action. Nvidia also released Cosmos Nemotron, a 34 billion-parameter vision-language model designed for use by AI agents, plus a video tokenizer and other tools for robotics developers.

Why it matters: It’s common to train models on Nvidia A100 or H100 GPUs, which cost at least $8,000 and $20,000 respectively and offer 40 to 80 gigabytes of memory. These hefty requirements push many developers to buy access to computing infrastructure from a cloud provider. Coming in at $3,000 with 128 gigabytes of memory, Project Digits is designed to empower machine learning engineers to train and run larger models on their own machines.
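
For a rough sense of the price-to-memory gap, here’s a minimal sketch using only the prices and capacities cited above. It’s illustrative arithmetic, not a benchmark: memory capacity alone says nothing about bandwidth or compute throughput, which differ substantially across these systems.

```python
# Price and memory figures as cited above; illustrative only.
systems = {
    "A100 (40 GB)": (8_000, 40),
    "H100 (80 GB)": (20_000, 80),
    "Project Digits": (3_000, 128),
}

for name, (price_usd, mem_gb) in systems.items():
    print(f"{name}: ${price_usd / mem_gb:,.0f} per GB of memory")
# A100: ~$200/GB, H100: ~$250/GB, Project Digits: ~$23/GB
```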

We’re thinking: We look forward to seeing cost/throughput comparisons between running a model on Project Digits, A100, and H100.
