Run LLMs Natively in Docker – Coming Soon!


Introducing Docker Model Runner

Docker Model Runner makes running LLMs effortless: it removes complex setup, handles dependencies, and optimizes for your hardware, giving you a secure, low-latency alternative to cloud-based inference, seamlessly integrated into Docker Desktop.

  • Pull AI models directly from Docker Hub
  • Run them locally using familiar Docker CLI commands (see the CLI sketch after this list)
  • Integrate with existing applications through OpenAI's API format (see the Python sketch below)
  • Native GPU acceleration on Apple Silicon and NVIDIA GPUs
  • Run AI workloads securely in Docker containers
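
While the final commands may change before launch, the workflow could look something like this; the "docker model" subcommand family and the ai/smollm2 model name below are illustrative placeholders, not confirmed launch syntax:

    # Pull a model from Docker Hub (placeholder model name)
    docker model pull ai/smollm2

    # List the models available on this machine
    docker model list

    # Send a one-shot prompt to the model
    docker model run ai/smollm2 "Explain containers in one sentence."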
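
Because the endpoint speaks OpenAI's API format, existing client code should work by pointing it at the local server. Here is a minimal Python sketch using the openai package; the base URL, port, and model identifier are assumptions for illustration, not confirmed launch values:

    # Minimal sketch: talk to a local, OpenAI-compatible endpoint with the
    # official openai client. base_url and model are placeholder values.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",  # hypothetical local endpoint
        api_key="not-needed",  # local inference; no real API key required
    )

    response = client.chat.completions.create(
        model="ai/smollm2",  # placeholder model identifier
        messages=[{"role": "user", "content": "Say hello from a local LLM."}],
    )
    print(response.choices[0].message.content)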

Stay updated on the launch
