Run LLMs Natively in Docker – Coming Soon!
Introducing Docker Model Runner
Docker Model Runner makes running LLMs effortless: it removes complex setup, handles dependencies, and optimizes for your hardware, offering a secure, low-latency alternative to cloud-based inference, all seamlessly integrated into Docker Desktop.
- Pull AI models directly from Docker Hub
- Run them locally using familiar Docker CLI commands
- Integrate with applications through an OpenAI-compatible API
- Get native GPU acceleration on Apple Silicon and NVIDIA GPUs
- Run AI workloads securely in Docker containers
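Based on the "familiar Docker CLI commands" promise above, a typical workflow might look like the sketch below. The `docker model` subcommand and the `ai/llama3.2` model name are assumptions for illustration; the final interface and Docker Hub namespaces may differ at launch.

```shell
# Pull a model from Docker Hub (model name is illustrative)
docker model pull ai/llama3.2

# List models available locally
docker model list

# Run the model with a one-off prompt
docker model run ai/llama3.2 "Explain containers in one sentence."
```

An application would then point an OpenAI-compatible client at the local endpoint instead of a cloud API, keeping prompts and responses on your machine.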
Stay updated on the launch