Engineering
-
Llama.cpp Gets an Upgrade: Resumable Model Downloads
New: resumable GGUF downloads in llama.cpp. Learn how Docker Model Runner makes models versioned, shareable, and OCI-native for seamless dev-to-prod workflows.
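The resumable behavior in the title can be illustrated with a short, generic sketch: the client checks how many bytes are already on disk and asks the server for the rest with an HTTP Range request. This is only an illustration of the general technique, not llama.cpp's actual C++ implementation; the URL and file names are placeholders.

```python
# A generic sketch of a resumable download using HTTP Range requests.
# Not llama.cpp's actual implementation; URL and file name are placeholders.
import os
import requests

def resumable_download(url: str, dest: str, chunk_size: int = 1 << 20) -> None:
    # Resume from however many bytes are already on disk.
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}
    with requests.get(url, headers=headers, stream=True, timeout=60) as resp:
        if offset and resp.status_code != 206:
            # Server ignored the Range header; start over from scratch.
            offset = 0
        resp.raise_for_status()
        with open(dest, "ab" if offset else "wb") as f:
            for chunk in resp.iter_content(chunk_size):
                f.write(chunk)

# Hypothetical usage: re-running after an interruption picks up where it left off.
# resumable_download("https://example.com/models/model.gguf", "model.gguf")
```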
-
Fine-Tuning Local Models with Docker Offload and Unsloth
Learn how to fine-tune models locally with Docker Offload and Unsloth and how smaller models can become practical assistants for real-world problems.
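As a hedged sketch of what local fine-tuning with Unsloth commonly looks like (the article's own setup with Docker Offload may differ; the base model, dataset file, and hyperparameters below are placeholders, and exact SFTTrainer arguments vary across trl versions):

```python
# A rough sketch of the common Unsloth + TRL fine-tuning pattern: load a small
# base model in 4-bit, attach LoRA adapters, and run supervised fine-tuning.
# Model name, dataset file, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # placeholder base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```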
-
The Trust Paradox: When Your AI Gets Catfished
Learn how MCP prompt-injection exploits trusted tools—and how to defend with context isolation, AI behavior checks, and human-in-the-loop review.
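One of the defenses named above, human-in-the-loop review, can be sketched in a few lines of plain Python: sensitive tool calls are held until a person approves them. The tool names and console prompt are hypothetical; a real MCP client would wire this into its own tool-dispatch path.

```python
# A minimal sketch of human-in-the-loop review for agent tool calls.
# Tool names and the approval prompt are hypothetical examples.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    arguments: dict

# Tools that can change state or leak data require explicit human approval.
SENSITIVE_TOOLS = {"send_email", "delete_file", "execute_shell"}

def approve(call: ToolCall) -> bool:
    answer = input(f"Allow tool '{call.name}' with {call.arguments}? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(call: ToolCall, tools: dict[str, Callable[..., str]]) -> str:
    if call.name in SENSITIVE_TOOLS and not approve(call):
        return "Tool call rejected by human reviewer."
    return tools[call.name](**call.arguments)
```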
-
Introducing the Docker Premium Support and TAM service
The Docker Customer Success and Technical Account Management organizations are excited to introduce the Premium Support and TAM service: a new offering that extends Docker’s support with always-on 24/7 coverage, priority SLAs, expert guidance, and TAM add-on services. We have carefully designed these services to support our valued customers’ developers and global business…
-
Run, Test, and Evaluate Models and MCP Locally with Docker + Promptfoo
Learn how promptfoo and Docker help developers compare models, evaluate MCP servers, and even perform LLM red-teaming.
-
Silent Component Updates & Redesigned Update Experience
Automatic updates for Docker Compose, Docker Scout, Ask Gordon, and Model Runner—plus a new update experience and admin controls in Docker Desktop 4.46.
-
Beyond Containers: llama.cpp Now Pulls GGUF Models Directly from Docker Hub
Learn how llama.cpp is using Docker Hub as a powerful, versioned, and centralized repository for your AI models.
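To give a feel for what using Docker Hub as a versioned model repository means at the protocol level, here is a hedged sketch that resolves a tag to a content digest through the standard OCI distribution API. The repository name, tag, and the assumption that the GGUF weights are the largest layer are placeholders; the media types and commands llama.cpp actually uses may differ.

```python
# A generic sketch of resolving an OCI artifact on Docker Hub via the standard
# distribution API. The repository/tag are hypothetical placeholders, and the
# "largest layer is the GGUF" heuristic is an assumption, not necessarily how
# llama.cpp selects the model layer.
import requests

REGISTRY = "https://registry-1.docker.io"
AUTH = "https://auth.docker.io/token"

def resolve_model_layer(repository: str, tag: str) -> str:
    # Docker Hub requires a short-lived pull token even for public repositories.
    token = requests.get(
        AUTH,
        params={"service": "registry.docker.io",
                "scope": f"repository:{repository}:pull"},
        timeout=30,
    ).json()["token"]
    headers = {
        "Authorization": f"Bearer {token}",
        # Accept both OCI and Docker manifest formats.
        "Accept": "application/vnd.oci.image.manifest.v1+json, "
                  "application/vnd.docker.distribution.manifest.v2+json",
    }
    # Assumes the tag points at a single image manifest, not a multi-arch index.
    manifest = requests.get(
        f"{REGISTRY}/v2/{repository}/manifests/{tag}", headers=headers, timeout=30
    ).json()
    # Assume the model weights are the largest layer; its content digest is what
    # makes the reference versioned and reproducible.
    layer = max(manifest["layers"], key=lambda l: l["size"])
    return layer["digest"]

# Hypothetical usage:
# print(resolve_model_layer("ai/smollm2", "latest"))
```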
-
MCP Security: A Developer’s Guide
MCP security refers to the controls and risks that govern how agents discover, connect to, and execute MCP servers.
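As one concrete example of such a control, a client-side allowlist can restrict which MCP servers an agent is permitted to discover and launch. This is a minimal, self-contained sketch; the server names and launch commands are hypothetical, not a real Docker or MCP configuration format.

```python
# A minimal sketch of an MCP-server allowlist. Server names and launch commands
# are hypothetical examples, not a real Docker or MCP configuration format.
from dataclasses import dataclass

@dataclass(frozen=True)
class McpServer:
    name: str
    command: list[str]         # how the server process is launched
    allow_write: bool = False  # whether state-changing tools are permitted

ALLOWLIST = {
    "github": McpServer("github", ["docker", "run", "--rm", "mcp/github"]),
    "filesystem": McpServer("filesystem", ["mcp-fs", "--root", "/workspace"],
                            allow_write=True),
}

def get_server(name: str) -> McpServer:
    # Refuse to discover or execute anything that is not explicitly allowlisted.
    try:
        return ALLOWLIST[name]
    except KeyError:
        raise PermissionError(f"MCP server '{name}' is not on the allowlist") from None
```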