On-Demand Training

All Things Compose

Transcript

This presentation is about Docker Compose. During this presentation, we’ll cover the following topics:

  • The Docker Engine and its API
  • Docker Compose Basics
  • The Docker Compose file
  • Using variables with Docker Compose
  • Advanced Compose Usage

Table of Contents

The Engine and its API (0:24)

In the first section, the Engine and its API, we’ll learn about how Docker Compose and the Docker Engine work together. Docker Compose is a useful tool to help us spin up complicated application stacks. Let’s see an example of Compose in action by opening a Compose-based application with multiple services and running it easily from Docker Desktop. From Docker Desktop, I can easily select a project that I have containing a Docker Compose file. You can see I can easily open it, create a new project, and run the entire Docker Compose stack. You can see here that the Compose file is represented by the MySQL, phpMyAdmin, and Python services that are not yet running. If I hit ‘Run project,’ those services should spin up. I can see the application logs here in this window. When these services are running correctly, as they are now, I can go and quickly click on those containers. If I want to hit the front-end Python app, I can easily do that using Docker Compose from within Docker Desktop.

This is fantastic, but how does Docker Compose actually work? To answer that, let’s take a look at the Docker Engine itself. The Docker Engine, commonly referred to as the Docker daemon, is the heart of container orchestration. It is responsible for managing everything: images, containers, volumes, networks, and so on. The engine exposes a REST API endpoint to allow you to list images, launch containers, attach volumes, and more. This means that the Docker CLI is, simply put, a REST client to that API. When you run docker ps, it queries the API endpoint and presents the JSON response in a human-friendly text interface. This decoupled architecture is very flexible, and it also allows the CLI to point to remote Docker Engines through different Docker contexts. The Engine API is versioned and fully documented on the Docker website. This means you can build your own scripts and tooling to extend and add additional capabilities if you would like to do so.

Example (3:14)

Here’s an example of the Docker Engine API reference guide on docs.docker.com. You can see here that it provides endpoints for just about any action you’d like to perform: listing containers, creating new containers, inspecting containers. This guide should help you understand more about the Engine API. To see an example of interacting with the API directly, we can use curl to hit the local API endpoint exposed through the Unix socket at /var/run/docker.sock. If I bring up my terminal and query it directly, you’ll see that it acts just like any other API endpoint. I’m going to type in the socket location and the endpoint. I’m also going to use the jq tool to format the JSON responses for me. You can see here that the endpoint has returned a lot of information about the running engine.
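Here is a minimal sketch of that kind of query, assuming a local Engine listening on the default socket and jq installed; the /info endpoint is one of several documented in the API reference:

```bash
# Query the Engine's /info endpoint over the local Unix socket
# and pretty-print the JSON response with jq
curl --unix-socket /var/run/docker.sock http://localhost/info | jq .
```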

Let’s look at a more in-depth example. I can query the API endpoint to see the names of all my running containers. I’ll do much the same thing: I’ll use curl, tell it that I’d like to query a Unix socket, and specify the location of the socket. Then I’ll give it the REST endpoint that I’d like to query. In this case, it’s going to be a versioned endpoint, API version 1.46. I’m going to ask it to give me all the containers in JSON format, and then I’m going to use jq to return just the names to me. This should return all of the running container names. You can see here that the engine is very important for the underlying orchestration and management of containers and their resources. It’s important to note that if this engine is ever exposed publicly on a network port, be sure to add authentication to it. We have instructions for securing the engine on our documentation website.
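A sketch of that command, assuming the same local socket and API version 1.46 as in the demo:

```bash
# List running containers via the versioned API endpoint,
# then use jq to extract just the container names
curl --unix-socket /var/run/docker.sock \
  http://localhost/v1.46/containers/json | jq -r '.[].Names[]'
```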

You might be asking yourself, why does this all matter? Why are we talking about the API and the engine? Well, that’s because Docker Compose is merely a REST client that queries the underlying Docker Engine through the Engine API. The Docker binary utilizes a plugin architecture to provide a general framework to query the Engine API. So technically, Docker Compose is a plugin to the Docker CLI. This means that Docker Compose is using the plugin framework to call the underlying Docker Engine API to orchestrate the containers. Let’s take a look at some other plugins on our system. You can list all externally installed plugins by typing docker plugin ls.

Let’s do that now. So docker plugin ls should show all external or third-party plugins to Docker, and we don’t have any. But I can see all of the plugins that are shipped with Docker by just listing the plugins contained in the CLI plugins directory. You can see there are quite a few, including, at the top, Docker Compose. This means you can create your own version of Docker Compose by following the Docker plugin guidelines outlined in the documentation. But that might be a little too much work.
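A sketch of those two checks; the CLI plugins directory shown here is an assumption, since its location varies by platform and install method (for example, ~/.docker/cli-plugins for user-installed plugins):

```bash
# Engine-managed, third-party plugins -- empty on a default install
docker plugin ls

# Plugins bundled with the Docker CLI; this path varies by platform
ls /usr/local/lib/docker/cli-plugins
```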

Compose basics (8:18)

Let’s dig into Compose a little bit further. In this section, Compose basics, we will build on what we’ve learned about the Docker Engine and review the basics of Compose. If we think about each single container as an isolated workload running on our machine, we can imagine it as a Lego brick. Most applications have multiple bricks. One brick might be a database, another might be a front end, another a REST API, and so on. The great thing is that no matter how many different services or bricks you might need, Docker Compose can manage all of them with easy-to-use commands. Typing docker compose up will bring up the entire application stack and all associated resources. Typing docker compose down will stop the entire stack cleanly. And docker compose logs will provide informational logs from the individual containers. This is very useful during development.
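Those three lifecycle commands in one place, with a couple of commonly used flags added as a suggestion:

```bash
docker compose up -d    # start the whole stack in the background
docker compose logs -f  # follow the logs from every service in the stack
docker compose down     # cleanly stop and remove the stack
```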

In fact, Docker Compose is the standard way for developers to run complex containerized applications locally. It’s fast, simple, uses a standard YAML syntax, and is included by default with Docker Desktop. Let’s get more familiar with Docker Compose syntax. We know how to run a single container using docker run. We know docker run accepts different flags to run the container with various configuration options. In this example, the docker run command does the following things. It starts the container using the mysql:8.2.0 image. It exposes port 3306 from the container onto the host. It also uses environment variables to set a root password and the default database name for the running container. And lastly, we configure a volume, so all data written to the database is stored in a volume named mysql-data. Using a volume like this allows us to persist data locally, which means that the next time we run this example, the same data will be available to the application. Now, this is a standard command to run a container, and anyone with Docker Desktop installed can run it and they’ll have a MySQL database running.
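A sketch of that command; the password and database name here are placeholder values, not the ones from the original demo:

```bash
# Run MySQL 8.2.0 detached, publish port 3306, set credentials via
# environment variables, and persist data in the mysql-data volume
docker run -d --name database \
  -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=myapp \
  -v mysql-data:/var/lib/mysql \
  mysql:8.2.0
```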

But running and managing multiple containers is harder. There are lots of pieces to keep track of, including container configuration, virtual networks, open ports, and container monitoring. We could write a script to do some of this, but it would be hard to maintain and wouldn’t scale well for new containers. For example, if we needed to add more services or if we wanted to configure custom networking, it’d be hard to maintain a custom script in a standard way. So for this use case, it makes sense to convert this Docker command to a compose file. Let’s take a look at how we might create a Docker compose file to run similar commands.

The compose file (11:38)

In this section, the compose file, we will demonstrate how to create a compose file. So let’s create a Docker Compose file to run our container. On the left, we see our original docker run command, and on the right, we see our new Compose file. A Docker Compose file is just a YAML document that describes the application stack. Each application is defined as a service, and each service specifies the container we want to run. In this case, we named the MySQL service database, but we can pick any name. The service name database is just an arbitrary label that can be used as a reference for other things in the Compose file, like networking, which we will touch on later. Under the database service, we also provide the container image that we want to run, in this case MySQL, and we can also specify a way to build and use a local image, which we’ll show later. Next, we’ll migrate ports over from the run command to the Compose file. Since ports are associated with the service, we list them under the database entry. Ports can be specified in two different ways, using either the long or short form syntax, and you can see both here. If you’re having a hard time remembering which side is the host port and which side is the container port, the long form syntax might make more sense. Even though both are supported, we’ll continue to use short form syntax in this presentation.
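A sketch of the service so far, showing both port syntaxes (in practice you would use one style or the other):

```yaml
services:
  database:
    image: mysql:8.2.0
    # Short form syntax: "host:container"
    ports:
      - "3306:3306"
    # Long form equivalent, with each side labeled explicitly:
    # ports:
    #   - target: 3306     # container port
    #     published: 3306  # host port
```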

Next, we’ll pull over the environment variables. There are two different ways you can define environment variables. The first uses a key-value mapping, and the second is an array of strings. We’ll use the first approach in the demos, but again, it’s up to you how you’d like to define them. Finally, we’ll define a volume. One difference you’ll see here is that we also have a top-level volumes keyword. That’s because a volume has a separate life cycle and can be configured in a variety of ways. In this example, we’re only using the defaults. And one advanced tip to mention: volumes can also be shared between containers. So if you have use cases where one container will write data that another needs to read, you can share those files using a volume.

When we are running multiple containers, we have to think about how they are going to connect to each other. One consequence of the isolation mechanisms of containers is that each container gets its own IP address. In fact, if we run docker inspect on the database container, we can see in the network config that the container has the IP address of x.y.a.b. However, if we restart the container, that IP address is likely to be different. So to solve this, Docker runs its own DNS resolver to help support service discovery. Each container has a set of network aliases which end up becoming DNS entries, and the DNS resolution is scoped to the network. With Compose, each container is automatically given an alias for the name of the service. Therefore, in this example, phpMyAdmin can connect to the database by simply using the service name, database in this example. When the container looks up the host name, the Docker DNS resolver will resolve it to the IP address of the container. This makes it super easy to connect containers together.
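A sketch pulling these pieces together; the password and database name are placeholders, and phpMyAdmin’s PMA_HOST variable is used here to illustrate connecting by service name:

```yaml
services:
  database:
    image: mysql:8.2.0
    environment:                   # key-value mapping form
      MYSQL_ROOT_PASSWORD: secret  # placeholder value
      MYSQL_DATABASE: myapp        # placeholder value
    # Equivalent array-of-strings form:
    # environment:
    #   - MYSQL_ROOT_PASSWORD=secret
    #   - MYSQL_DATABASE=myapp
    volumes:
      - mysql-data:/var/lib/mysql

  phpmyadmin:
    image: phpmyadmin
    environment:
      PMA_HOST: database           # resolved by Docker's DNS to the service's IP
    ports:
      - "8080:80"

volumes:
  mysql-data:                      # top-level declaration; defaults only
```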

The last step here is to add a service to be used for our actual application. We can start off by defining a service named python. For this service, we aren’t going to use an off-the-shelf image, but build an image instead. With Compose, we can tell it to build an image using a local Dockerfile. It will then auto-build the image and use that image for a new container. If we’re using multi-stage builds, we can also configure Compose to target a specific stage in the Dockerfile, which is exactly what we’re going to do in this example. From there, we can expose the app’s ports and finally mount the source code into the container, which will allow us to test the code immediately.
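A sketch of such a service; the stage name, port, and paths are illustrative assumptions rather than the demo’s actual values:

```yaml
services:
  python:
    build:
      context: .        # build from a local Dockerfile
      target: dev       # hypothetical stage in a multi-stage Dockerfile
    ports:
      - "5000:5000"     # assumed application port
    volumes:
      - ./app:/app      # mount the source code for immediate testing
```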

Compose and Variables (16:38)

In the Docker Compose and Variables section, we’ll discuss ways to use variables in Compose. Hard-coding parameters and values in a container image is a bad idea. It makes the container less flexible and also poses security risks. But by using environment variables, we can pass values into the container at runtime. We can get them from shell variables that are set, we can read them from a .env file, or we can pass them in directly from the docker run command. Although using environment variables makes our container more flexible by removing hard-coded values, if those variables contain sensitive information, they’ll be visible to anyone who can inspect the container and might show up in logs. By using secrets, we can securely use sensitive values in the container. We do this by creating a secrets file and referencing it in the Compose file. You can find more information about secrets in the official Compose documentation.
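A sketch of the secrets approach, assuming the official mysql image’s MYSQL_ROOT_PASSWORD_FILE convention for reading the password from the mounted secret:

```yaml
services:
  database:
    image: mysql:8.2.0
    environment:
      # The image reads the password from the secret mounted at /run/secrets
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    secrets:
      - db_root_password

secrets:
  db_root_password:
    file: ./db_root_password.txt   # illustrative local secrets file
```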

Advanced Compose (17:48)

In the Advanced Compose section, we’ll dive into a few of the advanced features and capabilities of Compose. First, Compose Watch. Normally, when Compose runs, it needs to be restarted if changes have been made to the application. Compose Watch tells Compose when important files have changed, allowing them to be reloaded automatically, or the image to be rebuilt, without stopping Docker Compose. This is great news for applications that have many files, like most Node or PHP projects. Standard bind mounts incur performance penalties because the files have to be read through the bind mount. With the Compose sync approach, the container reads files natively, and any changes are synced as they occur. This approach uses a little bit more storage space because the files exist in multiple locations, but it delivers much faster performance.

In this example, we’re going to clone a remote Git repository, select the folder that we want it to live in, and then use Compose Watch to watch for changes in the file system. I think I already have this running. Let me select another folder, clone the project, call it avatars, and we’re going to open this. If we look at the Compose file, we can see that we have Watch enabled: it will watch requirements.txt in the api service. If the requirements change, it’s going to rebuild the image, and if any of the app API endpoints change, it will sync those files directly into the running application. Let’s see this in action.
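A sketch of what that watch configuration might look like; the service name and paths are assumptions based on the demo’s description:

```yaml
services:
  api:
    build: ./api
    develop:
      watch:
        # Rebuild the image whenever the dependency list changes
        - action: rebuild
          path: ./api/requirements.txt
        # Sync source changes straight into the running container
        - action: sync
          path: ./api
          target: /app
```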

It’s pulling the images now, and it’s running. We can see from the running containers that it should be running locally. If we open this up, we should see that the web app is available here. We have facial features, hair, a fully working app. Now again, Docker Compose specifies that if anything changes under the api path, it will resync it into the running application. Let’s go ahead and bring up app.py, and we can make a change here. We’ll add accessories to this and save it. Now we should be able to see this reflected in our running application. If we bring this back up, we should see, with a refresh, that the accessory item shows up in the web page. The point here is that we’re easily able to git clone and docker compose up, and immediately start to work on the project with no pre-configuration needed.

Compose merge (22:06)

Compose Merge is a way to merge a set of Docker Compose files together to create one composite file. Compose can read in an arbitrary number of files and will merge them in the order they are specified. It’s important to be aware of the following detail: all paths in the files are relative to the base Compose file, that is, the first Compose file specified with the -f flag. This is required because override files don’t have to be valid Compose files; they can just be small fragments of configuration or value overrides. Tracking which fragment of a service is relative to which path would be difficult and confusing, so to keep paths easier to understand, all paths must be defined relative to this base file. However, to address this relative path shortcoming, Compose also supports including other complete Compose files that exist locally or even remotely. This means that your Compose file can be, well, composable. The remote file feature provides the ability to build a library of Compose files and include only the ones you need. Be mindful, though, that you should only include other Compose files from trusted sources, as changes to them could pose potential security risks.
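A sketch of merging; the file names here are illustrative, and later files override values from earlier ones:

```bash
# compose.yaml is the base file; all relative paths resolve against it
docker compose -f compose.yaml -f compose.override.yaml up -d
```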

Let’s take a look at some actual Docker Compose include functionality. I have an example repository here. I have a web app contained in its own directory, and it has a Docker Compose file. This Docker Compose file specifies that we should include a file from the database directory. We can think of these as separate repositories. If I spin up this file, it will automatically include that file and spin up two resources: number one, the web app, and number two, the database. Let’s go ahead and try that. If I open it in a terminal, make sure I’m in the right directory, the web app directory, and run docker compose up, it is going to run both of these. We can see the database and the web app are being loaded right now. If we switch back to our running containers, we can see that both of these are running from that include statement in Docker Compose.
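A sketch of what the web app’s Compose file might look like; the directory layout and port are assumptions:

```yaml
# compose.yaml in the web app directory
include:
  - ../database/compose.yaml   # pulled in as a complete, separate Compose file

services:
  webapp:
    build: .
    ports:
      - "8000:8000"
```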

Compose debugging (24:53)

Docker Compose debugging. Debugging advanced Compose functionality can be tricky. You can use the docker compose config command to parse, resolve, and render the final Compose file. For example, if we have multiple files and we add config at the end of the command, we can see the fully rendered text in our terminal output. Let’s give that a try. Just like our previous example, we see that we have a database and a web app. If I go into the web app directory and run docker compose config, we’ll see that the final rendered Compose file consists of a database and a web app, despite the fact that the Compose file we have locally, compose.yaml, consists of an include statement and just the web app.
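A sketch of both uses; the override file name is illustrative:

```bash
# Render the fully resolved configuration (merges, includes, variables)
docker compose config

# The same works when merging multiple files with -f
docker compose -f compose.yaml -f compose.override.yaml config
```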

This concludes the Docker Compose presentation. Thank you very much for watching.

Learn more

Containers are pretty easy to run individually, but most applications require more than one container to run properly. Discover how to create end-to-end environments using Docker Compose. Compose provides easy orchestration of containers to assemble them into a complete application. Compose takes care of all the parameters the containers need to run: local networking, dependencies between containers, security, and more!

Our speakers

Todd Densmore

Senior Solutions Architect
Docker