On-Demand Training

Docker Intro and Overview

 

Transcript

Hello, my name is Michael Irwin and I’m a member of the Developer Relations team here at Docker. In this session, we’re going to be going over the Docker intro and overview. We’re going to talk about a lot of different things about Docker: how Docker can help streamline the developer experience using containers, and how you can prepare your application for production in a containerized environment. Now, I’ll go ahead and give the warning up front that we’re going to cover a lot of ground. You’re going to see a lot of demos and examples and all of our different products and services fitting together. It’s going to be a lot. We’re not going to dive very deep, so this is very much a breadth-versus-depth conversation. To learn how to actually implement and go do these things, you’ll want to check out our other sessions as well.

But with that, what are we going to talk about today? We’re going to start off by just doing a container 101. Everything that Docker does is focused on containers. So let’s make sure we’re all on the same page. We’ll start off with that. Then we’re going to jump into the different ways that you can use Docker. We’ll talk about two of the adoption tracks, specifically container supported development – basically, how can I use containers to develop applications, to test applications, provide consistent environments across my team, etc? – and then secondly, how can I prepare my application for production in a containerized environment? So how do I containerize my application? How do I build it efficiently? How do I secure that image, etc? And so we’ll talk about all the different tools and services along the way.

 


Container 101 (1:34)

So with that, let’s jump into container 101. And the first question we’re going to answer is: what is a container? The best way to answer this is with an analogy. We all have smartphones, or most of us do anyway. On our smartphones, we have lots of different applications installed. In this case, I’ve got four different apps – a green, blue, yellow, and a tealish-colored one. And I want you to think about: when is the last time you’ve ever had to worry about how to actually install one of these apps – to configure your phone, to set up the right dependencies and configuration and everything? Well, the answer is probably never. You’ve never once had to worry about that. Because what do we do? We open up the app store, we click the install button, stuff gets downloaded, things are installed, and within a couple of seconds, we have an app on our phone. And then from there, I can tap on one of those apps, and I’m going to get it running on my phone. And again, I don’t have to worry about the other apps on my phone. I don’t have to worry about how launching one app may affect another. They’re all running in isolated environments. Well, containers are much the same thing. Sure, they’re implemented a little differently, but they provide a similar kind of isolation, where every containerized process runs in an isolated environment, completely independent of other containers, but also independent of the host.

So let’s look at an example here. Imagine that I wanted to run PostgreSQL. Well, with a containerized environment, I can go to Docker Hub, and I can say “find PostgreSQL.” And in fact, if I switch over to Docker Hub in the browser here, I can see the PostgreSQL image, and I can see lots of details about this image, such as all the different versions that exist. I can use the latest 17.1, or if I need an old version, I can run 12.21. I can also go into the tags tab and see a lot of other ones. There’s additional documentation on how I can run Postgres, but what this allows me to do is jump over to a terminal here, where I’m going to run a `docker run` command. And this is very similar to the app install and run that we were talking about with the smartphone analogy. What this is going to do is say: I want to run a container, specify an environment variable (I’ll talk about the port mapping here in just a second), and run the Postgres image at this specific version, 17.1.

And when I press Enter, Postgres is going to start. I already had the image downloaded; if I didn’t, we would see the download occur first. But now, with all the log output – okay, Postgres is running. Now, if you remember, I mentioned that each of these containers is running in an isolated environment. And so what this port mapping is doing is basically poking a hole through the network isolation so that I can access the database from my local machine. Otherwise, I wouldn’t be able to access it at all. So what this is saying is: connect port 5432 of my host to port 5432 of the container. What this allows me to do then is open up another tab here, and I’m just going to use the natively installed psql command line tool and say, hey, I want to connect to localhost. I’m going to use the password that I configured, which was just “dev”. And now I’m connected to the database. Again, I didn’t have to install anything. I didn’t have to configure anything. It just works. I can navigate around. Obviously, there are no tables here; it’s an empty database. But then when I’m done, I can just press Ctrl-C and the database is done. It stopped. The container is not running anymore. And then if I wanted to, I could delete the image – basically that package that had everything – and it would be gone. There’s nothing left installed on my machine. So again, it’s this idea of being able to have these portable, isolated processes, much like smartphone apps. But one of the big differences is that I can start to connect multiple of these containers together to create a development environment. And we’ll see that here in just a few moments.
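The commands from this part of the demo look roughly like the following – the exact flags aren’t shown verbatim in the transcript, so treat this as a sketch:

```shell
# Run Postgres 17.1 in a container, setting the superuser password to "dev"
# and mapping port 5432 on the host to port 5432 in the container
docker run -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:17.1

# In another tab, connect with the natively installed psql client
psql -h localhost -U postgres
# ...enter the password "dev" when prompted
```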

So jumping back to my slides here, to give proper definitions around this: a container, at the end of the day, is just an isolated process on your machine, just like the smartphone apps. It’s not a full virtual machine, so there’s no separate guest kernel, no hypervisor involved, etc. – containers share the host’s kernel. It’s just an isolated process on the machine. The image – think of that as the install package that provides everything that’s needed to run that containerized environment. So it provides all the binaries, all the files, all the configuration and dependencies, and so on. If I had a Python app, it would have the Python runtime, and so on from there.

 

Finding images (6:22)

So digging in a little bit more, where do I find these images? Well, we just saw one example of Postgres coming from Docker Hub. And Docker Hub is the world’s largest container image marketplace. It’s a service that we run, that we provide. And there are lots of different images available. In fact, if I switch back over to my browser here on Docker Hub, I can explore many of the images. If we look at the set of Docker Official Images, we’ll see that there are a lot of different types here. Some of these are images that provide ready-to-run software, such as Redis and Postgres, and nginx here as well. But for some of the others – what does it even mean to just run them? Take Python, for example. The Python image provides a good base for me to extend and build my own applications on. Because again, I’m not just going to run Python; I want a Python image that has the Python runtime, pip, and the other tools I would expect for a Python environment. Then I can use that as a base for my own images and extend from there. And again, there are a lot of images available to you here. We’ve got Docker Official Images – these are images that we produce, we maintain, we secure, and they provide a great base for many organizations to leverage.

 

Building images (7:40)

When you’re ready to build your own images – again, there’s an entire other session focusing on building images and the best practices around them – you’re typically going to use a Dockerfile. The Dockerfile provides the instruction set for how you build that image. In this example, it’s starting from Python, copying in some files, running some commands to install dependencies, etc. And again, it’s creating that install package that contains everything needed to run, in this case, a Python application. I can build it using the CLI with a `docker build` command, and I can give that image a name – in this case, moby/my-app. And then I can push it to a registry. That registry could be Docker Hub, it could be AWS ECR, it could be an internal JFrog Artifactory. There are a lot of different registries, and lots of opportunities there.
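As a rough sketch of what that Dockerfile and workflow might look like (the file names and dependency manifest are illustrative, not the exact slide example):

```dockerfile
# Start from the official Python base image
FROM python:3.12

WORKDIR /app

# Copy in the dependency manifest and install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the rest of the application source
COPY . .

CMD ["python", "app.py"]
```

And then building and pushing from the CLI:

```shell
docker build -t moby/my-app .
docker push moby/my-app
```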

 

Using Docker (8:40)

So again, containers, isolated processes, images, contain everything that you need to run an application. How does Docker fit in this? What are the different services and tools that Docker provides? Well, the first one I’ll mention is one that most people are already familiar with.

  • Docker Desktop: the single one-install solution for me as a developer. I install it on my machine, it gives me all the tools that I need to develop, test, build everything with containers. And we’ll see more examples of that here in just a little bit. 
  • Docker Hub: a marketplace where I can find images, I can share images as well, and create image repositories there. 
  • Docker Scout: a product that we provide that basically answers the question: I built an image, but how do I know if I built a good image? Does that image meet the compliance policies of my organization? With Scout, I can analyze my image against various policies – we’ll see some examples of it here in just a little bit. And if I’ve got issues, Scout can give me recommendations on how to fix them. So as a developer, I can build those images, test them, analyze them, etc., and make sure they’re going to be compliant long before I push the code and the CI/CD processes kick off. So again, we’re trying to shorten the feedback loop there. 
  • Docker Build Cloud: helps me build my images faster. So with managed builders in the cloud, it lets me build faster and leverage caches from previous builds. And we’ll see some examples of that later on. 
  • Testcontainers Cloud: helps me delegate the running of containers to cloud-based environments, which is super helpful to run integration testing, especially in CI environments. And again, we’ll see some examples of that shortly. 

 

Docker Desktop (10:20)

Diving into Docker Desktop a little bit more, I just want to clarify – because there’s a lot of confusion out there – that Docker Desktop is more than just the GUI up here in the top right. It provides everything that’s needed to work with containers on my local machine. I can build images, I can run containers, I can spin up a Kubernetes cluster and test out Helm charts; it has cross-platform support and integrates with all of our other products and services as well. It does provide the GUI and the CLI, but most importantly, especially for those of you in the enterprise space, it provides enterprise controls so that an organization can ensure development environments are kept safe and secure. Those can include settings and controls around which registries folks can pull from, as well as other settings to keep developer workstations secure.

Now, to actually adopt and use containers – and this is where the fun is going to start, and a lot of the demos – there are kind of two different tracks. The first one is what we call container-supported development. This is basically how you use containers in development and testing – everything before you push code and CI pipelines kick off. How can I set up my development environments? How can I help my teams be consistent? How can I test with containers? etc. This provides consistency and allows me to decouple myself from remote environments, and we’ll see some examples of that here in just a moment. One thing I’ll call out is that you can use containers in local development whether the main app you’re developing is actually containerized or not. And we’ll see some examples of that here in just a second as well.

The other track then is – how do I actually build my application in a container and how do I prepare for production? How do I build it? How do I follow the best practices? How do I help it be secure? And this is typically going to involve more teams – our CI teams, our platform teams, SREs – because it changes the way that we actually deploy our applications.

 

Container-supported development (12:30)

Jumping in, we’re going to start with container-supported development. How can I use containers in local development? We saw earlier that I ran a Postgres container, but let’s expand on that storyline a little bit. Imagine I have an application – my catalog service – and this application just presents an API. The application stores its data in a Postgres database, just like we saw earlier; I can run that in a container. But the catalog also needs to store images, and let’s say we store those in AWS S3. My application also leverages external data services – for example, an inventory service – so when I look up the catalog, I can know how many of a specific item exist at a specific location. As changes occur within the catalog, the application publishes updates to a Kafka cluster, where other downstream services can then respond to those events, send out notifications, or whatever.

So if we think about this, there are a lot of moving pieces here. And so the question is, how do we set up a development environment for this? Now, one thing that we see a lot is that folks will say, cool, I’m going to spin up the API in the local development environment, but I’m going to have it connect to services that are deployed somewhere else, maybe in a shared AWS account for development. My app needs S3, it needs the external inventory services – let me just allow the app to connect to the services that are running somewhere else. And most of the time this works out pretty well. But once you realize that that single developer is on a team of developers, now there are a lot of other developers fighting over the same resources. And what happens when the IAM policy for the S3 bucket gets messed up and now nobody can push or pull objects out of the bucket, or somebody accidentally deletes the tables from the Postgres database? Again, there are a lot of opportunities for folks to step on each other’s toes, and permissions can get messed up. And at the end of the day, during development, I don’t care about most of that stuff.

Can I simplify my development environment? Well, obviously the answer is yes – that’s why we’re talking about this. So as an alternative, what if on my developer workstation, instead of actually connecting to the AWS account in my dev environment, I ran an S3-compatible service locally? At the end of the day, my code is just talking to an API. It doesn’t really care if it’s actually S3 behind it. So what if I swapped out that S3 with an S3 API-compatible service and ran that locally? Well, if I can do that, why can’t I do the same with my inventory service? At the end of the day, the inventory service is just an API. So what if I mocked all that out? And what opportunities does that open up? We’ll see that here in just a second. I can also run Kafka locally, I can run Postgres locally, etc. And so even just with this, what was a remote development environment is now local. But I can start to add additional capabilities to this too. What if all of my developers aren’t very good at Kafka and don’t understand all the different CLI commands and arguments and all that kind of stuff? Well, wouldn’t it be nice if I could just spin up another service that allows me to interact with the Kafka cluster – to maybe test the publishing of messages or validate that messages are being produced correctly? Alternatively, I can also spin up an open source tool to visualize what’s going on in the database, maybe pgAdmin. And so again, I can start to enhance my development environment with capabilities that would have been hard to provide otherwise.

 

Demo (16:18)

Let’s take a look at this. All right, I’m going to switch over to VS Code now. And here in VS Code, I’m going to start off with this Compose file. Now, I know I haven’t talked about Compose yet, and we’ll talk more about it in another session. But think of Compose much like the `docker run` command: if I go back to the `docker run` command I did for Postgres earlier, I specified an environment variable and ports and then the image. Well, I’m doing much the same thing here, just in a different format. Here’s the image, here are the ports, some environment variables, and some file volumes, which we’ll talk about later. But again, it’s letting me outline: here’s one container, here’s another container, and another container. There are lots of containers being defined here to set up my environment. And what this allows me to do then is just run `docker compose up`. I’m adding `-d` to run this in the background. And this is going to spin up the containers. Now again, some of these containers are providing the database. I’m emulating AWS using an open source tool called LocalStack, which gives me that S3-compatible API locally. I’m going to configure that local AWS environment with a bucket, and so on; there’s the Kafka configuration as well. And so now, within a few seconds, my app’s dependencies are up and running.
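That Compose file looks roughly like this – a heavily trimmed sketch; Postgres and LocalStack are named in the session, but the other images, ports, and settings shown here are assumptions:

```yaml
services:
  postgres:
    image: postgres:17.1
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: dev
    volumes:
      - ./dev/db:/docker-entrypoint-initdb.d   # seed scripts (illustrative path)

  aws:
    image: localstack/localstack               # S3-compatible API running locally
    ports:
      - "4566:4566"

  pgadmin:
    image: dpage/pgadmin4                      # database visualizer
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: dev

  kafka:
    image: apache/kafka                        # local Kafka broker (illustrative image)
    ports:
      - "9092:9092"
```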

For this demo, I’m actually going to run the app natively on my machine. I’m just going to run `yarn dev` because it’s a Node app. And what’s going to happen is, since the database is exposed on my host, my app is just going to connect to that database through that exposed port. I’m going to open up another tab here and go to some helper scripts I’ve got that can create products. So in this case, I just created three products by interacting with the API exposed by this application. Now, I mentioned that I’ve got a couple of other services that are part of the stack. If I jump back to my browser, I can see that on localhost:5050 (that’s just the port I chose here), I’ve got pgAdmin running. pgAdmin is an open source tool that helps me visualize Postgres databases. I can open up the server – this password is configured in the Compose file – and from there, I can navigate through all the schemas and the tables. Let’s just view all the data for that table. I see the three items that I created just a moment ago. Let’s go and change one of these. I’ll just change it to “Another product” and then commit the transaction. And now if I go back to VS Code, I can say, “Hey, get me product 2,” and I can see that the name has been changed here as well. Okay, so again, with that database visualizer, I didn’t have to spend any time figuring out how to set it up, install it, or configure it. It’s just an open source tool whose creator publishes a container image, and now I can just run it. It’s that simple.

Now, you’ll also see that there’s some inventory data here. Where’s that coming from? Well, I’m running an inventory mock using a tool called WireMock, and this tool allows me to put in some file mappings. So when my API makes a request to the inventory service for product number 2 (and I can configure this however I want), for that lookup I’m going to return a 200 status with this JSON body. And we can see that that gets merged in. Now, one of the big advantages of using mocks is that I can start to test out other error states. So for example, when I go to this other endpoint, I’m actually going to have the inventory service send me a 500 with a JSON body containing a message indicating the error that occurred. And so I can validate that my code handles that 500 error case – it doesn’t blow up, it provides an error message back, and it still provides the rest of the relevant details that I want. So again, I can start to test out all these other conditions that would be hard to exercise if I were actually talking to the real service. Containers allow me to just spin up these different services: okay, I want a mock here, I want a database visualizer there, etc.
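A WireMock stub mapping for that product 2 lookup might look roughly like this (the URL path and payload fields are assumptions; the mapping structure is standard WireMock JSON):

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/api/inventory/2"
  },
  "response": {
    "status": 200,
    "jsonBody": {
      "productId": 2,
      "quantity": 12,
      "location": "warehouse-1"
    },
    "headers": {
      "Content-Type": "application/json"
    }
  }
}
```

The error case is just another mapping like it, with a 500 status and an error message in the body.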

Actually, one more little demo. If I go to localhost:8080, I’ve got another open source tool called Kafbat UI, which lets me visualize what’s going on in my Kafka cluster. And I can see the messages that were published when those products were created. So again, I can validate that the messages were published, that they have the right schema, and so on and so forth. Containers allow me to start thinking of my application stack – going back to VS Code – maybe not so much like the smartphone apps I started off with, but almost like a bunch of Lego bricks. What are the different services and components that I can plug together to make a development environment that helps me be more productive, makes it easier to troubleshoot and debug what’s going on, etc.?

So a couple of tips. This works because I focus on the protocols and abstractions between the services. I can run a database locally because it speaks the same protocol that a managed database service would provide in my cloud environment. The same goes for my APIs – they should be well understood and well documented, and then I can mock them out locally. In this case, I mapped the container ports to the host, which allows my natively running app to connect to those services. And even though my app is running natively here, I could actually run it in a container as well. For example, I’ve got another Compose file where the app itself runs in a container, basically building a Node environment. So instead of `yarn dev` running directly in a CLI on my host, it runs inside a containerized environment. In that case, I could clone this repository and, if I use this particular Compose file, everything would be running in containers – I wouldn’t have to have anything installed on my machine other than Docker Desktop. So again, it opens up lots of good opportunities there.
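That second Compose file adds a service for the app itself, along these lines (the build context, port, and command are assumptions based on the description):

```yaml
services:
  app:
    build: .            # build the Node environment from the local Dockerfile
    command: yarn dev
    ports:
      - "3000:3000"
    volumes:
      - ./:/app         # mount the source so changes are picked up
    depends_on:
      - postgres
      - kafka
```

One difference worth noting: inside the Compose network, the app reaches the other services by their service names (for example, postgres) rather than localhost.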

And then again, when you’re thinking about your development environment, think about the additional tools and capabilities you can provide to expand its usefulness and make troubleshooting and debugging easier – for databases, Kafka clusters, or whatever else it is you might be using.

Challenges of integration testing (23:05)

So the question is: when I’m ready to go beyond development and start testing, then what? Integration testing is challenging, and a lot of that is because we have to manage the lifecycle of the services. How do we start them? How do we stop them? I need a database running in order to run the tests, etc. And how do we do so without just keeping long-running infrastructure around, where test run one over here is going to be influenced by test run two? Again, it’s a mess. Traditionally, we try to follow the testing pyramid, where we have lots of unit tests, then some integration tests above that, and then even fewer end-to-end tests. But since integration testing is hard, a lot of times folks end up with the testing hourglass, where they kind of say, hey, that part’s hard, so we’re just going to push it upwards in the stack and do more QA testing and more end-to-end testing, etc. But end-to-end testing is very time-consuming. It’s also very brittle and hard to do right. So the question is: how can we leverage the power of containers to help with integration testing? And that’s where the Testcontainers framework comes in.

 

Introducing Testcontainers (24:15)

Now, I’ll go ahead and clarify up front – Testcontainers is not a different type of container. It’s still this idea of an isolated environment, but it’s bringing the power of containers into the testing world. What Testcontainers provides is an open source collection of libraries that allow me to manage my containers programmatically. So as an example here, I’ve got a Java snippet that lets me just say, “Hey, I need a Postgres container. Here’s the specific image I want to use,” and assign it to a variable. And since this is a programmatic approach, if I put this in my test startup scripts, those scripts can spin up all the different services that I need. I can run my tests, again talking to those real services, and then when I’m done, the Testcontainers framework will help ensure that everything gets cleaned up – all the containers, all the volumes, networks, everything else, it’ll make sure everything gets removed. The other advantage of it being a programmatic approach is that you can start to create your own abstraction layers around it. So maybe you want to say, “I want a Postgres container with this particular data snapshot” – again, you can start to do that.

Testcontainers does come with a lot of ready-to-go modules. So it can be really easy to spin up some really complicated stuff. And there’s a wide library of them today and it’s constantly growing as well.

 

Demo: Testcontainers (25:40)

So I’m going to show off a little bit of this as well. We’re going to go back to that same project we were looking at just a second ago. If I open up the test directory, I’ve got an integration test here, and we can see that before all the tests run, I’m going to create and bootstrap a Postgres container, and I’m also going to create and bootstrap a Kafka container. If we look at this Postgres container – again, this is using the Node SDK – I’m going to create a Postgres container, bind mount some database initialization scripts that will populate a schema, and then start it. Once this promise is resolved, I can extract the details and set some environment variables that the code base will then pick up and leverage. So I extract out the username, password, host, port, and database. The nice thing here is that the Testcontainers framework is, by default, going to expose the container’s port on an ephemeral host port. So even though the container exposes port 5432, on my host it may be 55017 or whatever. And this also allows me to run lots of tests in parallel. Then I’ve got a similar thing for the Kafka cluster, where I’m going to start the Kafka container, and before I return, I’m actually going to go ahead and create the topics that my code needs. And then from there, looking at one of these tests, for example, I’m going to create a consumer, and then when I actually create a product, I’m going to validate that I actually get the message published that I expect. Again, this is validating that everything is working. Let’s go ahead and run this.
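For reference, that setup with the Node SDK looks roughly like the following – a trimmed sketch; the init-script path, environment variable names, and Jest-style hooks are assumptions, not the project’s actual code:

```javascript
const { PostgreSqlContainer } = require("@testcontainers/postgresql");
const { KafkaContainer } = require("@testcontainers/kafka");

let postgres;
let kafka;

beforeAll(async () => {
  // Start Postgres, bind-mounting a schema bootstrap script (illustrative path)
  postgres = await new PostgreSqlContainer("postgres:17.1")
    .withBindMounts([
      { source: `${__dirname}/init.sql`, target: "/docker-entrypoint-initdb.d/init.sql" },
    ])
    .start();

  // Hand the ephemeral connection details to the code under test
  process.env.DB_HOST = postgres.getHost();
  process.env.DB_PORT = String(postgres.getPort()); // mapped to a random host port
  process.env.DB_USER = postgres.getUsername();
  process.env.DB_PASSWORD = postgres.getPassword();
  process.env.DB_NAME = postgres.getDatabase();

  // Start Kafka; topics can then be created with the app's Kafka client
  kafka = await new KafkaContainer().start();
}, 120000);

afterAll(async () => {
  // The framework also reaps leftovers, but stopping explicitly keeps runs tidy
  await kafka.stop();
  await postgres.stop();
});
```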

All right, so I’m just going to run this from the CLI, and while it does so, I’m going to pull up the Docker Desktop GUI next to it so we can see it launching the containers. In fact, I’ll go ahead and collapse this stack here so we can see a little more easily. So I’ll start the test, and we can see in the log output that it’s starting containers, and we can see them showing up here in the Docker Desktop dashboard as well. So yeah, lots of log output here – I’ve got the debug log enabled right now. And again, it’s spinning up the containers, it’s running the tests, and then once it’s done, you see that it’s removed those containers. This last Testcontainers helper container is used to help clean things up, and you see that it goes away automatically as well.

So again, the Testcontainers framework allows me to leverage the power of containers in my testing environment. And if I jump over to the browser, there’s support for a lot of different languages. Now, obviously the way that you write Testcontainers code is going to look a little different depending on the language, and the libraries try to keep the idioms of that particular language. With that, the last thing I’ll mention here is that a lot of teams may not have the ability to launch containers in their CI environment – maybe because they’re using Kubernetes pods that can’t do Docker-in-Docker or run in privileged mode.

And so this is where Testcontainers Cloud comes in. With Testcontainers Cloud, I can basically delegate the running of those containers to cloud-based resources. So there’s no more need for Docker-in-Docker, your pipelines are more secure, etc. And in fact, I can run those same tests that I did before – I just flip the toggle to use Testcontainers Cloud. Now I’m going to run the same tests, and we’ll see this time that it’s actually using the cloud-based environment. So you see some pulls going on here, but you don’t see any activity in the local dashboard. Again, Testcontainers Cloud allows me to delegate the running of those containers to cloud-based services, which is especially helpful in CI environments, or maybe if I needed a GPU for AI and machine learning workloads, etc.

 

App containerization (29:57)

So let’s talk about how we containerize our applications. We’ve talked about development, we’ve talked about testing. Now, how do I actually bundle and build my application and prepare it to deploy, and how does Docker help me do that? The first thing I want to get into is that a lot of teams, when they’re starting this journey, feel like they have to follow all the best practices from the start, and it’s really easy to get analysis paralysis. I want to be clear right up front that it’s okay to take little baby steps. What I encourage a lot of teams to do first is just get your app in a container. It doesn’t have to be pretty. It doesn’t have to be the most optimized. It doesn’t have to be the smallest image in the world. Just get something working. So the first goal is simply to get your app in a container.

After that, how do you automate it, so that every time you push code, it gets built? That will integrate into your CI pipelines. Again, automate the building of that image so you can start to see changes over time. Then from there, focus on optimization: how do you make the image smaller, faster, and leaner? And then leveraging caches, all that kind of stuff. Again, we’ve got entire sessions focused on image-building best practices. And cutting across all of this – I should have mentioned this up front – you want to make sure that you’re meeting your organization’s compliance needs and policies, so that even with the first build, you don’t open yourself up to lots of vulnerabilities. So what are the tools to help us out with each of these different steps?

 

Docker Init (31:41)

Docker Init is a tool that’s designed to help bootstrap your containerization effort. It has support for seven different languages; it can detect which language a particular project uses and will give you a Dockerfile and a Compose file to help you get started. Now, I’ll go ahead and say up front, it may not be perfect, but it will at least give you a good step forward and get you started on your journey. And of course, since these are just files in your local file system, from there you can start to make changes, swap out the base image for a different one, etc.
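Using it is as simple as running it in the project directory:

```shell
cd my-project
docker init
# Answer the prompts (language/platform, version, ports, start command), and
# Docker Init writes a Dockerfile, compose.yaml, and .dockerignore for you.
```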

 

GitHub Actions (32:18)

When you’re ready to start in your CI environments, we’ve got a lot of tools to help out with that. We’ve got a number of official GitHub Actions that can help with the building and pushing of images, integration with Docker Build Cloud (which I’ll talk about a little more in just a second), and Docker Scout as well. We talked about Testcontainers Cloud just a few moments ago, and if you want to run your tests using Testcontainers Cloud, there are official actions to support a lot of that too. We’re also starting to see some folks using these GitHub Actions even in GitLab CI environments, now that GitLab CI can leverage some GitHub Actions as well. And then from there, I just want to mention a couple of quick tips and best practices.
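A minimal build-and-push workflow using the official actions looks something like this (the registry credentials, tags, and version pins are illustrative):

```yaml
name: build

on:
  push:
    branches: ["main"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Set up Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: moby/my-app:latest
```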

 

Quick tips/best practices (32:59)

Now, there’s a lot more that we can dig into here, but just to whet your appetite a little bit. First off, when you’re starting off on this journey, it’s quite common for folks to say, hey, I need credentials – whether that’s to pull dependencies from a private npm registry, an SSH credential, or whatever else it might be. But the thing I want to advise you: never include secrets in your images. There are ways to provide secrets at build time using build secrets; just don’t bake them into your image. I know we haven’t talked about image layering and how an image is actually structured, but if you put your secrets in one layer and then in the next layer say, hey, I’m going to delete that file, at the end of the day you’re actually still shipping the credentials – the earlier layer still contains them. So just be very mindful of that. Don’t include secrets in your images.
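As a quick sketch of what build secrets look like (the secret id and file paths here are illustrative): the secret is mounted only for the duration of that single RUN instruction and never ends up in an image layer.

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20
WORKDIR /app
COPY package*.json ./
# Mount the npm credentials only while installing dependencies
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
```

```shell
docker build --secret id=npmrc,src=$HOME/.npmrc -t moby/my-app .
```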

Start with trusted base images – whether those are ones that we provide or ones that your organization provides, start from a good, trusted base.

Using multi-stage builds helps you separate build-time dependencies from runtime dependencies and helps you create much smaller, leaner, and more secure images for production. Again, we can dive much deeper into that in future sessions.

You also want to reduce the image size wherever possible. The smaller the images are, the faster they are to push and pull, and the less storage space they take up. One way to help with that is to structure the layers – basically the instructions in your Dockerfile – to promote cache reuse.
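To tie a couple of these tips together, here’s a rough multi-stage Dockerfile sketch for a Node app (the file names and commands are illustrative): the dependency manifests are copied before the rest of the source so the install layer stays cached until the dependencies actually change, and only the built output lands in the final, slimmer runtime stage.

```dockerfile
# Build stage: includes dev dependencies and build tooling
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: only what's needed to run the app
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/index.js"]
```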

Now, each of these items probably deserves a whole five- or ten-minute section with demos and everything; again, we’ll dive into that in the future. But those are some things to think about here. Now, on to some of the tools that help out with a lot of this.

As I mentioned earlier, we’ve got a tool to help out with builds, and that’s Docker Build Cloud. The idea with Build Cloud is that rather than just using the builder on my local machine every time I do a build, I can leverage builders that are running in the cloud. So if I’ve got a team of five people, one person’s build – again, assuming I’m using the cloud builder – will populate the caches that everybody else on my team can leverage. This is especially helpful in CI environments, where every time I run a job, I’m starting from a clean slate. How can I leverage the caches from previous builds if I’m starting fresh every time? Well, the cloud builder helps me keep those caches because it’s constantly running.
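Hooking up a cloud builder looks roughly like this (the org and builder names are placeholders; check the Build Cloud documentation for the exact setup for your account):

```shell
# Create a builder backed by Docker Build Cloud (one-time setup)
docker buildx create --driver cloud myorg/default

# Use it for a build; the cache lives with the cloud builder, not this machine
docker build --builder cloud-myorg-default -t moby/my-app .
```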

And then the second tool, which helps out with compliance, is Docker Scout. Docker Scout gives me, one, the ability to set organizational policies, and two, the ability to analyze my images against those policies in my local development environment. And so with that, let’s do a little bit of showing off here.

 

Demo (35:58)

I’m going to switch back to VS Code, and I’m going to open up the Dockerfile here. I’m not going to spend a lot of time here, but just to show a little bit, I’m going to do the build. I’m just going to call this demo-app, I’m going to tag it v1, and the final dot says, here’s the location of the build context. Now, this build will happen pretty much instantaneously because I just built it earlier, so you see a lot of cached output here. What this allows me to do is run `docker scout quickview` and get some further details about this image. What Scout’s going to do is basically analyze my image, look at the SBOM – the Software Bill of Materials – and see what’s in the box. What dependencies are there? What OS packages? What other application dependencies? Are there issues with any of those? What open source licenses are attached to them? But also, how is my application itself configured? Is the container configured to run as a non-root user by default? There can be a lot of other policies involved here. But one of the big advantages of Scout is that it recognizes my image hierarchy. It knows that in this case I built from node:20, so when it generates this report, it can tell me: did I inherit the issues, or did I actually introduce them myself?
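The commands in this part of the demo are roughly the following (the image name and tag match what’s described above):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t demo-app:v1 .

# Quick summary of vulnerabilities, including the base image's share of them
docker scout quickview demo-app:v1

# Detailed list of CVEs, including fix versions
docker scout cves demo-app:v1
```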

It’s actually good timing here, and we see that my image has issues. I currently have 7 high vulnerabilities, 8 medium, and 108 low – quite a bit. And it’s telling me that the node:20 image – the base image I’m leveraging – brought along 3 of those high, 4 of the medium, and 107 of the low. I’m inheriting a lot of issues. So if I change my base image, it can tell me basically the change that occurs just by changing the base image. I can also get some additional details about the CVEs that it discovered; in fact, I’ll go ahead and run that real quick. This will give me more detail: here are the specific vulnerabilities that were found, here are the fix versions associated with them, and here are links to dive in and understand them more fully. While it’s doing that, I can also show the same thing in the GUI, where I can see an analysis of what’s going on in this particular image. I think this is taking a little longer than it normally does because I’m recording, so my machine is doing a lot of other processing right now, but this gives me a GUI-based view of a lot of what I’m seeing in the CLI. I’ll be able to click through the vulnerabilities, see the fixed versions for them, and get recommendations on how I can fix the image and make it more compliant.

Now, this should be finishing up here in just a second. One of the things I can do – I didn’t do it for this particular build – is actually generate the SBOM at build time and attach it to the image. The reason this is going a little slower is that I didn’t include that option, so Scout is having to generate the SBOM every time I do the analysis, when I could have it done once at build time. But again, I can see basically the ancestry of my image, and I can see that, okay, I’ve got an issue with express 4.17, and there are three different CVEs associated with it. This particular one gives me details and tells me, okay, for the fix version I need to go to at least 4.20; this one says 4.19.2, and this final one says 4.17.3. So with that knowledge, I can go back to my code base, change my package.json, and say, hey, I need to update express to 4.20.0. I can rebuild and re-analyze, run all my tests, make sure everything still works, etc. So Scout helps me keep track of all of that before anything has to be deployed.
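Generating the SBOM at build time is a single flag on the build (note: depending on your builder setup, storing the attestation may require Buildx with the containerd image store or a push to a registry):

```shell
docker buildx build --sbom=true -t demo-app:v2 .
```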

Now, I talked about Build Cloud. When I push this code, I actually have two different workflows that get triggered, and you’ll see that for every commit there are these two workflows: one that builds with just plain GitHub Actions runners, and one that uses Docker Build Cloud. You can see that there’s a difference in the times here – the builds using Docker Build Cloud are hovering around the 30-second mark. This last one had a dependency change, so it took a little longer, but without Docker Build Cloud it takes about two minutes. So it takes quite a bit longer, and this is a workflow that’s just running the build. Again, I’d see the same kind of difference with Testcontainers Cloud – launching containers, running containers, etc.

 

Summary (41:16)

So again, all these different services are helping me streamline my development practices. The Docker suite of services, as we saw earlier – Docker Desktop, Docker Hub, Docker Scout, Docker Build Cloud, and Testcontainers Cloud – is really designed to help me develop with containers, test with containers, build and secure my images, and then eventually deploy with containers as well. All of this together is the Docker suite of services. So again, we’re excited to have you on this journey. Please tune into the future sessions, where you can dive in deeper, get hands-on, and learn how to actually do these things. And with that, thank you. I hope this has been beneficial to you, and we look forward to supporting you on your journey.

 

Learn more

Whalecome to Docker! In this introductory session, you’ll start off with the basics – “What is a container?” From there, you’ll learn and see the power of containers in local development and how Docker’s suite of services brings it all together. You’ll see consistent development environments, integration testing made easy, and tools to help secure your images and build them more quickly!

Note that this session is intended to provide a wide overview of everything Docker has to offer and doesn’t get into the technical how-to’s. Whether you’re brand new to Docker or an experienced user, there will be something new for everyone!

Our speakers

Michael Irwin

Senior Manager, Developer Relations
Docker