A step-by-step guide to help you get started using Docker containers with your Node.js apps.
Prerequisites
To complete this tutorial, you will need the following:
- Free Docker Account
- You can sign up for a free Docker account, which includes free unlimited public repositories
- Docker running locally
- Node.js version 12.18 or later
- An IDE or text editor to use for editing files. I would recommend VSCode
Docker Overview
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.
With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
Sample Application
Let’s create a simple Node.js application that we’ll use as our example. Create a directory on your local machine named node-docker and follow the steps below to create a simple REST API.
$ cd [path to your node-docker directory]
$ npm init -y
$ npm install ronin-server ronin-mocks
$ touch server.js
Now let’s add some code to handle our REST requests. We’ll use a mock server so we can focus on Dockerizing the application rather than the actual code.
Open this working directory in your favorite IDE and enter the following code into the server.js file.
const ronin = require( 'ronin-server' )
const mocks = require( 'ronin-mocks' )
const server = ronin.server()
server.use( '/', mocks.server( server.Router(), false, true ) )
server.start()
The mocking server is called Ronin.js and will listen on port 8000 by default. You can make POST requests to the root (/) endpoint, and any JSON structure you send to the server will be saved in memory. You can also send GET requests to the same endpoint and receive an array of the JSON objects you have previously POSTed.
Testing Our Application
Let’s start our application and make sure it’s running properly. Open your terminal and navigate to the working directory you created.
$ node server.js
To test that the application is working properly, we’ll first POST some JSON to the API and then make a GET request to see that the data has been saved. Open a new terminal and run the following curl commands:
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{
"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1","createDate":"2020-08-28T21:53:07.157Z"}]}
$ curl http://localhost:8000/test
{"code":"success","meta":{"total":1,"count":1},"payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1","createDate":"2020-08-28T21:53:07.157Z"}]}
Switch back to the terminal where our server is running and you should see the following requests in the server logs.
2020-XX-31T16:35:08:4260 INFO: POST /test
2020-XX-31T16:35:21:3560 INFO: GET /test
Creating Dockerfiles for Node.js
Now that our application is running properly, let’s take a look at creating a Dockerfile.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. When we tell Docker to build our image by executing the docker build command, Docker reads these instructions, executes them one by one, and creates a Docker image as a result.
Let’s walk through creating a Dockerfile for our application. In the root of your working directory, create a file named Dockerfile and open it in your text editor.
NOTE: The name of the Dockerfile is not important, but the default filename for many commands is simply Dockerfile, so we’ll use that as our filename throughout this series.
The first thing we need to do is add a line in our Dockerfile that tells Docker what base image we would like to use for our application.
Dockerfile:
FROM node:12.18.1
Docker images can be inherited from other images. So instead of creating our own base image, we’ll use the official Node.js image that already has all the tools and packages we need to run a Node.js application. You can think of this in the same way you would think about class inheritance in object-oriented programming. For example, if we could create Docker images in JavaScript, we might write something like the following.
class MyImage extends NodeBaseImage {}
This would create a class called MyImage that inherits functionality from the base class NodeBaseImage.
In the same way, when we use the FROM command, we tell Docker to include in our image all the functionality from the node:12.18.1 image.
NOTE: If you want to learn more about creating your own base images, please check out our documentation on creating base images.
To make things easier when running the rest of our commands, let’s create a working directory.
WORKDIR /app
This instructs Docker to use this path as the default location for all subsequent commands. That way, we do not have to type out full file paths and can use relative paths based on the working directory.
Usually the very first thing you do once you’ve downloaded a project written in Node.js is to install npm packages. This ensures that your application has all its dependencies installed into the node_modules directory, where the Node runtime will be able to find them.
Before we can run npm install, we need to get our package.json and package-lock.json files into our image. We’ll use the COPY command to do this. The COPY command takes two parameters. The first tells Docker what file(s) you would like to copy into the image. The second tells Docker where you want the file(s) to be copied to. We’ll copy the package.json and package-lock.json files into our working directory, /app.
COPY package.json package.json
COPY package-lock.json package-lock.json
Once we have our package.json files inside the image, we can use the RUN command to execute npm install. This works exactly the same as running npm install locally on our machine, but this time the node modules will be installed into the node_modules directory inside our image.
RUN npm install
At this point we have an image that is based on node version 12.18.1 and we have installed our dependencies. The next thing we need to do is to add our source code into the image. We’ll use the COPY command just like we did with our package.json files above.
COPY . .
This COPY command takes all the files located in the current directory and copies them into the image. Now all we have to do is tell Docker what command we want to run when our image is started inside a container. We do this with the CMD command.
CMD [ "node", "server.js" ]
Below is the complete Dockerfile.
FROM node:12.18.1
WORKDIR /app
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY . .
CMD [ "node", "server.js" ]
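One caveat with COPY . . is that it copies everything in the build context into the image, including a local node_modules directory if you have run npm install on your machine. A common refinement, sketched below under the assumption that the file sits next to the Dockerfile, is to add a .dockerignore file so those files are excluded from the build context:

```
# .dockerignore — keep the build context small
node_modules
npm-debug.log
```

With node_modules excluded, the modules installed by the RUN npm install step inside the image are the only ones that end up in the container, which avoids mixing in binaries built for your host platform.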
Building Images
Now that we’ve created our Dockerfile, let’s build our image. To do this, we use the docker build command. The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The Docker build process can access any of the files located in the context.
The build command optionally takes a --tag flag. The tag sets the name of the image and an optional tag in the format name:tag. We’ll leave off the optional tag for now to help simplify things. If you do not pass a tag, Docker will use latest as its default tag. You’ll see this in the last line of the build output.
Let’s build our first Docker image.
$ docker build --tag node-docker .
Sending build context to Docker daemon 82.94kB
Step 1/7 : FROM node:12.18.1
---> f5be1883c8e0
Step 2/7 : WORKDIR /app
...
Successfully built e03018e56163
Successfully tagged node-docker:latest
Viewing Local Images
To see a list of images we have on our local machine, we have two options. One is to use the CLI and the other is to use Docker Desktop. Since we are currently working in the terminal let’s take a look at listing images with the CLI.
To list images, simply run the docker images command.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
node-docker latest 3809733582bc About a minute ago 945MB
node 12.18.1 f5be1883c8e0 2 months ago 918MB
You should see at least two images listed: one for the base image node:12.18.1, and the other for the image we just built, node-docker:latest.
Tagging Images
As mentioned earlier, an image name is made up of slash-separated name components. Name components may contain lowercase letters, digits and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.
An image is made up of a manifest and a list of layers. Do not worry too much about manifests and layers at this point, other than that a “tag” points to a combination of these artifacts. You can have multiple tags for an image. Let’s create a second tag for the image we built and take a look at its layers.
To create a new tag for the image we built above, run the following command.
$ docker tag node-docker:latest node-docker:v1.0.0
The docker tag command creates a new tag for an image. It does not create a new image. The tag points to the same image and is just another way to reference the image.
Now run the docker images
command to see a list of our local images.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
node-docker latest 3809733582bc 24 minutes ago 945MB
node-docker v1.0.0 3809733582bc 24 minutes ago 945MB
node 12.18.1 f5be1883c8e0 2 months ago 918MB
You can see that we have two images that start with node-docker. We know they are the same image because if you look at the IMAGE ID column, you can see that the values are the same for the two images.
Let’s remove the tag that we just created. To do this, we’ll use the rmi command, which stands for “remove image”.
$ docker rmi node-docker:v1.0.0
Untagged: node-docker:v1.0.0
Notice that the response from Docker tells us that the image has not been removed, only “untagged”. Double-check this by running the docker images command.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
node-docker latest 3809733582bc 32 minutes ago 945MB
node 12.18.1 f5be1883c8e0 2 months ago 918MB
Our image that was tagged with :v1.0.0 has been removed, but we still have the node-docker:latest tag available on our machine.
Running Containers
A container is a normal operating system process except that this process is isolated in that it has its own file system, its own networking, and its own isolated process tree separate from the host.
To run an image inside of a container, we use the docker run command. The docker run command requires one parameter: the image name. Let’s start our image and make sure it is running correctly. Execute the following command in your terminal.
$ docker run node-docker
After running this command, you’ll notice that you were not returned to the command prompt. This is because our application is a REST server and will run in a loop waiting for incoming requests, without returning control back to the OS until we stop the container.
Let’s make a POST request to the server using the curl command.
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{
"msg": "testing"
}'
curl: (7) Failed to connect to localhost port 8000: Connection refused
As you can see, our curl command failed because the connection to our server was refused, meaning that we were not able to connect to localhost on port 8000. This is expected because our container runs in isolation, which includes networking. Let’s stop the container and restart it with port 8000 published on our local network.
To stop the container, press ctrl-c. This will return you to the terminal prompt.
To publish a port for our container, we’ll use the --publish flag (-p for short) on the docker run command. The format of the --publish flag is [host port]:[container port]. So if we wanted to expose port 8000 inside the container as port 3000 outside the container, we would pass 3000:8000 to the --publish flag.
Start the container and expose port 8000 to port 8000 on the host.
$ docker run --publish 8000:8000 node-docker
Now let’s rerun the curl command from above.
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{
"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}
Success! We were able to connect to the application running inside of our container on port 8000. Switch back to the terminal where your container is running and you should see the POST request logged to the console.
2020-09-01T17:36:09:8770 INFO: POST /test
Press ctrl-c to stop the container.
Run In Detached Mode
This is great so far, but our sample application is a web server, and we should not need to keep our terminal connected to the container. Docker can run your container in detached mode, that is, in the background. To do this, we can use the --detach flag, or -d for short. Docker will start your container the same as before, but this time it will “detach” from the container and return you to the terminal prompt.
$ docker run -d -p 8000:8000 node-docker
ce02b3179f0f10085db9edfccd731101868f58631bdf918ca490ff6fd223a93b
Docker started our container in the background and printed the Container ID on the terminal.
Again, let’s make sure that our container is running properly. Run the same curl command from above.
$ curl --request POST \
--url http://localhost:8000/test \
--header 'content-type: application/json' \
--data '{
"msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}
Listing Containers
Since we ran our container in the background, how do we know if it is running, or what other containers are running on our machine? We can run the docker ps command. Just as we run the ps command on Linux to see a list of processes on our machine, we can run the docker ps command to see a list of containers running on our machine.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 0.0.0.0:8000->8000/tcp wonderful_kalam
The docker ps command tells us a lot about our running containers. We can see the container ID, the image running inside the container, the command that was used to start the container, when it was created, the status, the ports that are exposed, and the name of the container.
You are probably wondering where the name of our container is coming from. Since we didn’t provide a name for the container when we started it, Docker generated a random name. We’ll fix this in a minute, but first we need to stop the container. To stop the container, run the docker stop command, which does just that: stops the container. You will need to pass the name of the container, or you can use the container ID.
$ docker stop wonderful_kalam
wonderful_kalam
Now rerun the docker ps command to see a list of running containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Stopping, Starting, and Naming Containers
Docker containers can be started, stopped, and restarted. When we stop a container, it is not removed; its status is changed to stopped and the process inside the container is stopped. When we ran the docker ps command, the default output only showed running containers. If we pass the --all flag (-a for short), we will see all containers on our system, whether they are stopped or started.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 16 minutes ago Exited (0) 5 minutes ago wonderful_kalam
ec45285c456d node-docker "docker-entrypoint.s…" 28 minutes ago Exited (0) 20 minutes ago agitated_moser
fb7a41809e5d node-docker "docker-entrypoint.s…" 37 minutes ago Exited (0) 36 minutes ago goofy_khayyam
If you’ve been following along, you should see several containers listed. These are containers that we started and stopped but have not been removed.
Let’s restart the container that we just stopped. Locate the name of the container we just stopped and replace it in the restart command below.
$ docker restart wonderful_kalam
Now list all the containers again using the ps command.
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 19 minutes ago Up 8 seconds 0.0.0.0:8000->8000/tcp wonderful_kalam
ec45285c456d node-docker "docker-entrypoint.s…" 31 minutes ago Exited (0) 23 minutes ago agitated_moser
fb7a41809e5d node-docker "docker-entrypoint.s…" 40 minutes ago Exited (0) 39 minutes ago goofy_khayyam
Notice that the container we just restarted has been started in detached mode and has port 8000 exposed. Also observe the status of the container is “Up X seconds”. When you restart a container, it will be started with the same flags or commands that it was originally started with.
Let’s stop and remove all of our containers and take a look at fixing the random naming issue.
Stop the container we just started. Find the name of your running container and replace the name in the command below with the name of the container on your system.
$ docker stop wonderful_kalam
wonderful_kalam
Now that all of our containers are stopped, let’s remove them. When a container is removed, it is no longer running, nor is it in the stopped status; the process inside the container has been stopped and the metadata for the container has been removed.
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker "docker-entrypoint.s…" 19 minutes ago Up 8 seconds 0.0.0.0:8000->8000/tcp wonderful_kalam
ec45285c456d node-docker "docker-entrypoint.s…" 31 minutes ago Exited (0) 23 minutes ago agitated_moser
fb7a41809e5d node-docker "docker-entrypoint.s…" 40 minutes ago Exited (0) 39 minutes ago goofy_khayyam
To remove a container, simply run the docker rm command, passing the container name. You can pass multiple container names in one command. Again, replace the container names in the command below with the container names from your system.
$ docker rm wonderful_kalam agitated_moser goofy_khayyam
wonderful_kalam
agitated_moser
goofy_khayyam
Run the docker ps --all command again to see that all containers are gone.
Now let’s address the pesky random name issue. Standard practice is to name your containers, for the simple reason that it makes it easier to identify what is running in the container and which application or service it is associated with, just as good naming conventions for variables make your code easier to read.
To name a container, we just need to pass the --name flag to the docker run command.
$ docker run -d -p 8000:8000 --name rest-server node-docker
1aa5d46418a68705c81782a58456a4ccdb56a309cb5e6bd399478d01eaa5cdda
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1aa5d46418a6 node-docker "docker-entrypoint.s…" 3 seconds ago Up 3 seconds 0.0.0.0:8000->8000/tcp rest-server
There, that’s better. Now we can easily identify our container based on the name.
Conclusion
In this post, we learned about creating Docker images using a Dockerfile, tagging our images and managing images. Next we took a look at running containers, publishing ports, and running containers in detached mode. We then learned about managing containers by starting, stopping and restarting them. We also looked at naming our containers so they are more easily identifiable.
In part 2, we’ll take a look at running a database in a container and connecting it to our application. We’ll also look at setting up your local development environment and sharing your images using Docker.
Learn more
- Dive into our Node.js language-specific guide