Published on Oct. 16, 2021
To scale up production, industries are adopting DevOps practices. DevOps provides not only rapid application development but also a high level of security, which container technology helps ensure within the DevOps toolchain. One of the most popular container tools is Docker.
In this article, we are going to learn Docker step by step, from the very beginning to an advanced level. You will find almost all the commands, and the concepts behind them, that we generally use while working with Docker.
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications.
To start this tutorial, you will need Docker installed on your system. If you are using an EC2 instance on AWS, Docker may already be available there.
We are not going to cover Docker installation in this article; you can find the complete installation guide on the official website.
To continue our learning, there is some basic Docker terminology we should know. Let's go through it one by one.
Images are what make it possible to launch a container in about a second. A Docker image is like an entire OS, created with just the minimal functionality of an application or operating system. There are pre-built images we can use, or we can create our own.
Docker Hub is Docker's online registry and community, where you can find all the publicly available images. You can also upload your own images and contribute them to the world. If you work on open source, it is a great place for you: uploading and updating your images there is super easy.
A Dockerfile is a text file containing all the configuration details of an image. We can create our own image using a Dockerfile. On occasions where we would otherwise set up an environment manually, we can use a Dockerfile to build a custom image instead. Once an image is ready, we can launch multiple containers from it.
Once you have your image ready, you can launch your container with docker run. The docker run command can create your entire OS, boot it up, and have it ready to use within a second. Containers are designed to be transient and temporary.
Docker Engine is the core of Docker: the underlying client-server technology that creates and runs containers. Generally speaking, when someone says "Docker" generically and isn't talking about the company or the overall project, they mean Docker Engine.
The most important thing while learning Docker is running a container. Running a container is like launching our application, so what are the prerequisites for that?
To check whether you have successfully installed Docker, run the command:
docker version
This will print the current version of Docker that you have installed.
The second prerequisite is an image. I already told you about Docker's online registry, Docker Hub. To download an image from the public repository, we have the command docker pull <image-name:version>. If you don't specify the version, it downloads the latest one available.
I am going to install CentOS 7:
docker pull centos:7
I am going to do all my practicals on CentOS 7, which is why I pulled the image of that Linux distro.
You can check whether your image was downloaded by listing all images:
docker images
Once we have successfully downloaded the image, we are ready to launch the container. Let's use the image we downloaded from Docker Hub:
docker run -it --name myContainer centos:7
NOTE: In the above command, the -it option provides an interactive terminal as soon as the container is launched. Generally we need to do some configuration on the container, which is why we need a terminal. Sometimes, though, we don't need to configure the container any further; all we need is to launch it. In that case we can skip the interactive terminal and run the container in the background.
To run the container in the background, use the -dit option:
docker run -dit --name myContainer centos:7
This will launch the container in the background and we will remain in the same shell.
You can list all your running containers using the below command:
docker ps
Using the -a option, you can list both running and stopped containers:
docker ps -a
We can see the details of a launched container using the inspect command:
docker inspect <container-name>
The above command will list all the details about the launched container.
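Since docker inspect prints a large JSON document, its --format flag is handy when you only need a single field. A small sketch, assuming the myContainer container from earlier:

```shell
# Print only the container's IP address on the default bridge network
docker inspect --format '{{.NetworkSettings.IPAddress}}' myContainer

# Print only the container's current state (running, exited, ...)
docker inspect --format '{{.State.Status}}' myContainer
```
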
You might need to do some configuration after launching the container, which is why we launch it with the -it option to get an interactive shell. When you have completed your configuration and need to exit from this terminal, use the command:
exit
This exit works fine: it lets you out of the container, but it also stops the running container. If you want to stop the container, there is no problem; but if you want to keep your container running while you come out of it, use ctrl+p+q instead.
This will not stop your container, and you will land safely back in the shell of your base OS.
You can stop a running container using
docker stop <container-name>
This will stop your running container and you can easily check that after listing the running containers using docker ps.
Let's stop myContainer, which we launched earlier.
Check the list of the running containers.
To restart your container, use
docker start <container-name>
Let's start myContainer
Confirm using docker ps
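Putting the stop/start cycle together, a quick sketch (assuming the myContainer container launched earlier):

```shell
docker stop myContainer    # stop the running container
docker ps                  # myContainer is no longer listed
docker ps -a               # ...but it still exists, in the Exited state

docker start myContainer   # restart the same container
docker ps                  # myContainer is running again
```
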
How can you go inside your container again? We already know the -it option, which gives us the container's terminal while launching it, but what if we want to go back in after coming out? Here is the command for that:
docker attach <container-name>
Note: You can only go inside your container while it is running, so first check that it is running and then use the above command.
Yes, you can actually run a command inside a container without going inside it. There might be a case where you want a command's output from your container while staying in your base OS; Docker has an option for exactly that:
docker exec -it <container-name> <command>
Let's test this command in our launched container
docker exec -it myContainer whoami
whoami is a Linux command that prints the name of the currently logged-in user.
Using the above concept we can open the container in a better manner. How?
docker exec -it <container-name> bash
It will launch the container's bash shell for you. It is the better way because, after using the exit command, it will not stop your running container. Don't forget to test it in your container.
Logs are essential for finding and fixing errors, and when working in a team it becomes very important to keep track of everything. To see the logs of any container, use the command:
docker logs <container-name>
Let's check the logs of our launched container
docker logs myContainer
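docker logs also accepts a few useful flags: -f follows the output live, and --tail limits how much history is printed. For example:

```shell
# Follow the container's log output live, like tail -f
docker logs -f myContainer

# Show only the last 20 lines, each prefixed with a timestamp
docker logs --timestamps --tail 20 myContainer
```
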
To know more about handling Docker containers, you can refer to the official Docker docs, where you can search for what you need and find the relevant command. You can also learn about any command from your base OS using the --help option after the command. For example:
docker --help
Now you have a basic idea of Docker, and you also know how to run and stop a simple container. Let's discuss some core concepts.
In the private world, switches are what connect two or more containers; to reach the public world, we need routers.
When we launch a container, the Docker engine creates all of this behind the scenes. We don't have to worry about these complex configurations: they are done automatically, and that is why the containers have connectivity with each other and with the public world.
To list all the available networks you can use the below command
docker network ls
There you will find three different drivers: bridge, host, and null. By default, the bridge driver is used when launching a container. These drivers are responsible for network connectivity.
The bridge driver provides the properties of both a switch and a router; hence, if you want connectivity with the outside world, use the bridge driver.
The host driver copies all the networking settings of the base OS, although it works only on Linux; Windows and macOS don't support it.
The third driver, null, keeps the OS disconnected from the public world. If we don't want to connect our container to the public world, we can choose this driver and it will keep our container private.
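You can confirm which driver a network uses, and see its subnet, gateway, and attached containers, with docker network inspect. For the default bridge network:

```shell
# Show the default bridge network's driver, subnet, gateway,
# and the containers currently attached to it
docker network inspect bridge
```
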
NOTE: It is advisable to create your own custom network before launching a container, because a custom network gives you some extra functionality.
In order to create your own Docker network, you can follow the below command -
docker network create --driver bridge <network-name>
If you know more about networking, you can also set the --subnet CIDR range and other options. To see the details, use the --help option, e.g. docker network create --help.
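As a sketch, the network name mynet and the address range below are arbitrary examples; --subnet and --gateway are standard docker network create options:

```shell
# A bridge network with an explicit subnet and gateway
docker network create --driver bridge \
  --subnet 192.168.100.0/24 \
  --gateway 192.168.100.1 \
  mynet
```
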
Once the network is created we can launch the container in it using the below command
docker run -it --name <container-name> --network <network-name> <image:version>
IMPORTANT: In real use cases we have multiple containers inside a network, and we may need connections between them. We therefore use the bridge driver for our network and connect via IP. But a container's IP is not guaranteed to stay the same after a reboot, and if it changes, it may break your setup.
To overcome this, Docker lets you ping (connect to) containers by their name rather than by IP, as long as they are in the same network.
If you launch two containers in the default network, named bridge, and try to ping one by its container name, you might fail. Try the same in your custom network and it works easily; that is the extra functionality of a custom network I mentioned earlier.
But there is a trick to ping containers by name inside the default network as well: add the --link option while launching the new container.
docker container run -it --link container1 --name <new-container> <image-name:version>
The above command launches a container that can ping container1 (previously launched in the default network) by its name. (Note that --link is a legacy feature; user-defined networks are the recommended approach.)
Now we are going to do a great practical using Docker's networking concepts.
In this setup, we are going to set up a web server, deploy our website on it, and also set up a load balancer to distribute the traffic.
In case you don't know about load balancing: it is a special layer in front of the containers, and incoming traffic first goes to the load balancer, which distributes the requests equally over the containers. How does this actually help?
Suppose you have 3 containers running and suddenly one goes down. The client doesn't know about it and will never feel any latency in getting the data, because the load balancer connects the client to the other running containers.
I already told you how to create your own custom network; in my case, I have used:
docker network create --driver bridge webnet
We have created the network and named it webnet (you can name it whatever you want). Check whether it was created successfully by listing all the networks available in your OS:
docker network ls
To host the website, I am going to use the Apache web server. At the beginning, I told you that in Docker we can download images from Docker Hub and also create our own. For this setup, I am going to use an image I created previously.
You may use this image, or you can directly use the official Apache image. For beginners, it is advisable to use the image below, because it already contains some configuration that you would otherwise have to do yourself.
We need an image to launch, so first pull the image (download it from Docker Hub):
docker pull anubhavsinghgtm/apache-php-webserver
When launching a container, Docker first checks the local disk and only then goes to the public repository to pull the image. So without manually pulling the image, you could directly run the container: Docker would first pull the image and then run the container. But it is always advisable to do the process step by step.
To set up the load balancer, we are going to use the --network-alias option when launching the containers. For my setup, I have used the commands below; you can adapt them on your own.
For container 1
docker run -dit --network webnet --name myc1 --network-alias website anubhavsinghgtm/apache-php-webserver
For container 2
docker run -dit --network webnet --name myc2 --network-alias website anubhavsinghgtm/apache-php-webserver
Here webnet is the name of the network we created earlier, and the network alias is named website. You can choose any names you like.
Check that both containers are running using the docker ps command.
Ignore it if you see this message: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message.
We need a client container in the same network to connect to the web server. In this setup we don't have a public IP, so we can't access our web server from the public world; that is the reason our client must also be in the same network.
docker run -dit --network webnet --name client centos:7
Run docker ps. You will find three containers running.
If you are using the above-mentioned image, you don't have to change anything; just read along once to follow the complete process.
Go inside the container:
docker exec -it myc1 bash
Inside the container, go to /var/www/html (this is the default directory from which the web server serves code) and create your PHP file there. If you are using the same image as I have, you will find an index.php file already.
NOTE: If you are using another image, it is suggested that you create a PHP file that prints the internal IP of the container; that will help us understand the concept of load balancing.
You can use the below code in the index.php file:
<pre>
<h2>Testing Apache Webserver over Docker</h2>
<?php print `ifconfig`?>
</pre>
NOTE: Make sure the ifconfig command works inside your container. If not, install the net-tools package, which provides the ifconfig command, and also install PHP to run the above code.
(For CentOS, use the command yum install net-tools.)
Do the same steps in both of the containers.
Now we are ready to check our results. Go inside the client container:
docker exec -it client bash
Now you are inside your client container. First, check whether you have connectivity with your web server container:
ping -c 3 myc1
Similarly, for the myc2 container. Just replace myc1 with myc2.
Let's check our setup. Use the below command:
curl website
You will see the IP of one of the containers. Try this 3-4 times, and you will see that sometimes you connect to the myc1 container and sometimes to the myc2 container.
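To watch the alternation without typing curl repeatedly, you can loop over the request from inside the client container. This sketch assumes the index.php from above, whose ifconfig output contains the container's IP on an "inet" line:

```shell
# Hit the "website" alias several times; Docker's embedded DNS
# resolves it to myc1 and myc2 in turn, so the IP should alternate
for i in 1 2 3 4; do
  curl -s website | grep 'inet '
done
```
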
Great! We have completed a practical that solves many of today's industry use cases.
From inside your Docker container, you can easily connect to the public world. You can try this with the ping command:
ping -c 3 8.8.8.8
8.8.8.8 is one of Google's DNS IPs.
There are two kinds of address translation involved here: SNATing, for connections going out from the container to the public world, and DNATing, for connections coming in from the public world to the container.
Because DNATing is disabled by default, in our previous setup we had to launch our client inside the same network, webnet.
In the real world, if we host a web server, we want clients from the public world. In that case, enabling DNATing is essential.
We can't connect directly from a public IP to a private IP, or vice versa. So to reach the container from the public world we need a public IP, and our container doesn't have one.
Wait... what? If our container doesn't have a public IP, and connecting to the public world also requires one, how is our container able to connect at all?
When we create a network inside Docker, we get a router that performs SNATing: when the container connects to the public world, its traffic first goes to the router, the router replaces the container's private IP with its own public IP, and then the connection is established. This public IP is the same one your ISP provides to your network (Wi-Fi or mobile).
If that is so, the public world should also be able to reach the container via the router's IP. But by default this is disabled. Why? Privacy is a main concern, and this keeps the container more secure.
For DNATing, the public world connects to the router's public IP, and we write rules for the router so that it replaces its public IP with the private IP of the container. Since multiple services may run inside one network, we use port numbers to redirect each request from the public client to the right container.
To enable DNATing, we have to expose our container's port. To do so, use the below command:
docker run -it -p 8080:80 --name mywebserver httpd
With the above command, we have written a rule for Port Address Translation: any request arriving at the host on a particular port (8080 in our case) is redirected to the service running on port 80 (used for HTTP) inside the container.
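You can verify the mapping from the host itself. This assumes the mywebserver container from the command above is running:

```shell
# Request the host's port 8080; the Docker engine forwards it
# to port 80 inside the mywebserver container
curl http://localhost:8080

# The mapping is also visible in the PORTS column
docker ps
```
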
That's it.
Generally, storage is of two types- Ephemeral and Persistent.
Ephemeral storage is storage that loses its data once the OS is terminated.
Persistent storage doesn't lose its data even after the termination of the OS.
Docker containers are ephemeral in nature: if a container goes down (is terminated), we can't get our data back. To overcome this, Docker has the concept of volumes, which give a container persistent storage.
Adding persistent storage to a container is like buying a hard disk and attaching it to our system. In place of buying one, we create a volume and then attach it to our container.
Create a volume
docker volume create <volume-name>
Add volume while launching
docker run -it --name mycontainer -v myvolume:/var/www/html anubhavsinghgtm/apache-php-webserver
While launching the web server, we want to keep the data related to it, and that is why we mount the web server's directory onto our newly created volume. Your /var/www/html folder will be mounted on myvolume, and a new container will be created.
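To convince yourself the storage really is persistent, you can remove the container and attach the same volume to a new one. A sketch (mycontainer2 is a hypothetical name):

```shell
docker volume inspect myvolume   # shows the volume's Mountpoint on the host

docker rm -f mycontainer         # destroy the original container

# Attach the same volume to a fresh container: the files written to
# /var/www/html by the old container are still there
docker run -dit --name mycontainer2 \
  -v myvolume:/var/www/html anubhavsinghgtm/apache-php-webserver
```
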
We can pass variables from the base OS to the container. This is the concept of environment variables, and the -e option is used for it.
Inside your base OS terminal, create a variable, for e.g.
x=10
echo $x
This will save the value of x and print its value in the terminal.
To pass this x variable to our container, run:
docker run -e x=$x -it --name myos centos:7
Here the x variable of the base OS provides its value to the x variable of the container OS.
Inside your container, check whether the x variable was passed:
echo $x
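You can also pass several variables at once by repeating -e, or keep them in a file and load it with --env-file (both are standard docker run options; the names myos2, myos3, and my.env here are just examples):

```shell
# Pass several variables at once
docker run -dit -e x=10 -e APP_ENV=dev --name myos2 centos:7

# Or keep them in a file, one KEY=value per line, and load it
printf 'x=10\nAPP_ENV=dev\n' > my.env
docker run -dit --env-file my.env --name myos3 centos:7
```
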
We have learned a lot of Docker concepts, and now we are ready to create our own custom Docker image. After creating it, we will push the image to Docker's public repository, Docker Hub.
But before creating the image let's know some basics.
A Docker image is a completely configured container environment that, once configured, you can launch on any system.
For setting up one or two OSes, it's not bad to go and configure the environment manually. But manual configuration is not good for real-world use cases: when there are hundreds of OSes that all need the same setup, it is always advisable to build an image of your configuration and install it on all of them.
To build our own image we have two ways -
Docker commit is an extremely simple way of creating an image. We first pull a base image, run a container, do the required configuration manually, and then use the commit command to build our own custom image. Let's see the process.
Run a container
docker run -it --name myos centos:7
Run the command ifconfig.
This will show an error saying the command was not found. We have to install the net-tools package to get this command.
To install the software -
yum repolist
yum install net-tools
Type 'y' for yes.
Now run the ifconfig command again; it will work.
We have configured a simple container in which the net-tools package is installed. Suppose we want this setup in 100 different OSes: instead of configuring all of them manually, we are going to create an image of this container.
Come out of the container with ctrl+p+q, then run:
docker commit myos myimage:v1
"myos" is the name of our container and "myimage" is the name of our image and "v1" is a tag for the image. Tag is similar to different versions of any file.
That's it; we have created our own image. Now we can simply take this image and install it on all 100 OSes. You can list all the available images using the below command:
docker image ls
However, this is not the recommended way to create your own image: you can't do much advanced configuration this way. The most recommended way to create a Docker image is a Dockerfile.
A Dockerfile is the best way to build custom images. It is the scripted way of creating an image in Docker, and with it we can create almost any kind of image.
Mostly we don't use Docker directly; we use Kubernetes, OpenShift, cloud services, or CI/CD tools like Jenkins. There, creating your own Docker image is the best choice.
There are other container tools like Podman, CRI-O, and Rocket (rkt). Once we create an image with a Dockerfile, we can run the same image with those tools as well.
Before creating our own custom image, we must know exactly what we need. In this setup, I am going to use a CentOS 7 image, and in the container we want to print its IP. We also know we can get the IP using the ifconfig command.
But the challenge is that the base image doesn't ship the software needed to run ifconfig. So first we have to install that package, and then we can get the IP by running ifconfig. Now that we know our exact needs, we can start the process.
The script must be named Dockerfile; it is a predefined name in Docker.
vi Dockerfile
I am using vi as the editor; you can use any editor available on your OS.
Press "i" to start the insert mode to write some text.
FROM centos:7
RUN yum install net-tools -y
Press "esc" and then ":wq" to save the file.
Our script is ready. To build the image use
docker build -t myimage:v1 /wd/
/wd/ is my working directory (the directory containing the Dockerfile). That's it: your image is built successfully, and you can list it:
docker image ls
You can also run a container from that image to test whether ifconfig works.
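As a slightly fuller sketch of the same idea (the WORKDIR and CMD lines are additions of mine, not part of the original setup): CMD sets the default command a container runs on start, so a container from this image prints its network details immediately.

```dockerfile
FROM centos:7
RUN yum install -y net-tools
WORKDIR /root
# Default command: print the container's network interfaces on start
CMD ["ifconfig"]
```

Build it the same way (for example docker build -t myimage:v2 /wd/), and a plain docker run of the image will print the interfaces and exit.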
In the above example, I have shown you a very simple use of a Dockerfile; if you want to learn more, refer to the official docs.
If you are reading till here, kudos to you! You have learned a lot about Docker today.
In this article, we have covered almost all the basic and advanced concepts of this Docker tutorial. Now you can explore more new things about Docker and test them easily.
Bookmark this post and don't forget to share it with those who need it.
You can connect with the author on Twitter.
···