Published on Oct. 16, 2021
With Docker, you can launch an isolated, OS-like environment in about a second. That is not an exaggeration: because a container reuses the host's kernel instead of booting its own operating system, you can build and start a complete application environment in a matter of seconds.
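For instance, here is a minimal sketch (assuming Docker is installed and the ubuntu image has already been pulled; the image tag is just an example):

```bash
# Start an interactive Ubuntu environment; it appears almost instantly because
# the container reuses the host's kernel instead of booting a guest OS.
docker run -it --rm ubuntu:22.04 bash
```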
DevOps is one of the revolutionary practices in software development, helping IT organizations boost their productivity, and many companies now use DevOps practices and benefit from them considerably. DevOps is supported by a wide range of tools, and one of the most popular is Docker. In this post, we will look at what Docker is and how it works so fast.
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications.
By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
Go through this link for a step-by-step Docker tutorial.
Solomon Hykes open-sourced Docker in 2013. In the beginning it worked only on a few Linux distributions such as Red Hat and Fedora, but later on it was widely adopted.
Docker is built on container technology, which is what makes it so fast. Let's look at what container technology is and how Docker uses it.
Container technology is a concept inspired by real shipping containers. To transport something, we pack our goods into containers and ship them from place to place. A container keeps things organized, and the goods in one container stay separated from the goods in another.
Software containers evolved with the same idea in mind. Inside one container, all the dependencies and the application code are installed, and we can move that container from one system to another without any extra configuration. It is also more secure than earlier approaches because it depends very little on the outside environment.
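As a rough sketch (the application, file names, and base image here are hypothetical, not from the article), a Dockerfile declares everything the container needs, so the resulting image carries its own dependencies:

```dockerfile
# Hypothetical Dockerfile for a small Python app.
FROM python:3.11-slim                   # base image with the interpreter
WORKDIR /app
COPY requirements.txt .                 # dependency list
RUN pip install --no-cache-dir -r requirements.txt
COPY . .                                # application code
CMD ["python", "app.py"]                # command run when the container starts
```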
Once a container image is ready, you can run it on your local system, on the AWS cloud, or on any other platform in exactly the same way.
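For example, assuming the hypothetical Dockerfile above, the same two commands work unchanged on a laptop, an EC2 instance, or any other machine that has Docker installed (the image name and port are illustrative):

```bash
docker build -t my-app .                 # package the code and its dependencies into an image
docker run --rm -p 8080:8080 my-app      # run it; the command is identical on every Docker host
```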
To understand the container approach, let's start at the very beginning. Originally, we had one operating system installed directly on the bare metal (the RAM and CPU). In a production environment that needed thousands of operating systems, we had to buy thousands of sets of RAM and CPUs and install the same environment on top of each of them.
This is a static setup. By static I mean that once we have a requirement, we buy the resources and configure the whole system, but if the demand drops later those resources sit wasted. And if the demand increases, we again have to buy resources and configure the whole system, which is a lengthy process.
There may also be a need to run multiple setups side by side, for example one application that works only with certain versions of its dependencies and another that needs different versions, and that simply isn't possible on a single bare-metal system.
Later, we developed virtual machines. In this setup, instead of running our OS directly on the bare metal, we install a hypervisor layer. The hypervisor lets us create multiple isolated environments on a single machine without them affecting one another.
In this setup, each OS shares the bare-metal resources. Whenever we need to scale out and add one more machine, we simply ask the hypervisor for the required resources.
But this still doesn't fully solve the problem for modern systems. Suppose you are running an application that gets 500-1,000 hits per day and you have sized your system for that load; if you suddenly get 50,000 hits, your system might crash or your server might go down for a while. You then need more VMs, but when traffic drops back to a normal pace the next day, those extra resources are wasted again.
To minimize configuration time and reduce server load, container technology was introduced. In virtualization we run multiple operating systems on a single machine, while in containerization we deploy multiple isolated applications on the same OS. We don't have to install an OS again and again.
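As a small illustration (the image tags and commands are examples, not from the article), two applications that need different versions of the same runtime can sit side by side on one host OS, each with its own dependencies:

```bash
# Two isolated Python environments on the same machine, with no extra OS installed.
docker run -d --name legacy-app python:3.8-slim  python -m http.server 8000
docker run -d --name modern-app python:3.12-slim python -m http.server 8001
```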
This greatly reduces the load on the server and also provides a more secure system. Docker is the tool that uses container technology to build, deploy, and run applications. Now, let's understand how Docker got so fast.
Back in 2013, when Solomon Hykes introduced Docker at a seminar, he explained why he created the tool. A hacker had asked him how he deployed and managed his applications, and Solomon told him that Linux containers were the technology used behind the scenes. So the hacker went back and tried to understand Linux containers.
The very next day, the hacker came back and asked whether there was a simpler way to work with them, because they were so complex to understand. Solomon had no answer at the time. He and his colleagues thought about this problem and came up with a solution in the form of Docker. The main reason Docker is so fast is the one we have already seen: a container shares the host's kernel, so there is no guest operating system to boot and no hypervisor layer to cross, and starting a container is much closer to starting a process than to booting a machine.
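You can get a feel for this yourself. Once the image is already cached locally, a command like the following usually finishes in well under a second (exact timing depends on your machine):

```bash
# Create a container, run one command inside it, then remove it.
time docker run --rm alpine echo "hello from a container"
```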
We have walked through the evolution from bare metal to virtual machines to containers, and seen what Docker actually is and how it came to market. To summarise, the benefits of this tool are fast startup, light resource usage, portability across machines and clouds, and better isolation between applications.
In this article, we have discussed what Docker is, what container technology is, and how Docker delivers these benefits. To learn more about Docker, stay tuned with us.
You can connect with the author on Twitter.