Introduction to Containerized Applications and Deploying a Simple Application on Docker

Jalitha Dewapura · Published in Geek Culture · Jun 7, 2021

If you are a software engineer, you should know where your application is deployed. A few years back, web applications were deployed on dedicated physical servers. The evolution of application deployment can be viewed as three generations. Let’s explore them.

Generation 1 — Dedicated Physical Servers

In the early days, applications were deployed on dedicated physical servers. You may think applications are still deployed on physical servers today, but back then it was a different story. For example, if your application needs separate servers such as an application server, a database server, a mail server, and a web server, you must have four different hardware boxes (physical servers) to deploy it.

These servers come with many disadvantages:

  • Space for the servers (a server room)
  • Maintenance cost (air-conditioned room, security, servicing)
  • A separate network
  • A separate operating system for each server
  • Wasted resources (servers rarely use 100% of their processing power or memory)

Generation 2 — Hypervisor

In this generation, virtual machines solved a few disadvantages of the previous one. A virtual machine is an abstraction of a physical machine (hardware), so several virtual machines can run on a single physical machine. Let’s see how virtual machines are created. First, a hypervisor is installed on top of a high-performance server (physical machine). A hypervisor is software used to create and manage virtual machines. There are many hypervisors out there, such as VirtualBox, VMware, and Hyper-V.

After installing the hypervisor, separate virtual machines are created on top of it, each configured according to its server’s requirements. For example, the application server may use 20% of the processing power, the database server another 20%, and the web server 10%. Therefore, we can allocate resources depending on each server’s needs.

Hypervisor Architecture

Next, we need to install an operating system on each virtual machine, and then the application is installed on that operating system. This is called a virtualized environment. Although the resource waste has been resolved, a few disadvantages remain.

  • The installation cost of a new virtual machine (OS license cost, installation time).
  • The maintenance of each OS (patches and updates must be well managed).
  • The boot-up time of a virtual machine is high.
  • Virtual machines are resource-intensive (each VM takes a slice of the actual hardware resources like CPU and memory).

Generation 3 — Containerization

To overcome these disadvantages, containerization was introduced to the world. Containerization means each application runs in an isolated environment called a container, which allows multiple applications to run in isolation on the same machine. Containers are lightweight: they don’t need a full operating system. All containers on a single machine share the host’s operating system, so we only need to license, patch, and update a single operating system. Also, because the operating system is already running on the host, a container can start quickly. Furthermore, containers don’t reserve any hardware resources, so we don’t need to give them a specific number of CPU cores or a fixed slice of memory.

Virtual machines vs Containers

Containers are widely used in microservice architectures because they provide the ability to build and deploy each service independently. Microservices should scale with demand, and unlike virtual machines, containers can be easily created and configured on the same server as demand changes.

Docker

Being first to market and still one of the leading container platforms, Docker is what most developers know rather than containers in general. Docker is a platform for building, running, and shipping applications in a consistent manner: if an application works on your development machine, it can run and function the same way on other machines. You have probably experienced an application working perfectly on your development machine but failing somewhere else. This can happen for three reasons.

  • One or more files are not included as part of your deployment.
  • The target machine is running a different version of the software.
  • Configuration settings or environment variables are different across these machines.

Docker can be used to overcome this problem. It simply packages and executes our application with everything it needs, and you can take this package and run it on any machine that runs Docker.

Docker provides an isolated environment that allows multiple applications to use different versions of the same software side by side. As we work on different projects, our development machine gets cluttered with libraries and tools used by different applications, and we can’t tell which tools are safe to remove because deleting one might break another application. With Docker, we don’t have to worry about this: since each application runs inside an isolated environment, we can safely remove an application together with all of its dependencies to clean up our machine.

Docker Architecture

Docker uses a client-server architecture: a client component talks to a server component using a RESTful API. The server, also called the Docker Engine, sits in the background and takes care of building and running Docker containers. Technically, a container is just a process. As I explained above, containers don’t contain a full-blown operating system. Instead, all containers on a host share the operating system of the host, or more precisely, the host’s kernel.

Kernel — It is the core of the operating system, the part that manages all applications as well as hardware resources like memory and CPU. Every operating system has its own kernel, and these kernels have different APIs; that is why we cannot run Windows applications on Linux. Under the hood, applications talk to the kernel of the underlying operating system.


Docker daemon — It handles Docker objects such as images, containers, networks, and volumes by listening for Docker API requests.

Docker client — Most Docker users interact with Docker through the Docker client. When you use commands like docker run, the client sends them to dockerd, which executes them. The docker command uses the Docker API, and the Docker client can communicate with more than one daemon.

Docker registries — A registry is the place that stores Docker images. The docker pull and docker run commands pull the required images from your configured registry, and the docker push command pushes an image to it.
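For example, here is a minimal sketch of working with a registry (the myuser namespace is a placeholder, not from the original article):

docker pull node:alpine                        # download an official image from Docker Hub
docker tag hello-docker myuser/hello-docker    # rename a local image under your namespace
docker push myuser/hello-docker                # upload the image to your configured registry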

Docker objects — These include images, containers, networks, volumes, plugins, and other objects.

Docker image — It is a read-only template that includes a set of instructions to create a Docker container. Usually, an image is based on another image, with some additional customization.

Docker container — It is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI.
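For instance, a container’s lifecycle can be driven from the CLI like this (the name hello is illustrative, and hello-docker is the image we build later in this article):

docker create --name hello hello-docker    # create a container from an image
docker start hello                         # start the container
docker stop hello                          # stop the running container
docker rm hello                            # delete the container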

Create and Deploy a Simple JavaScript Project on Docker

I know this article is getting a little boring with all this theory, so let’s move on to the practical part. We are going to create a JavaScript project and execute it with Docker.

First, I create a JavaScript file, app.js, with a single line of code. It is just a print statement on the console.
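A minimal sketch of that file (the exact message is my assumption; the original only shows a single print statement):

// app.js: print a single message to the console
console.log("Hello Docker!");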

This is the application that will be dockerized at the end of this process, so we want to build, run, and ship it using Docker. Typically, if we want to ship this application and run it on a different computer, we need to install Node on that computer. Then the application can be run from the terminal with the command below.

node app.js

So, here are the instructions to ship and run this application without Docker.

  • Start with an operating system
  • Install node
  • Copy app files
  • Run node app.js

If we use Docker, we can write these instructions inside a Dockerfile and let Docker package our application.

A Dockerfile starts with a base image. The base image contains a bunch of files, and we take those files and add our own customization on top of them. I used the Node image, which is built on top of a Linux image. You can find all the official images on Docker Hub. Our Dockerfile uses four instructions, which are explained below, with a full sketch after the definitions.

FROM — Define the base image

COPY — Copy files from the current directory to /app directory in the image

WORKDIR — Set the current working directory in the image

CMD — Define the command that should be executed when a container starts (here, I execute the node command)
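Putting these together, here is a minimal sketch of the Dockerfile (the node:alpine base tag is my assumption; the article only says a Node image was used):

# Use the official Node image (built on Linux) as the base
FROM node:alpine
# Copy files from the current directory into /app in the image
COPY . /app
# Set the current working directory in the image
WORKDIR /app
# Run the application when a container starts
CMD node app.js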

Then we need to execute the command below. It builds the Docker image for our application.

docker build -t hello-docker .

The -t flag tags the image, and hello-docker is the name that can be used to access it. Finally, the command specifies the directory containing the Dockerfile (. means the current directory).

Then, I can see all the images on my computer with this command.

docker images

Now, I can run this image on any computer that is running Docker. Let’s run it with this command.

docker run hello-docker
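Assuming the app.js sketch above, the terminal should print:

Hello Docker!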

So, you can see the message on the terminal. Success!

If I publish this image on Docker Hub, anyone can use it. So, we can take any application and dockerize it by adding a Dockerfile to it. The Dockerfile contains the instructions for packaging the application into an image, and once we have an image, we can run it anywhere with Docker.

So, you have come to the end of this article. I hope you learned something new. If I missed any point, please let me know in the comment section. Happy learning!
