Docker Installation

We can install Docker on any operating system, whether Mac, Windows, Linux, or any cloud platform, but Docker Engine runs natively only on Linux distributions. Here, we provide a step-by-step process to install Docker Engine on Ubuntu 16.04 Xenial (LTS).

Introduction to Docker

Docker is an open platform for developing, shipping, and running applications. Docker allows us to separate our applications from our infrastructure so we can deliver software quickly. With Docker, we can manage our infrastructure in the same way we manage our applications. By taking advantage of Docker's methodologies for shipping, testing, and deploying code quickly, we can significantly reduce the delay between writing code and running it in production.

Background of Docker

Containers are isolated from each other and bundle their own software, libraries, and configuration files. They can communicate with each other through well-defined channels. Containers use fewer resources than virtual machines because all containers share the services of a single operating system kernel.

Operation

Docker can package an application and its dependencies in a virtual container that can run on macOS, Windows, or Linux computers. This lets the application run in a variety of locations: on-premises, in a public cloud, or in a private cloud. When running on Linux, Docker uses the resource isolation features of the Linux kernel (such as kernel namespaces and cgroups) and a union-capable file system (such as OverlayFS) to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. On macOS, Docker uses a Linux virtual machine to run the containers.

A single server or virtual machine can run several containers simultaneously, as Docker containers are very lightweight. A 2018 analysis found that a typical Docker use case involves running eight containers per host. Docker can even be installed on single-board computers such as the Raspberry Pi.

  • The Linux kernel's support for namespaces mostly isolates an application's view of the operating environment, including process trees, network, user IDs, and mounted file systems, while the kernel's cgroups provide resource limiting for CPU and memory.
  • Since version 0.9, Docker has included its own component (known as "libcontainer") to use virtualization facilities provided directly by the Linux kernel, in addition to the abstracted virtualization interfaces offered via libvirt, LXC, and systemd-nspawn.
  • Docker implements a high-level API to provide lightweight containers that run processes in isolation.
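
These kernel features surface directly in the command line. A minimal sketch, assuming a working Docker installation and the public alpine image:

```shell
# Limit a container to 256 MB of RAM and half a CPU core;
# Docker enforces these limits through kernel cgroups.
docker run --rm --memory=256m --cpus=0.5 alpine sleep 1

# The PID namespace hides the host's process tree: inside the
# container, ps shows only the container's own processes.
docker run --rm alpine ps aux
```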

What is the Docker Platform?

Docker provides the ability to package and run an application in a loosely isolated environment known as a container. This isolation and security allow us to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so we do not need to rely on what is currently installed on the host. We can easily share containers while we work and be sure that everyone we share with gets the same container that works in the same way.

Docker gives a platform and tooling facility to maintain the lifecycle of our containers:

  • Develop our application and its supporting components using containers.
  • The container becomes the unit for distributing and testing our application.
  • When we are ready, deploy our application into our production environment as a container or an orchestrated service. This works the same whether our production environment is a local data center, a cloud provider, or a combination of the two.
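
This lifecycle can be sketched as a minimal build-test-deploy sequence. The image name myapp and the base image below are illustrative assumptions, and the final step requires swarm mode to be enabled:

```shell
# Write a minimal Dockerfile for an illustrative application.
cat > Dockerfile <<'EOF'
FROM alpine:3.18
CMD ["echo", "hello from a container"]
EOF

# Develop: build an image from the Dockerfile.
docker build -t myapp:1.0 .

# Test: run the image as a disposable container.
docker run --rm myapp:1.0

# Deploy: run the same image as an orchestrated service (swarm mode).
docker service create --name myapp --replicas 2 myapp:1.0
```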

Licensing model of Docker

Dockerfile files are licensed under an open-source license. It is important to understand that the scope of this license statement is only the Dockerfile, not the container image. Docker Engine is licensed under version 2.0 of the Apache License. Docker Desktop distributes some components that are licensed under the GNU General Public License.

Components

The Docker software-as-a-service offering consists of three components, which are listed and discussed below:

  • Software: The Docker daemon, called dockerd, is a persistent background process that manages Docker containers and handles container objects. The daemon accepts requests sent through the Docker Engine API. Another Docker client program, called docker, provides a CLI (command-line interface) that lets users interact with Docker daemons.
  • Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to pull (download) images for use and to push (upload) images they have created. Registries can be public or private; Docker Hub, the primary public registry, is the default registry where Docker looks for images. Docker registries also allow notifications to be created based on events.
  • Objects: Docker objects are several entities used for assembling an app in Docker. The primary Docker object classes are services, containers, and images.
    • A Docker container is an encapsulated, standardized environment that executes apps. A container is maintained using the Docker CLI or API.
    • A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm: a set of cooperating daemons that communicate through the Docker API.
    • A Docker Image is a template (read-only) used for creating containers. These images are used for storing and shipping apps.
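
A short sketch of how a client works with images and registries; registry.example.com is a placeholder for a private registry, not a real endpoint:

```shell
# Pull (download) an image from Docker Hub, the default registry.
docker pull alpine:latest

# Tag the image under a private registry's name so it can be pushed there.
docker tag alpine:latest registry.example.com/team/alpine:latest

# Push (upload) the tagged image to the private registry.
docker push registry.example.com/team/alpine:latest
```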

Tools

  • Docker Swarm provides native clustering for Docker containers, turning a group of Docker engines into a single virtual Docker engine. In Docker version 1.12 and higher, Swarm mode is integrated with Docker Engine.
  • Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application's services and performs the creation and start-up of all the containers with a single command.
    The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, running containers, and more. The first public beta of Docker Compose was published on 21 December 2013, and the first production-ready release became available on 16 October 2014.
  • Docker Volume offers independent data persistence, permitting data to be available even after a container is re-created or deleted.
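
A minimal Compose sketch, assuming Docker Compose is installed and using the public nginx and redis images; the service names web and cache are illustrative:

```shell
# A docker-compose.yml defining two services for the application.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF

# Create and start every service with one command.
docker-compose up -d

# Scale a service, then tear the whole application down.
docker-compose up -d --scale cache=2
docker-compose down
```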

History of Docker

Docker Inc. was founded by Sebastian Pahl, Solomon Hykes, and Kamel Founadi during the Y Combinator Summer 2010 startup incubator group and launched in 2011. The startup was also one of the twelve startups in Founder's Den's first cohort. Hykes started the Docker project in France as an internal project within dotCloud, a platform-as-a-service company. Docker debuted to the public at PyCon in Santa Clara in 2013, and it was released as open source in March 2013.

At the time, it used LXC as its default execution environment. One year later, with the release of version 0.9, Docker replaced LXC with its own component, libcontainer, written in the Go programming language. In 2017, Docker created the Moby project for open research and development.

Adoption

  • 19 September 2013: Docker and Red Hat announced a collaboration around OpenShift, RHEL (Red Hat Enterprise Linux), and Fedora.
  • 15 October 2014: Microsoft announced the integration of the Docker engine into Windows Server and native support for the Docker client role in Windows.
  • 10 November 2014: Docker announced a partnership with Stratoscale.
  • November 2014: Docker container services were announced for Amazon EC2 (Elastic Compute Cloud).
  • 4 December 2014: IBM announced a strategic partnership with Docker that enables Docker to integrate more closely with the IBM Cloud.
  • 22 June 2015: Docker and several other companies announced that they were working on a vendor- and operating-system-independent standard for software containers.
  • December 2015: Oracle Cloud added Docker container support after acquiring StackEngine, a Docker container startup.
  • April 2016: Windocks, an independent software vendor, released an open-source port of Docker to Windows, supporting Windows Server 2012 R2 and Server 2016, with all editions of SQL Server 2008 onward.
  • May 2016: An analysis showed the following organizations as primary contributors to Docker: the Docker team, Cisco, Google, Huawei, IBM, Microsoft, and Red Hat.
  • 8 June 2016: Microsoft announced that Docker could be used natively on Windows 10.
  • January 2017: An analysis of LinkedIn profiles showed that Docker's presence grew by 160% in 2016.
  • 6 May 2019: Microsoft announced the second version of WSL (Windows Subsystem for Linux). Docker Inc. announced that it had been working on a version of Docker for Windows that runs on WSL 2, which means Docker can run on Windows 10 Home in particular.
  • August 2020: Microsoft announced a backport of WSL 2 to Windows 10 versions 1903 and 1909, and Docker developers announced Docker availability for these platforms.
  • August 2021: Docker Desktop for Windows and macOS stopped being free for enterprise users. Docker ended free use of Docker Desktop for larger business customers and replaced its free plan with a Personal plan. Docker on Linux distributions remained unaffected.

Working of Containers

A container is made possible by virtualization capabilities and process isolation built into the Linux kernel. These capabilities, including namespaces to restrict a process's visibility into or access to other areas and resources of the system, and cgroups (control groups) to allocate resources among processes, enable multiple application components to share the resources of a single instance of the host operating system, in much the same way that a hypervisor enables multiple VMs to share the CPU, memory, and other resources of a single hardware server.
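
A quick way to see the shared kernel in practice, assuming a working Docker installation: the host and a container report the same kernel release, because containers do not boot a kernel of their own.

```shell
# Kernel release as seen on the host.
uname -r

# Kernel release as seen inside a container: the same value,
# since the container shares the host's Linux kernel.
docker run --rm alpine uname -r
```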

Container technology provides all the benefits and functionality of virtual machines, such as isolation, cost-effective scalability, and application disposability, plus additional important benefits:

  • Lightweight: Unlike VMs, containers do not carry the payload of an entire operating system instance and hypervisor. They include only the operating system processes and dependencies necessary to run the code. Container sizes are measured in megabytes, they make better use of hardware capacity, and they have fast startup times.
  • Higher resource efficiency: With containers, developers can run several times as many copies of an application on the same hardware as they can with virtual machines. This can reduce cloud spending.
  • Improved developer productivity: Containerized applications can be written once and run anywhere. Compared to VMs, containers are faster and easier to deploy, provision, and restart. This makes them ideal for use in CI/CD (continuous integration and continuous delivery) pipelines and a good fit for development teams adopting Agile and DevOps practices.

Companies report other advantages of using containers as well, such as faster responses to market changes and improved application quality.

Usage of Docker

  • Fast application delivery

Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers that provide our applications and services. Containers are ideal for continuous integration and continuous delivery (CI/CD) workflows.

  • Responsive scaling and deployment

Docker's container-based platform allows for highly portable workloads. Containers can run on a developer's local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of these environments.

The lightweight nature and portability of Docker also make it easier to dynamically maintain workloads and scale up and tear down services and applications as business requirements dictate.

  • Running multiple workloads on the same hardware

Docker is fast and lightweight. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so we can use more of our server capacity to achieve our business goals. Docker is great for high-density environments and for small and medium deployments where we need to do more with fewer resources.

Prerequisites:

Docker has two important installation requirements:

  • It only works on a 64-bit Linux installation.
  • It requires Linux kernel version 3.10 or higher.

To check your current kernel version, open a terminal and run the uname -r command to display it:

Command:


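
```shell
# Print the running kernel release; Docker requires 3.10 or higher.
uname -r
```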

Update apt sources

Follow the instructions below to update the apt sources.

1. Open a terminal window.

2. Log in as the root user using the sudo command.

3. Update package information and install CA certificates.

Command:

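
A sketch of this step, following the legacy Docker Engine instructions for Ubuntu 16.04:

```shell
# Refresh the package index, then install HTTPS transport support
# and CA certificates so APT can talk to the Docker repository.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates
```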

4. Add the new GPG key. The following command downloads and registers the key.

Command:

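
A sketch of the key-adding step; the keyserver address and key ID below are the ones published in the legacy Docker docs for this release and are assumptions here:

```shell
# Download and register Docker's release-signing GPG key.
sudo apt-key adv \
  --keyserver hkp://p80.pool.sks-keyservers.net:80 \
  --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
```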

5. Determine the repository entry that corresponds to your operating system version; this entry will be substituted into the file in the next step.


6. Open the file /etc/apt/sources.list.d/docker.list and paste the following line into it.


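
For Ubuntu 16.04 (Xenial), the entry can also be written to the file in one command. The apt.dockerproject.org repository is the historical one used by this guide and has since been retired in favor of download.docker.com:

```shell
# Point APT at the (legacy) Docker repository for Xenial.
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" \
  | sudo tee /etc/apt/sources.list.d/docker.list
```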

7. Now update your apt package index again.


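
```shell
# Re-read all package lists, including the newly added Docker repository.
sudo apt-get update
```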

8. Verify that APT is pulling from the right repository.

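
One way to verify, assuming the legacy docker-engine package name:

```shell
# Show the candidate version and which repository it comes from;
# the Docker repository should appear in the version table.
apt-cache policy docker-engine
```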

9. Install the recommended packages.


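
On Ubuntu 16.04 the recommended packages were the kernel extras, which provide the aufs storage driver used by Docker releases of that era:

```shell
# Install the kernel extras matching the running kernel.
sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
```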

Install the latest Docker version.

1. Update your apt package index again (sudo apt-get update).


2. Install the docker-engine package.

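
docker-engine was the package name used by this legacy repository; modern installations use docker-ce from download.docker.com instead:

```shell
# Install Docker Engine from the Docker repository.
sudo apt-get install -y docker-engine
```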

3. Start the Docker daemon.

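
On Ubuntu the daemon can be started through the service manager:

```shell
# Start the Docker daemon as the "docker" system service.
sudo service docker start
```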

4. Verify that Docker is installed correctly by running the hello-world image.

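
The verification command:

```shell
# Pull and run the hello-world test image; if the installation
# works, the container prints a confirmation message and exits.
sudo docker run hello-world
```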

The above command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.





