Docker - Concept Intro

Package Your Application into a Standardized Unit for Software Development

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

Comparing Containers and Virtual Machines

Containers and virtual machines have similar resource isolation and allocation benefits – but a different architectural approach allows containers to be more portable and efficient.

Virtual Machines

Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system – all of which can amount to tens of GBs.

VM

     App1          App2          App3
 -----------   -----------   -----------
  Bins/Libs     Bins/Libs     Bins/Libs
 -----------   -----------   -----------
  Guest OS      Guest OS      Guest OS
 -----------------------------------------
                Hypervisor
 -----------------------------------------
                 Host OS
 -----------------------------------------
              Infrastructure

Containers

Containers include the application and all of its dependencies – but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.

Container

     App1          App2          App3
 -----------   -----------   -----------
  Bins/Libs     Bins/Libs     Bins/Libs
 -----------------------------------------
               Docker Engine
 -----------------------------------------
                 Host OS
 -----------------------------------------
              Infrastructure

Terminologies

1. Images and Containers

Docker Engine provides the core Docker technology that enables images and containers. As the last step in your installation, you ran the docker run hello-world command. The command you ran had three parts: docker (the program you invoke), run (a subcommand that creates and runs a container), and hello-world (the name of the image to load into that container).


  • An image is a filesystem and parameters to use at runtime. It doesn’t have state and never changes.
  • A container is a running instance of an image. When you ran the command, Docker Engine:

    • checked to see if you had the hello-world software image
    • downloaded the image from the Docker Hub (more about the hub later)
    • loaded the image into the container and “ran” it

Depending on how it was built, an image might run a simple, single command and then exit. This is what hello-world did.

A Docker image, though, is capable of much more. An image can start software as complex as a database, wait for you to add data, store the data for later use, and then wait for the next person.

Who built the hello-world software image though? In this case, Docker did but anyone can. Docker Engine lets people (or companies) create and share software through Docker images. Using Docker Engine, you don’t have to worry about whether your computer can run the software in a Docker image — a Docker container can always run it.
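The image/container distinction described above is easy to see from the command line. A minimal session (assuming a working Docker installation) might look like this:

```shell
# Run the hello-world image; Docker Engine pulls it from Docker Hub
# if it is not cached locally, creates a container from it, and runs it.
docker run hello-world

# The image is a static, unchanging filesystem, now cached locally:
docker images hello-world

# The container it ran in is a separate object; it has exited,
# but it still exists until removed:
docker ps -a --filter ancestor=hello-world
```

Running docker run hello-world a second time reuses the cached image but creates a brand-new container, which illustrates that one image can back many containers.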

2. Docker Engine

The Docker Engine is a lightweight container runtime and robust tooling that builds and runs your container.

My own explanation: the statement above is saying two things. First, Docker Engine is a lightweight container runtime plus the robust tooling built around it. Second, Docker Engine both builds and runs containers.

Docker allows you to package up application code and dependencies together in isolated containers that share the OS kernel of the host system. The in-host daemon communicates with the Docker client to execute commands to build, ship, and run containers.
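You can see this client/daemon split directly from the command line:

```shell
# The docker CLI (client) talks to the in-host daemon over a socket.
# "docker version" reports both sides of that connection:
docker version
# The output contains a "Client" section (the CLI you invoke) and a
# "Server" section (the daemon that actually builds and runs containers).
```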

3. Docker Machine

You can use Docker Machine to:

  • Install and run Docker on Mac or Windows
  • Provision and manage multiple remote Docker hosts
  • Provision Swarm clusters

Docker Machine is a tool that lets you 1) install Docker Engine on virtual hosts, and 2) manage the hosts with docker-machine commands.

You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like AWS or Digital Ocean.

Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.

Point the Machine CLI at a running, managed host, and you can run docker commands directly on that host. For example, run docker-machine env default to point to a host called default, follow on-screen instructions to complete env setup, and run docker ps, docker run hello-world, and so forth.
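A sketch of that flow, assuming the VirtualBox driver and a managed host named default:

```shell
# Create a managed Docker host named "default" in a local VirtualBox VM.
docker-machine create --driver virtualbox default

# Print the environment variables that point the docker client at it...
docker-machine env default

# ...and apply them to the current shell session.
eval "$(docker-machine env default)"

# docker commands now run against the "default" host.
docker ps
docker run hello-world

# Lifecycle management of the host itself:
docker-machine stop default
docker-machine start default
```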

Docker Machine was the only way to run Docker on Mac or Windows prior to Docker v1.12. Starting with the beta program and Docker v1.12, Docker for Mac and Docker for Windows are available as native apps and are the better choice for this use case on newer desktops and laptops. We encourage you to try out these new apps. The installers for Docker for Mac and Docker for Windows include Docker Machine, along with Docker Compose.

4. Docker for Mac

Docker for Mac is a native Mac application that you install in /Applications. At installation time, it creates symlinks in /usr/local/bin for docker and docker-compose, pointing to the versions of the commands inside the application bundle, in /Applications/Docker.app/Contents/Resources/bin.

Here are some key points to know about Docker for Mac before you get started:

  • Docker for Mac does not use VirtualBox, but rather HyperKit, a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.

  • Installing Docker for Mac does not affect machines you created with Docker Machine. The install offers to copy containers and images from your local default machine (if one exists) to the new Docker for Mac HyperKit VM. If chosen, content from default is copied to the new Docker for Mac HyperKit VM, and your original default machine is kept as is.

  • The Docker for Mac application does not use docker-machine to provision its VM; rather, it creates and manages the VM directly.

  • At installation time, Docker for Mac provisions a HyperKit VM based on Alpine Linux, running Docker Engine. It exposes the Docker API on a socket at /var/run/docker.sock. Since this is the default location where docker will look if no environment variables are set, you can start using docker and docker-compose without setting any environment variables.

This setup is shown in the following diagram.

[Diagram: Docker for Mac architecture]

With Docker for Mac, you get only one VM, and you don’t manage it. It is managed by the Docker for Mac application, which includes autoupdate to update the client and server versions of Docker.

If you need several VMs and want to manage the version of the Docker client or server you are using, you can continue to use docker-machine, on the same machine, as described in Docker Toolbox (Legacy desktop solution) and Docker for Mac coexistence.

5. Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications.

With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.

Compose is great for development, testing, and staging environments, as well as CI workflows.

Using Compose is basically a three-step process.

  • Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  • Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  • Lastly, run docker-compose up, and Compose will start and run your entire app.

A docker-compose.yml looks like this:

version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}

Compose has commands for managing the whole lifecycle of your application:

  • Start, stop and rebuild services
  • View the status of running services
  • Stream the log output of running services
  • Run a one-off command on a service
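For the Compose file above, those lifecycle commands look like this (web is the service name from the example):

```shell
# Build (if needed), create, and start all services in the background.
docker-compose up -d

# View the status of running services.
docker-compose ps

# Stream the log output of a running service.
docker-compose logs -f web

# Run a one-off command on a service.
docker-compose run web env

# Stop services; "down" additionally removes the containers and network.
docker-compose stop
docker-compose down
```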

6. Docker Hub

Docker Hub is a cloud-based registry service that lets you link to code repositories, build and test your images, store manually pushed images, and link to Docker Cloud so you can deploy images to your hosts.

It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.

Docker Hub provides the following major features:

  • Image Repositories: Find, manage, and push and pull images from community, official, and private image libraries.
  • Automated Builds: Automatically create new images when you make changes to a source code repository.
  • Webhooks: A feature of Automated Builds, Webhooks let you trigger actions after a successful push to a repository.
  • Organizations: Create work groups to manage access to image repositories.
  • GitHub and Bitbucket Integration: Add the Hub and your Docker Images to your current workflows.
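Pulling from and pushing to Docker Hub happens through the same docker CLI. A typical exchange (my-username/my-app is a placeholder repository name):

```shell
# Pull an official image from Docker Hub.
docker pull redis

# Log in so you can push to your own repositories.
docker login

# Tag a locally built image under your namespace, then push it.
docker tag my-app my-username/my-app:latest
docker push my-username/my-app:latest
```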

Typical Docker Platform Workflow

  1. Get your code and its dependencies into Docker containers:

    • Write a Dockerfile that specifies the execution environment and pulls in your code.

    • If your app depends on external applications (such as Redis, or MySQL), simply find them on a registry such as Docker Hub, and refer to them in a Docker Compose file, along with a reference to your application, so they’ll run simultaneously.

      • Software providers also distribute paid software via the Docker Store.
    • Build, then run your containers on a virtual host via Docker Machine as you develop.

  2. Configure networking and storage for your solution, if needed.

  3. Upload builds to a registry (ours, yours, or your cloud provider’s), to collaborate with your team.

  4. If you’ll need to scale your solution across multiple hosts (VMs or physical machines), plan for how you’ll set up your Swarm cluster and scale it to meet demand.
    Note: Use Universal Control Plane and you can manage your Swarm cluster using a friendly UI!

  5. Finally, deploy to your preferred cloud provider (or, for redundancy, multiple cloud providers) with Docker Cloud. Or, use Docker Datacenter, and deploy to your own on-premise hardware.
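Step 1 of the workflow starts with a Dockerfile. A minimal sketch for a small Python web app (the python:2.7 base image, requirements.txt, port 5000, and app.py entry point are illustrative assumptions, chosen to match the Compose example earlier):

```dockerfile
# Illustrative Dockerfile; all file names here are placeholders.
FROM python:2.7
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

With this file in place, docker build -t my-app . produces the image, and the Compose file shown earlier can build and run it alongside its Redis dependency.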