Docker containers

Updated: Tue, 07 Apr 2015 by Rad

[Image: Docker container engine. Credit: dotCloud, Inc., Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0), via Wikimedia Commons]

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. It consists of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. Docker lets apps be assembled quickly from components and eliminates the friction between development, QA, and production environments, so IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

Packaging an application and its dependencies

Docker is a tool that packages an application and its dependencies into a virtual container that can run on any Linux server. This gives you flexibility and portability in where the application can run: on premises, in a public or private cloud, on bare metal, and so on.
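As an illustration, a minimal Dockerfile for a hypothetical Python web app might look like the sketch below (the app name, files, and port are assumptions for this example, not something defined in this article):

    # Minimal Dockerfile: package the app and its dependencies into one image
    FROM python:2.7
    # Install dependencies first so this layer can be cached between builds
    COPY requirements.txt /app/
    RUN pip install -r /app/requirements.txt
    # Add the application code itself
    COPY . /app
    WORKDIR /app
    # Port the app listens on, and the command run when a container starts
    EXPOSE 5000
    CMD ["python", "app.py"]

Building it once with "docker build -t myapp ." produces an image that runs unchanged on any Linux host with Docker installed.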

Typical Docker use cases:

  • Isolation of dependencies and rapid container spin-up let companies shorten development cycles and increase testing speed by more than four times (see the example after this list)
  • Continuous delivery
  • Scaling applications in a distributed manner
  • Managing the application lifecycle, with significant gains in scale, speed, and consistency
  • Empowering developers to own more of the full stack while lifting infrastructure burden from the ops team
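As an example of the first point, running a test suite in a clean, disposable environment is a one-line operation. This is only a sketch: the "myapp" image and the test command build on the hypothetical Dockerfile above.

    # Run the tests in a fresh container; --rm deletes it when they finish,
    # so every run starts from the same known-good image
    $ docker run --rm myapp python -m unittest discover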

Security advantages of containers

Application containers offer operational benefits that will continue to drive the development and adoption of the platform. While the use of such technologies introduces risks, it can also provide security benefits:

  • Containers make it easier to segregate applications that would traditionally run directly on the same host. For instance, an application running in one container only has access to the ports and files explicitly exposed by the other container (see the sketch after this list).
  • Containers encourage treating application environments as transient, rather than as static systems that exist for years and accumulate risk-inducing artifacts.
  • Containers make it easier to control what data and software components are installed through the use of repeatable, scripted instructions in setup files.
  • Containers offer the potential of more frequent security patching by making it easier to update the environment as part of an application update. They also minimize the effort of validating compatibility between the app and patches.
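A quick sketch of the first benefit in that list, assuming the stock nginx image (the port numbers are illustrative): only what you explicitly publish with -p is reachable from outside the container.

    # Publish only container port 80; everything else stays private
    $ docker run -d -p 8080:80 --name web nginx
    # The published port is reachable from the host...
    $ curl -s http://localhost:8080/ > /dev/null && echo reachable
    reachable
    # ...but the container's files and unpublished ports are not exposed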

The security risks that come to mind when assessing how and whether to use containers include the following:

  • The flexibility of containers makes it easy to run multiple instances of applications (container sprawl) and indirectly leads to Docker images that exist at varying security patch levels.
  • The isolation provided by Docker is not as robust as the segregation established by hypervisors for virtual machines.
  • The use and management of application containers is not yet well understood by the broader ops, infosec, dev, and audit communities.

Differences between Linux VMs and Docker

Docker builds on Linux container technology (originally via LXC; since version 0.9 it uses its own libcontainer library by default), so containers run on the same operating system as their host and can share a lot of the host operating system's resources. Docker also uses AUFS for the file system and manages the networking for you.

AUFS is a layered file system: it merges a read-only part and a writable part into a single view. The common parts of the operating system can be kept read-only and shared among all of your containers, while each container gets its own mount for writing.
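You can watch this copy-on-write behaviour with docker diff, which lists only the changes a container has made in its private writable layer. The transcript below is illustrative, assuming the stock ubuntu:14.04 image:

    # Write one file inside a container; the image's layers stay read-only
    $ docker run --name demo ubuntu:14.04 bash -c 'echo hello > /tmp/greeting'
    # List only this container's writable-layer changes (A = added, C = changed)
    $ docker diff demo
    C /tmp
    A /tmp/greeting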

A fully virtualized system gets its own set of resources allocated to it and does minimal sharing. You get more isolation, but it is much heavier and requires more resources.

With LXC you get less isolation, but containers are lightweight and require fewer resources. You can easily run thousands of them on a single host without strain; doing the same with a full hypervisor such as Xen is impractical unless the host is very large.

A fully virtualized system usually takes minutes to start; LXC containers take seconds, and sometimes less than a second.
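A rough way to see this for yourself, assuming the ubuntu:14.04 image is already pulled (the timing shown is illustrative and varies by machine):

    # Time a full container lifecycle: create, start, run a command, remove
    $ time docker run --rm ubuntu:14.04 true
    real    0m0.4s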

If you just want to isolate processes from each other and run many of them on a reasonably sized host, then LXC might be the way to go.

Signing of containers

Until recently, many people shared Docker images without signatures ensuring that the code you run in a container is actually what was originally supplied, or that it really comes from the source it purports to be from. This was a major concern that needed to be addressed.

This has been addressed with version 1.3 of Docker. In this release, the Docker Engine will now automatically verify the provenance and integrity of all Official Repos using digital signatures. Official Repos are Docker images curated and optimized by the Docker community to be the best building blocks for assembling distributed applications. A valid signature provides an added level of trust by indicating that the Official Repo image has not been tampered with.
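When pulling an Official Repo with Docker 1.3 or later, the engine reports the result of this check during the pull; the transcript below is illustrative:

    $ docker pull ubuntu:14.04
    ubuntu:14.04: The image you are pulling has been verified
    ...
    Status: Downloaded newer image for ubuntu:14.04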
