Containers are having a moment. They are revolutionizing the way we do application development, but, as with most new technologies, their adoption in the enterprise is (rightfully) hindered by genuine security concerns. Ultimately, containers can bring huge security benefits not found in traditional infrastructure. But with new technologies come new risks.

First of all, what is a container? In one sense, containers are an OS-level virtualization method for running multiple isolated Linux workloads on a host using a single Linux kernel. Effectively, everything outside the kernel is virtualized: applications, runtimes, and files in one container can't see other containers on the same machine, but they all share an underlying operating system, which makes containers far more lightweight than virtual machines.
Containers have not only allowed companies to pack more workloads onto a single machine but also made it much easier to build portable software that can be continuously redeployed. Container images are easy to ship around, portable from machine to machine, and fast to start. They've become a key technology for microservices and auto-scaling applications, and are now a staple in many continuous integration/continuous delivery (CI/CD) pipelines.
Microservices break the development of an application into a collection of small services: each service implements a business capability, runs in its own process, and communicates over HTTP APIs or messaging. This lets different teams work on separate services in parallel, which not only speeds up development but also makes it easier to isolate and fix errors within an individual service.
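To make that pattern concrete, here is a minimal sketch of two services communicating over an HTTP API, using only the Python standard library. The service names, route, and stock data are illustrative assumptions, not from the article; in a real deployment each service would run in its own container and process, while here they run as threads so the example stays self-contained:

```python
# Hypothetical sketch: an "inventory" service and an "order" service.
# In production these would be separate containers; here the inventory
# service runs in a background thread for a self-contained demo.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class InventoryHandler(BaseHTTPRequestHandler):
    """Inventory service: owns one business capability (stock lookups)."""

    STOCK = {"widget": 3}  # illustrative data, not from the article

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "stock": self.STOCK.get(item, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # silence per-request logging
        pass


def start_inventory_service():
    """Start the inventory service on a free localhost port."""
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0: OS picks
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


def can_fulfill(item, qty, inventory_port):
    """Order service logic: call the inventory service over its HTTP API."""
    with urlopen(f"http://127.0.0.1:{inventory_port}/{item}") as resp:
        stock = json.load(resp)["stock"]
    return stock >= qty


if __name__ == "__main__":
    server = start_inventory_service()
    port = server.server_address[1]
    print(can_fulfill("widget", 2, port))  # True: 3 in stock
    print(can_fulfill("widget", 5, port))  # False: only 3 in stock
    server.shutdown()
```

Because the two services share nothing but the HTTP contract, either side can be rewritten, redeployed, or scaled independently, which is exactly what makes the pattern a natural fit for containers.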
To manage all of those containers and the microservices running in them, you also need an orchestration platform, such as Kubernetes or Docker Swarm. These platforms have built-in security, but it is focused mostly on containment: making sure nothing gets out. The issue then becomes: what is going on inside?