A part of the Anti-Pattern Series.
Containers, as popularized by the people at Docker, are a wonderful technology that has already cemented itself as the best-in-class solution for deploying applications onto production infrastructure, replacing the clumsy, heavy, expensive, and complicated proprietary virtual-machine strategies that came before. However, the dev community has been too eager to apply this new concept to areas where it does not belong. I shall provide several arguments for why containers should be avoided for any general use case, and I urge readers to imagine how each scenario could adversely impact a project, its velocity, and its stakeholders before deciding to use them.
Containers, by design, are opaque.
The drawback here should be obvious. Anyone using your container cannot easily see what is going on inside it, nor influence, tune, or configure it. Take, for example, the anti-pattern of building artifacts in a container. The end user, treating your container as a black box, knows none of the particulars of what the build container does. Is it running code-quality checks as well? Is it minifying? Transpiling? Loading the dependencies correctly? Engineers want to be shipping code, not reverse-engineering a container’s behavior.
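As an illustration, consider a typical multi-stage build image (the project and commands below are hypothetical). Every consequential decision lives inside the Dockerfile's RUN steps, and only the finished artifact escapes; the consumer of the image sees none of it:

```dockerfile
# Hypothetical build container: the end user sees only the output, not the choices.
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
# Is it installing dev dependencies? Pinning versions? Invisible from outside.
RUN npm ci
COPY . .
# Is it minifying? Transpiling? The flags that decide this live here.
RUN npm run build

FROM nginx:alpine
# Only the finished artifact leaves the black box.
COPY --from=build /app/dist /usr/share/nginx/html
```

If the build misbehaves, the user's only recourse is to open the Dockerfile (if they even have it) and reverse-engineer each step.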
Containers add new overhead to the actual problem you are trying to solve.
In the context of the artifact-building container, environment variables are paramount to its correct behavior. Unfortunately, the container adds yet another layer to the tapestry of environment values, with uncertainty about which layer has the last say over any critical setting.
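A minimal sketch of the layering problem, using only the shell (the file path and URLs are made up). Each later layer silently overrides the one before it; a container then stacks Dockerfile ENV, --env-file, and docker run -e on top of these:

```shell
# Layer 1: a .env-style file (hypothetical path and value)
echo 'API_URL=https://file.example' > /tmp/demo.env
. /tmp/demo.env

# Layer 2: a shell export wins over the file
export API_URL="https://shell.example"

# Layer 3: a per-command override wins over the export
API_URL="https://flag.example" sh -c 'echo "$API_URL"'   # prints https://flag.example

# A container adds three more layers on top of this stack
# (requires a Docker daemon, so shown as comments only):
#   ENV API_URL=...            in the Dockerfile
#   docker run --env-file ...  at launch
#   docker run -e API_URL=...  which wins over both
```

Three layers are already enough to make "which value am I actually running with?" a debugging session; the container doubles the count.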
Containers are very complex, and by opting to use them to solve your problem, you must now understand how the particular container technology works, along with concerns such as port forwarding, networking, roundabout access to the host’s filesystem, forwarding errors and exit codes to the host, and any abstractions that leak out of the container.
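Exit-code forwarding alone illustrates the point: a plain shell propagates a child's exit status for free, while a container runtime is one more layer that must replicate that behavior correctly (the Docker command below is illustrative and needs a running daemon):

```shell
# A plain child process: the exit status comes straight back to the caller.
sh -c 'exit 3'
echo "$?"   # prints 3

# With a container, the same status must survive an extra layer:
# runtime, daemon, and CLI all sit between you and the process.
# (Illustrative; requires a Docker daemon, so commented out.)
#   docker run --rm alpine sh -c 'exit 3'; echo "$?"
```

Multiply this by ports, filesystems, signals, and logging, and the "solution" has its own learning curve.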
Containers typically follow a layering approach, closing the door to composition, a cornerstone of good software engineering.
Projects often find themselves pulling tools from different platforms. The issue with containers is that it is not trivial to have a single “Java and Ruby” container that solves your problem. In general, once the container is built, it’s closed off from modification, as injecting and composing solutions into an existing container is nontrivial.
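To make this concrete, the usual workaround for a "Java and Ruby" image is to graft files from one image into another with COPY --from. This copies bytes, not packages: no dependency resolution, no shared-library checks, no PATH wiring beyond what you do by hand. The install path below matches current eclipse-temurin images but is image-specific and can drift:

```dockerfile
# Illustrative only: grafting a JDK into a Ruby image is file copying,
# not composition.
FROM eclipse-temurin:21 AS jdk

FROM ruby:3.3
# Copy the JDK's install tree wholesale (path may change between image versions).
COPY --from=jdk /opt/java/openjdk /opt/java/openjdk
ENV JAVA_HOME=/opt/java/openjdk
ENV PATH="${JAVA_HOME}/bin:${PATH}"
# Nothing verifies that the copied JDK's native dependencies exist in this image.
```

Contrast this with a package manager or a library dependency, where combining two tools is the normal, supported operation.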
Container technology is not standardized
Without a standardized specification, it will be difficult for general, robust container-based solutions to emerge, as they will always have to contend with the specifics of the underlying container implementation. This means that if you build a Docker-based solution, I may run into trouble deploying it into my Kubernetes fleet.
Containers introduce yet another point of failure
Once you adopt a container-based solution, it becomes paramount to maximize your container registry’s availability, since losing it means that development comes screeching to a halt. Containers also bring their own host of optimization issues, including caching images, loading images over the network, authorizing access to images, and so on.
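Even mitigating the registry as a single point of failure is its own project. Docker's daemon configuration, for instance, accepts registry mirrors (the mirror URL below is hypothetical), and each mirror you stand up then needs its own caching, authorization, and monitoring:

```json
{
  "registry-mirrors": ["https://mirror.registry.example.com"]
}
```

A sketch of /etc/docker/daemon.json; none of this work existed before the container entered the picture.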
You are probably already working inside of a container
There is a very good chance that the container you are trying to build is itself going to run inside a container. If that is the case, and if the case for containers is confidence that the environment is ‘clean’, then using yet another container serves no purpose except adding more overhead.
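It is worth checking before you nest. A rough heuristic on Linux (not exhaustive; runtimes vary) is to look for Docker's marker file or container names in the init process's control groups:

```shell
# Rough sketch: detect whether we are already running inside a container.
in_container() {
  # Docker creates /.dockerenv at the container's filesystem root.
  if [ -f /.dockerenv ]; then return 0; fi
  # cgroup paths often name the runtime when containerized (Linux, cgroup v1).
  grep -qE '(docker|kubepods|containerd)' /proc/1/cgroup 2>/dev/null
}

if in_container; then
  echo "already inside a container"
else
  echo "running on a plain host"
fi
```

If this prints the former on your CI runner, the "clean environment" argument for adding another container layer has already been spent.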
Containers are, quite simply, the simplest solution for shipping applications onto production infrastructure. They are also, quite often, not the simplest solution for many other problems. Keep it simple. If you are going to use containers for other purposes, ensure that they make sense for the problem, and that the job is done very well.