Knowing the advantages of using Docker for builds

One of the main traditional problems with builds was having an adequate build environment with all the dependencies needed to run the full build. This could include things such as the compiler, the test framework to run the tests, any static analysis tools, and the package manager. A discrepancy in versions could also produce errors.

As we've seen before, Docker is a fantastic way of encapsulating our software. It allows us to create an image that contains both our code and all the tools needed to proceed through every step of the build.
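As a minimal sketch of such a build image (the package names and paths here are illustrative, not the book's example code), a Dockerfile could bundle the test runner and static analysis tools alongside the code:

    FROM python:3.7-slim

    # Install the build tooling: test runner and static analysis
    RUN pip install pytest flake8

    # Install the application dependencies, then copy the code in
    COPY requirements.txt /code/
    RUN pip install -r /code/requirements.txt
    COPY . /code/
    WORKDIR /code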

In the previous chapter, we saw how to run unit tests in a single command, based on a build image. The image itself can run its own unit tests. This abstracts the test environment and explicitly defines it. The only dependency necessary here is to have Docker installed.
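With an image like the sketch above, the whole test run collapses into a couple of commands; the file and image names here are hypothetical:

    $ docker build -f Dockerfile.build -t myservice-test .
    $ docker run myservice-test pytest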

Keep in mind that a single build can generate multiple images and make them work in coordination. We saw how to run unit tests in the previous chapter, by generating a service image and a database image, but there are more possible usages. For example, you could run the tests on two different operating systems by creating an image from each operating system, or from different Python interpreter versions, and check whether the tests pass in all of them.
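One way to sketch the multiple-interpreter case is with a build argument. Assuming the Dockerfile parameterizes its base image (all names here are illustrative):

    ARG PYTHON_VERSION=3.7
    FROM python:${PYTHON_VERSION}-slim

The same tests can then be run against several Python versions:

    $ docker build --build-arg PYTHON_VERSION=3.7 -t myservice-test:py37 .
    $ docker build --build-arg PYTHON_VERSION=3.8 -t myservice-test:py38 .
    $ docker run myservice-test:py37 pytest
    $ docker run myservice-test:py38 pytest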

Using Docker images allows for standardization across all environments. We can run images locally in a development environment, using the same commands we use in our automated environment. This streamlines finding bugs and problems, as it creates the same environment, including an encapsulated operating system, everywhere the build is run.

Do not underestimate this element. Before Docker, a developer working on a laptop running Ubuntu, but needing to run code that would be deployed on CentOS, had to install a Virtual Machine (VM) and follow steps to set up an environment similar to the one in production. Invariably, the local VM would deviate, as it was difficult to keep every developer's local VM in sync with the one in production; in addition, any automated-build tool might have its own requirements, such as not supporting the old version of CentOS running in production.

To make things worse, sometimes different projects were installed on the same VM to avoid having one VM per project, which could cause compatibility problems.

Docker massively simplifies this problem, in part by forcing you to explicitly declare what the dependencies are, and by reducing the surface actually required to run our code.

Note that we don't necessarily need to create a single step that runs the whole build; the build could be several Docker commands, even making use of different images. But the requirement is that they are all contained in Docker, which is then the only software required to run the build.
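A hypothetical build script along these lines could chain several contained steps; every name here is illustrative:

    #!/bin/sh
    set -e  # stop at the first failing step

    # Each step runs inside Docker; only Docker is needed on the host
    docker build -f Dockerfile.build -t myservice-build .
    docker run myservice-build pytest    # unit tests
    docker run myservice-build flake8    # static analysis
    docker build -t myservice .          # the final, deployable image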

The main product of a build using Docker is a Docker image, or images. We will need to tag them properly, but only if the build is successful.
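A minimal sketch of that rule, assuming a hypothetical registry and version tag: the tag is applied and pushed only if the preceding steps succeed:

    # Tag and push only after a successful build and test run
    docker run myservice-test pytest && \
        docker tag myservice registry.example.com/myservice:v1.0 && \
        docker push registry.example.com/myservice:v1.0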