The Docker Container as a Service (CaaS) platform delivers a secure, managed application environment for developers to build, ship, and run enterprise applications and custom business processes. In the “build” part of this process, there are design and organizational decisions that need to be made.
What You Will Learn
In an enterprise, there can be hundreds or even thousands of applications developed by in-house and outsourced teams. Application technology stacks can vary from a simple Excel macro, to multi-tier J2EE, all the way to clusters of elastic microservices deployed on a hybrid cloud. Applications are also deployed to several heterogeneous environments (development, test, UAT, staging, production, etc.), each of which can have very different requirements. Packaging an application in a container with its configuration and dependencies guarantees that the application will always work as designed in any environment. The purpose of this document is to provide you with typical development pipeline workflows as well as best practices for structuring your development process using Docker EE (formerly known as Docker Datacenter).
This document is divided into two main parts. In the first part, you will learn about the overall development pipeline, how Docker EE integrates with existing build systems, and different workflows (developer, CI/CD, operations). In the second part, you will learn best practices for migrating existing applications to Docker such as what applications to Dockerize, what components to put in images, what configuration to put in containers, where to put different types of configuration, and how to store assets for building images and configuration in version control.
Development Pipeline Overview
This section discusses the general workflow and organization of the development pipeline. It also covers the specific developer, CI/CD, and operations workflows and environments.
A typical enterprise has separate development and operations teams and may have some level of DevOps adoption. In general, operations teams are responsible for delivering and supporting the infrastructure up to the operating systems and even middleware components. Development teams are responsible for building and maintaining the applications. There is also some type of continuous integration (CI) for automated build and testing as well as continuous delivery (CD) for deploying versions to the different environments.
A typical CI/CD workflow is shown in the following diagram:
It starts on the left-hand side with development teams building applications. A CI/CD system then runs unit tests, packages the applications and builds Docker images on the Docker Universal Control Plane (UCP). If all tests pass, the images can be signed using Docker Content Trust and shipped to Docker Trusted Registry (DTR). The images can then be run in other non-production environments for further testing. If the images pass these testing environments, they can be signed again and then deployed by the operations team to the production environment.
Enterprises typically have separate production and non-production UCP clusters, as previously shown. This is a natural fit with existing infrastructure organization and responsibilities. Enterprises typically have a production environment with higher security requirements, restricted operator access, a high-performance infrastructure, high-availability configurations, and full disaster recovery across multiple data centers. The non-production environment has different requirements, with the main goal being testing and qualifying applications for production. The interface between the non-production and production clusters is DTR.
The question of whether to have a separate UCP cluster per availability zone or have one "stretched cluster" mainly depends on the network latency and bandwidth between availability zones. There could also be existing infrastructure and disaster recovery considerations to take into account.
In an enterprise environment, where there can be hundreds of teams building and running applications, a best practice is to separate the build resources from the run resources. By doing this, the image building process does not affect the performance or availability of the running containers/services.
There are two common methods of building images using Docker EE:
- Developers build images on their own machines then push them to DTR - This is suitable if there is no CI/CD system and no dedicated build cluster. Developers have the freedom to push different images to DTR.
- A CI/CD process builds images on a build cluster and pushes them to DTR - This is suitable if an enterprise wants to control the quality of the images pushed into DTR. Developers commit Dockerfiles to version control. They can then be analyzed and controlled for adherence to corporate standards before the CI/CD system builds the images, tests them, and pushes them to DTR. In this case CI/CD agents should be run directly on the dedicated build nodes.
Note: In the CI/CD job it is important to ensure that images are built and pushed from the same Docker node so there is no ambiguity in the image that is pushed to DTR.
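As a sketch, a CI job can satisfy this by running the build and the push in the same step against the same engine (the image name and `BUILD_NUMBER` variable are illustrative assumptions, not from this document):

```
# Build and push from the same Docker node: both commands run in one job
# step against the same engine, so the pushed image is exactly the one built.
docker build -t dtr.example.com/ci/myapp:${BUILD_NUMBER} .
docker push dtr.example.com/ci/myapp:${BUILD_NUMBER}
```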
Unlike the separate production and non-production UCP clusters, enterprises commonly have a single master DTR cluster. This allows enforcement of enterprise processes such as Security Scanning in a centralized place. If pulling images from globally distributed locations takes too long then you can use the DTR Content Cache feature to create local caches.
Note: Policy enforcement on image signing will not currently work if you have your DTR in a separate cluster from UCP.
Developers and application teams usually use different repositories within the organization to develop, deploy, and test their applications. This section discusses the following diagram of a typical developer workflow using Docker EE as well as their interactions with the repositories:
A typical developer workflow follows these steps:
- Develop Locally - On the developer's machine or environment, the developer locally builds images, runs containers, and tests their containers. There are several types of files, and respective repositories, that are used to build Docker images.
- Version Control - This is used mainly for text-based files such as Dockerfiles, `docker-compose.yml`, and configuration files. Small binaries can also be kept in the same version control system. Examples of version control systems are Git, SVN, Team Foundation Server, VSTS, and ClearCase.
- Repository Manager - These hold larger binary files and artifacts such as Maven/Java, npm, NuGet, and RubyGems packages. Examples include Nexus, Artifactory, and Archiva.
- Package Repository - These hold packaged applications specific to an operating system such as CentOS, Ubuntu, and Windows Server. Examples include yum, apt-get, and PackageManagement.
After building an image, developers can run the container using the environment variables and configuration files for their environments. They can also run several containers together as described in a `docker-compose.yml` file and test their application.
- Push Images - Once an image has been tested locally, it can be pushed to Docker Trusted Registry. The developer must have an account on DTR and can push to a repository under their user account for testing on UCP. For example:
```
docker push dtr.example.com/kathy.seaweed/apache2:1.0
```
- Deploy on UCP - The developer might want to do a test deployment on an integration environment on UCP in the case where the development machine does not have the ability or access to all the resources to run the entire application. They might also want to test whether the application scales properly if it's deployed as a service. In this case the developer would use CLI-based access to deploy the application on UCP.
First run `$ eval $(<env.sh)` to point the Docker client to UCP. Then run the following commands:
```
$ docker run -dit --name apache2 dtr.example.com/kathy.seaweed/apache2:1.0
$ docker-compose --project-name wordpress up -d
$ docker service create --name apache2 dtr.example.com/kathy.seaweed/apache2:1.0
```
- Test the Application - The developer can then test the deployed application on UCP from their machine to validate the configuration of the test environment.
- Commit to Version Control - Once the application is tested on UCP the developer can commit the files used to create the application, its images, and its configuration to version control. This commit triggers the CI/CD workflow.
Developer Environment and Tools
A developer's machine should have an edition of Docker (Windows, Mac, or Linux) installed on it. The installation provides Docker Engine, Docker CLI client, Docker Compose, and Docker Notary command line.
In some enterprises, the version of the operating system used on developer machines is older and not compatible with Docker for Windows or Mac. In this case Docker Toolbox can be used.
The Docker editions do not provide a native IDE for developing with Docker. The primary interface is the command line interface. However, most leading IDEs (NetBeans, Eclipse, IntelliJ, Visual Studio) have some support for Docker through plugins or add-ons. Our labs contain tutorials on how to set up and use common developer tools and programming languages with Docker.
Tip: Optimizing image sizes. If an image becomes too large, a quick way to identify possible optimizations is the `docker history <image>` command. It tells you which lines in the Dockerfile added what size to the image.
Docker Client CLI Contexts
When working with Docker EE and the Docker command line, it is important to keep in mind the context that the command is running in.
- Local Docker Engine - This is the main context used for development of Dockerfiles and testing locally on the developer's machine.
- Remote UCP CLI - CLI-based access is used for building and running applications on a UCP cluster.
- UCP GUI - The Docker EE web user interface provides an alternative to the CLI.
- Remote UCP node - Sometimes it can be useful for debugging to directly connect to a node on the UCP cluster. In this case, SSH can be used and the commands are executed on the Docker Engine of the node.
A quick way to see which context you are in is to run the `export` command. If you have a `DOCKER_HOST` environment variable set, then you have most likely configured your UNIX session to use the UCP client bundle, meaning all Docker commands are executed on the UCP cluster.
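As a minimal sketch, the same check can be scripted in the shell (the messages are illustrative):

```shell
# Report which Docker endpoint the current shell session targets.
if [ -n "${DOCKER_HOST:-}" ]; then
    echo "remote engine: ${DOCKER_HOST}"
else
    echo "local engine"
fi
```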
A CI/CD platform uses different systems within the organization to automatically build, deploy, and test applications. This section discusses a typical CI/CD workflow using Docker EE and its interactions with those systems, as shown in the following illustration:
A typical CI/CD workflow follows these steps:
- Build Application — A change to the version control of the application triggers a build by the CI Agent. A build container is started and passed parameters specific to the application. The container can run on a separate Docker host or on UCP. The container obtains the source code from version control, runs the commands of the application's build tool, and finally pushes the resulting artifact to a Repository Manager.
- Build Image — The CI Agent pulls the Dockerfile and associated files to build the image from version control. The Dockerfile is setup so that the artifact built in the previous step is copied into the image. The resulting image is pushed to DTR. If Docker Content Trust has been enabled and Notary has been installed, then the image is signed with the CI/CD signature.
- Deploy Application — The CI Agent can pull a run-time configuration from version control (e.g. `docker-compose.yml` plus environment-specific configuration files) and use it to deploy the application on UCP via CLI-based access.
- Test Application — The CI Agent deploys a test container to test the application deployed in the previous step. If all of the tests pass, then the image can be signed with a QA signature by pulling and pushing the image to DTR. This push triggers the Operations workflow.
Tip: The CI Agent can also be Dockerized, however, since it runs Docker commands, it needs access to the host's Docker Engine. This can be done by mounting the host's Docker socket, for example:
```
docker run --rm -it --name ciagent \
    -v /var/run/docker.sock:/var/run/docker.sock \
    ciagent:1
```
Tip: Use Docker images as build caches. When applications are built in a container, often there is a need to download dependencies to compile the application. An enterprise cache or proxy such as Nexus is also used to speed this up. To speed the build up even further you can create a "build base image" with pre-loaded common dependencies from which the build container can inherit. This avoids mounting shared volumes to cache the dependencies between build containers.
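A sketch of such a build base image for a Java/Maven stack (the base image tag and the presence of a shared `pom.xml` are assumptions, not from this document):

```dockerfile
# Hypothetical "build base image" with pre-loaded common Maven dependencies.
FROM maven:3-jdk-8
# Copy only the dependency manifest so this layer is reused across builds
COPY pom.xml /tmp/pom.xml
# Download the declared dependencies into the image's local repository
RUN mvn -f /tmp/pom.xml dependency:go-offline
```

Build containers that inherit from this image start with the dependency cache already populated, so application builds skip most downloads without mounting shared volumes.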
CI/CD Environment and Tools
The nodes of the CI/CD environment where Docker is used to build applications or images should have a CS Engine installed. The nodes can be labelled "build" to create a separate cluster.
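For example (the label, node, and image names are illustrative), nodes can be labelled and the CI agent constrained to run only on them:

```
# Label a node as part of the build cluster
docker node update --label-add zone=build build-node-1

# Constrain the CI agent service to labelled build nodes
docker service create --name ciagent \
    --constraint 'node.labels.zone == build' \
    ciagent:1
```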
There are many CI/CD software systems available (Jenkins, Visual Studio Team Services, TeamCity, etc.). Most of the leading systems have some support for Docker through plugins or add-ons. However, to ensure the most flexibility in creating CI/CD workflows, it is recommended that you use the native Docker CLI or REST API for building images and deploying containers/services.
The operations workflow usually consists of two parts. It starts at the beginning of the entire development pipeline creating base images for development teams to use, and it ends with pulling and deploying the production ready images from the developer teams. The workflow for creating base images is the same as the developer workflow, so it is not shown here. However, the following diagram illustrates a typical Operations workflow for deploying images in production:
A typical operations workflow follows these steps:
Deploy Application — The deployment to production can be triggered automatically via a change to version control for the application or it can be triggered by operations. It can also be executed manually or done by a CI/CD agent.
A tag of the deployment configuration files specific to the production environment is pulled from version control. This includes a `docker-compose` file or scripts which deploy the services, as well as configuration files. Secrets such as passwords or certificates that are specific to production environments should be added or updated; Docker 17.03 (and Docker Engine 1.13) contains native secrets management. The CD agent can then deploy the production topology in UCP.
Test Application — The CD Agent deploys a test container to test the application deployed in the previous step. If all of the tests pass, then the application is ready to handle production load.
Load Balancing — Depending on the deployment pattern (Big Bang, Rolling, Canary, Blue-Green, etc.), an external load balancer, DNS server, or router is reconfigured to send all or part of the requests to the newly deployed application. The older version of the application can remain deployed in case a rollback is needed. Once the new application is deemed stable, the older version can be removed.
Enterprise Base Images
The Operations team will usually build and maintain "base images." They typically contain the OS, middleware, and tooling to enforce enterprise policies. They might also contain enterprise credentials used to access repositories or license servers. The development teams can then inherit from these base images by using the `FROM` keyword in their Dockerfiles and adding their application-specific components, applications, and configuration to their own application images.
Tip: Squash function. Since base images do not change often and are widely used within an organization, minimizing their size is important. You can use `docker build --squash -t <image> .` to collapse the build into a single layer and optimize the size of the image. Note that you lose the ability to share and modify individual layers, so squashing is recommended for base images and not necessarily for application images, which change often.
The nodes of the production environment should have CS Engine installed. For more guidelines and best practices around installing and configuring Docker EE for production please refer to Docker EE Design and Best Practices. For best practices around security and Docker EE for production please refer to Securing Docker EE and Security Best Practices.
Dockerizing Applications in the Enterprise
In this section, best practices for migrating existing applications to Docker are explained. It starts with high-level decisions such as what applications to Dockerize and then moves on to more detailed decisions such as what components to put in images, what configuration to put in containers, where to put different types of configuration, and finally how to store assets for building images and configuration in version control.
What Applications to Dockerize?
Deciding which components of an application to Dockerize depends on the difficulty of Dockerizing the component versus the potential gains in speed, portability, compute density, etc. There are several ways to build Docker images, including:
- From a Dockerfile
- By directly modifying a running container and then saving the changes to an image
The preferred way is to build images from a Dockerfile so that the images are always reproducible. Another advantage is that Dockerfiles can be put under version control. However, with some components, this approach is too difficult. The following sections describe, in order of increasing difficulty, different categories of components and approaches for Dockerizing them.
In general, components which are stateless are the easiest to Dockerize because there is no need to take into account persistent data, such as with databases or a shared filesystem. This is also a general best practice for microservices and allows them to scale more easily, as each new instance can receive requests without any synchronization of state.
Some examples of these are:
- Web servers with static resources — Apache, Nginx, IIS
- Application servers with stateless applications — Tomcat, nodeJS, JBoss, Symphony, .NET
- Microservices — Spring Boot, Play
- Tools — Maven, Gradle, scripts, tests
Stateless images can be created using a Dockerfile.
Components which are stateful are not necessarily harder to Dockerize. However, because the state of the component must be stored or synchronized with other instances, there are operational aspects to take into account.
Some examples of these are:
- Application servers with stateful applications — There is often a need to store user sessions in an application. Two approaches to handling this case are to use a load balancer with session affinity to ensure the user always goes to the same container instance or to use an external session persistence mechanism which all container instances share. There are also some components that provide native clustering such as portals or persistence layer caches. It is usually best to let the native software manage synchronization and state between instances. Having the instances on the same overlay network allows them to communicate with each other in a fast, secure way.
- Databases — Databases usually need to persist data on a filesystem. The best practice is to only containerize the Engine and not the data itself. This can be done using a host volume, for example:
```
docker run -dit -v /var/myapp/data:/var/lib/postgresql/data postgres
```
- Applications with shared filesystems - Content Management Systems (CMS) use filesystems to store documents such as PDFs, pictures, Word files, etc. This can also be done using a host volume which is often mounted to a shared filesystem so several instances of the CMS can access the files simultaneously.
Stateful images can usually be created using a Dockerfile.
Complex Product Installation
Components that have a complex product installation are usually the hardest to Dockerize because the installation cannot be captured in a Dockerfile.
Some examples of these are:
- Non-scriptable installation — These can include GUI-only installation/configuration or products that require multi-factor authentication.
- Non-idempotent installation process — Some installation processes are asynchronous: the installation script terminates, but background processes it started continue running. Completing the entire installation can involve waiting for a batch process to run or a cluster to synchronize, without a return signal or clear log message.
- Installation with external dependencies — Some products require an external system to reply for downloading or activation. Sometimes for security reasons this can only be done on a specific network or for a specific amount of time making it difficult to script the installation process.
- Installation that requires a fixed IP address — Some products require a fixed IP address for a callback at install time but can then be configured, once installed, to use a hostname or DNS name. Since container IP addresses are dynamically generated, the IP address can be difficult to determine at build time.
These cases should be done by running a container, installing the product, and then saving the changes to an image. For example:
```
$ docker commit -a "John Smith" -m "Installed CMS" mycontainer cms:2
```
Tip: Tools or Test Container. When debugging services that have dependencies on each other, it is often helpful to create a container with tools to test connectivity or the health of a component. Common examples are network tools such as telnet, netcat, curl, and wget, as well as SQL clients and logging agents. This avoids adding unnecessary debugging tools to the containers that run the production loads.
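For instance (the image, network, and service names are illustrative assumptions), a throwaway tools container can be attached to the application's network to probe a database service:

```
# Run a disposable tools container on the application's network and
# check that the "db" service is reachable on its port
docker run --rm -it --network myapp_net mytools:1 \
    nc -zv db 5432
```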
What to Include in an Image
In the trend towards microservices, another question that arises is which components of an application stack to put in an image. You can include an entire application stack, such as the official GitLab image (https://hub.docker.com/r/gitlab/gitlab-ce/), or you can do the opposite and break up an existing monolithic application into microservices, each residing in its own image.
In general, the best practice is to have one component per image. For example, a reverse proxy, an application server, or a database engine would each have its own image. What about an example where several web applications (war) are deployed on the same application server? Should they be separated and each have its own image or should they be in the same image? The criteria for this decision are similar to non-containerized architectural decisions:
- Release Lifecycle — Are the war release schedules tightly coupled or are they independent?
- Runtime Lifecycle — If one war stops functioning should all wars be stopped?
- Scalability — Do the wars need to be scaled separately or can they be scaled together?
- Security — Does one war need a higher level of security such as TLS?
- High Availability — Is one war mission-critical, needing redundancy, while the others can tolerate a single point of failure and downtime?
Similarly with microservices, the same criteria apply. For example, consider a microservice that depends on a logging agent to push logs to a centralized logging database. The following diagram shows two different strategies for a high availability deployment for the microservice.
If the microservice and logging agent are loosely coupled, they can be run in separate containers such as in the configuration on the left. However, if the service and the logging agent are tightly coupled and their release lifecycles are identical, then putting the two processes in the same container can simplify deployments and upgrades as illustrated in the configuration on the right. To manage multiple processes there are several lightweight init systems for containers such as dumb-init and runit.
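As a sketch of the tightly coupled configuration (the binary and script names are hypothetical), dumb-init can serve as the container's minimal init process:

```dockerfile
# Hypothetical image running a service plus its logging agent in one container.
FROM debian:stable-slim
COPY dumb-init /usr/local/bin/dumb-init
COPY myservice logagent start.sh /usr/local/bin/
# dumb-init runs as PID 1, reaps zombies, and forwards signals to its children
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
# start.sh launches logagent in the background, then exec's myservice
CMD ["start.sh"]
```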
A question that arises frequently is which parts of the component should go into an image? The engine or server, the application itself, the configuration files? There are several main approaches:
- Create only a base image and inject the things that vary per release or environment
- Create an image per release and inject the things that vary per environment
- Create an image per release and environment
In some cases, a component does not have an application associated with it, or its configuration does not vary per environment, so a base image is appropriate. An example of this might be a reverse proxy or a database. In other cases, such as an application which requires an application server, using a base image would require mounting a volume containing a specific version of the application.
The following table summarizes the advantages and disadvantages of each choice:
| | Base Image | Release Image | Environment Image |
|---|---|---|---|
| What's inside the image | OS, middleware, dependencies | Base image, release artifacts, configuration generic to the environment | Release image, configuration specific to the environment |
| What's outside the image | Release artifacts, configuration, secrets | Configuration specific to the environment, secrets | Secrets |
| Advantages | Most flexible at run time; simple; one image for all use cases | Some flexibility at run time while securing a specific version of an application | Most portable, traceable, and secure, as all dependencies are in the image |
| Disadvantages | Less portable, traceable, and secure, as dependencies are not included in the image | Less flexible; requires management of release images | Least flexible; requires management of many images |
| Examples | Tomcat (`dtr.example.com/base/tomcat7:3`) | Tomcat + myapp-1.1.war (`dtr.example.com/myapp/tomcat7:3`) | Tomcat + myapp-1.1.war + META-INF/context.xml (`dtr.example.com/myapp/tomcat7:3-dev`) |
Usually a good choice is to use a release image. This gives the best combination of a sufficiently immutable image while maintaining the flexibility of deploying to different environments and topologies. How to configure the images per different environments is discussed later in the Configuration Management section.
Docker images natively provide inheritance. One of the benefits of deriving from base images is that any change to a base or upstream image is inherited by the child image simply by rebuilding it, without any change to the child Dockerfile. By using inheritance, an enterprise can very simply enforce policies with no changes to the Dockerfiles of its many applications. Typically, an enterprise develops a hierarchy of base images, depending on how diverse its technology stacks are. The following is an example of an image hierarchy.
On the left are the enterprise-wide base images typically provided by the global operations team, and on the right are the application images. Even on the application side, depending on how large an application or program is, there can be a hierarchy as well.
Tip: Create a project base image. In a project team with a complicated application stack there are often common libraries, tools, configurations, or credentials that are specific to the project but not useful to the entire enterprise. Put these items in a "project base image" from which all project images derive.
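A minimal sketch of such a project base image (the image name, library paths, and certificate file are illustrative assumptions):

```dockerfile
# Hypothetical project base image derived from an enterprise base image.
FROM dtr.example.com/base/tomcat7:3
# Project-wide libraries and credentials shared by all project images
COPY project-libs/ /usr/local/tomcat/lib/
COPY project-ca.crt /usr/local/share/ca-certificates/project-ca.crt
RUN update-ca-certificates
```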
A single enterprise application will typically have four to twelve environments to deploy on before going into production. Without Docker, installing, configuring, and managing these environments is done with a configuration management system such as Puppet, Chef, Salt, or Ansible. Docker natively provides mechanisms, through Dockerfiles and `docker-compose` files, to manage the configuration of these environments as code, which can thus be handled through the version control tools already used by development teams.
The topologies of application environments can be different in order to optimize resources. In some environments it doesn't make sense to deploy and scale all of the components in an application stack. For example, in functional testing only one instance of a web server is usually needed whereas in performance testing several instances are needed, and the configuration is tuned differently. Some common topologies are:
- Development — A single instance per component, debug mode
- Integration, Functional testing, UAT, Demonstration - A single instance per component, small dataset, and integration to test external services, debug mode
- Performance Testing — Multiple instances per component, large dataset, performance tuning
- Robustness Testing — Multiple instances per component, large dataset, integration to test external services, batch processing, and disaster recovery, debug mode
- Production and Staging — Multiple instances per component, production dataset, integration to production external services, batch processing, and disaster recovery, performance tuning
The configuration of components and how they are linked to each other is specified in the `docker-compose` file. Depending on the environment topology, a different `docker-compose` file can be used. The `extends` feature can be used to create a hierarchy of configurations. For example:
```
myapp/
    common.yml                 <- common configurations
    docker-compose-dev.yml     <- development-specific, extends common.yml
    docker-compose-int.yml
    docker-compose-prod.yml
```
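A sketch of how `extends` ties such files together, following the version 2 Compose file format (the service name and settings are illustrative):

```yaml
# docker-compose-dev.yml (sketch)
version: "2"
services:
  web:
    extends:
      file: common.yml     # inherit the common "web" service definition
      service: web
    environment:
      - DEBUG=true         # development-specific override
```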
In a typical application stack there are tens or even hundreds of properties to configure in a variety of places. When building images and running containers or services, there are many choices as to where and when a property should be set, depending on how it is used: in a Dockerfile, a `docker-compose` file, an environment variable, an environment file, a property file, an entry point script, etc. This can quickly become very confusing in a complicated image hierarchy, especially when trying to adopt DRY principles. The following table shows some common groupings, based on lifecycles, to help determine where to put configurations.
| Lifecycle | Configuration | Where to put it |
|---|---|---|
| Yearly build time | Enterprise policies and tools | Enterprise base image Dockerfiles |
| Monthly build time | Application policies and tools | Application base image Dockerfiles |
| Monthly/weekly build time | Application release | Release image Dockerfiles |
| Weekly/daily deploy time | Static environment configuration | Environment variables, docker-compose, .env |
| Deploy time | Dynamic environment configuration | Secrets, entrypoint.sh, vault, CLI, volumes |
| Run time | Elastic environment configuration | Service discovery, profiling, debugging, volumes |
The process of figuring out where to configure properties is very similar to code refactoring. For example, properties and their values that are identical in child images can be abstracted into a parent image.
Starting with Docker 17.03 (and Docker CS Engine 1.13), native secrets management is supported. Secrets can be created and managed using RBAC in Docker EE. Although Docker EE can manage all secrets, there might already be an existing secrets management system, or there might be a requirement to have one central system to manage secrets in Docker and non-Docker environments. In these cases, a simple strategy for Docker environments is to create a master secret, managed by Docker EE, which an entry point script can use to access the existing secrets management system at startup time. The recovered secrets can then be used within the container.
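A minimal `entrypoint.sh` sketch of this master-secret pattern (the vault URL, header, and secret names are assumptions; `/run/secrets` is Docker's standard secrets mount path):

```
#!/bin/sh
set -e
# Master secret injected by Docker EE at /run/secrets/<name>
MASTER_TOKEN=$(cat /run/secrets/master_token)
# Use it to recover application secrets from the existing secrets system
DB_PASSWORD=$(curl -fsS -H "X-Vault-Token: ${MASTER_TOKEN}" \
    https://vault.example.com/v1/secret/myapp/db_password)
export DB_PASSWORD
# Hand off to the container's main process
exec "$@"
```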
Dockerfile Best Practices
As the enterprise IT landscape and the Docker platform evolve, best practices around the creation of Dockerfiles have emerged. Docker keeps a list of best practices on docs.docker.com.
Docker Files and Version Control
Docker truly allows the concept of "Infrastructure as Code" to be applied in practice. The files that Docker uses to build, ship, and run containers are text-based definition files and can be stored in version control. There are different text-based files related to Docker depending on what they are used for in the development pipeline.
- Files for creating images — Dockerfiles, `entrypoint.sh`, and configuration files
- Files for deploying containers or services — `docker-compose.yml`, configuration files, and run scripts
These files are used by different teams, from development to operations, across the development pipeline. Organizing them well in version control is important for an efficient development pipeline.
If you are using a "release image" strategy, it can be a good idea to separate the files for building images and those used for running them. The files for building images can usually be kept in the same version control repository as the source code of an application. This is because release images usually follow the same lifecycle as the source code.
```
myapp/
    src/
    test/
    Dockerfile
    docker-compose.yml    <- build images only
    conf/
        app.properties
        app.xml
    entrypoint.sh
```
A docker-compose file with only [build](https://docs.docker.com/compose/compose-file/#build) configurations for the different components in an application stack can be a convenient way to build the whole stack, or individual components, from a single file.
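Such a build-only compose file might look like this (service and image names are illustrative):

```yaml
# docker-compose.yml used only at build time
version: "2"
services:
  web:
    build: ./web                  # built from ./web/Dockerfile
    image: myorg/myapp-web:1.2.0  # tag applied to the built image
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    image: myorg/myapp-api:1.2.0
```

`docker-compose build` then builds every image in the stack, while `docker-compose build api` rebuilds a single component.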
The files for running containers or services follow a different lifecycle, so they can be kept in a separate repository. In this example, the configurations for all of the different environments are kept in a single branch. This allows for a very simple version control strategy and keeps the configurations for all environments in one place.
```
myapp/
    common.yml
    docker-compose-dev.yml
    docker-compose-int.yml
    docker-compose-prod.yml
    conf/
        dev.env
        int.env
        prod.env
```
However, this single-branch strategy quickly becomes difficult to maintain when different environments need to deploy different versions of an application. A better strategy is to keep each environment's run configuration in a separate branch. For example:
```
myapp/    <- int branch
    docker-compose.yml
    conf/
        app.env
```
The advantages of this strategy are multiple:

- Tags per release can be placed on a branch, allowing an environment to be easily rolled back to any prior tag.
- Listing the history of changes to the configuration of a single environment becomes trivial.
- When a new application release requires the same modification to all of the different environments and configuration files, it can be done using the merge function of the version control system as opposed to copying and pasting the changes into each configuration file.
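The branch-per-environment workflow can be sketched end to end in a throwaway repository (all file, branch, and tag names here are illustrative, not from the source):

```shell
# Self-contained sketch of the branch-per-environment workflow
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"

# Base commit holds the shared run configuration
mkdir conf
printf 'image: myorg/myapp:1.0.0\n' > docker-compose.yml
printf 'DB_HOST=localhost\n' > conf/app.env
git add .
git commit -qm "initial run configuration"
git branch -M dev

# The int branch diverges with environment-specific settings
git checkout -qb int
printf 'DB_HOST=db.int.example.com\n' > conf/app.env
git commit -qam "int: point at the int database"

# A new release lands on dev (touching a different file) ...
git checkout -q dev
printf 'image: myorg/myapp:1.2.0\n' > docker-compose.yml
git commit -qam "release 1.2.0"

# ... and is promoted to int by merging instead of copy/paste
git checkout -q int
git merge -q --no-edit dev
git tag myapp-1.2.0-int   # tag per release for easy rollback
```

After the merge, the int branch carries both its environment-specific `conf/app.env` and the new release, and the tag marks a known-good point to roll back to.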
Repositories for Large Files
When building Docker images, there will inevitably be large binary files that need to be included. Docker build does not allow access to files outside of the context path, and it is not a good idea to store these files directly in version control, especially a distributed one such as Git, as the repositories rapidly become too large and unwieldy.
There are several strategies for storing large files:
- Web Server — They can be stored on a shared filesystem, served by a web server, and then fetched with the `ADD <URL> <dest>` instruction in the Dockerfile. This is the easiest method to set up, but there is no support for file versioning or RBAC on files.
- Repository Manager — They can be stored as files in a repository manager such as Nexus or Artifactory, which provide support for versions and RBAC.
- Centralized Version Control — They can be stored as files in a centralized version control system such as SVN, which eliminates the problem of pulling all versions of large binary files.
- Git Large File Storage — They can be stored using Git LFS. This gives you all of the benefits of Git while keeping everything needed for the build in one context. However, there is a learning curve to using Git LFS.
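As a sketch, the web-server approach boils down to a single instruction (the URL and file names are illustrative):

```dockerfile
FROM ubuntu:16.04
# Pull the large binary from an internal web server instead of
# shipping it in the build context or in version control
ADD http://fileserver.example.com/artifacts/jdk-8u121-linux-x64.tar.gz /opt/
RUN tar -xzf /opt/jdk-8u121-linux-x64.tar.gz -C /opt
```

Note that `ADD` does not auto-extract archives fetched over HTTP, hence the explicit `tar` step, and that this approach provides no versioning or access control, as noted above.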
This document discusses the Docker development pipeline, integration with existing systems, different workflows (developer, CI/CD, operations), applications to Dockerize, image granularity, configuration management, configuration buckets, and version control. Follow these best practices to create an effective enterprise development pipeline.
Document Version: 1.0
Tested on: Docker EE 17.03, Docker Engine 1.13, DTR version 2.2.1, UCP 2.1.0