I'm having trouble understanding how using volumes for storage will affect my disk space usage.
I have image A which is a base image and comes with a lot of utilities my apps need. I have apps B and C which are images built from base image A. They install different languages to run my two different apps. Image A is 300MB and B and C each are 300MB.
If I create 10 containers each from apps B and C, how much disk space will be used?
Also, suppose I'm mounting an NFS share into all the containers, and any apps/processes within the containers only ever write app data, logs, etc. to the mounted NFS share, so it would seem no writes are taking place within the container. The mount point is /var/www/html. What will my disk usage look like?
As I currently understand it, in the first case my disk usage will be 300MB for the base image + 600MB for the two app images that build on it, therefore 900MB (I'm assuming the base image is shared). If containers created from app images B and C each write 100MB of data before being cleared, then my total disk usage will be 900MB + (100MB net data written to disk × number of containers)?

Is my understanding correct?
The layered filesystem reuses layers from parent images. If image A is 300MB and images B and C each show as 300MB, then those app images are adding nearly zero disk space of their own, reusing the entire content of the parent image. With all data being stored externally and no writes to each container's local RW filesystem, you could spin up as many of these containers as you want and still only use 300MB of disk.
If each of those app images actually adds 300MB of its own, and those 300MB differ from the parent's and from each other's (docker's build cache could let one app image reuse layers from the other if they ran the same commands), then each app image shows as 600MB, while the actual disk used is 900MB: 300MB for the parent and 300MB for each app image.
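You can see the difference between reported and actual size yourself; the image tags appB and appC below are placeholders for your own image names:

```shell
# Reported size counts the parent's layers in every child image:
docker images                  # appB and appC each show ~600MB in the SIZE column

# Layer-by-layer breakdown; A's layers appear identically under both children:
docker history appB
docker history appC

# Actual usage: layers shared between images are counted once, under SHARED SIZE:
docker system df -v
```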
Spinning up each container does not add to the used disk space until that container writes files to a local volume or the RW layer of the container.
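The arithmetic above can be sketched in plain shell; all the sizes are the hypothetical numbers from the question, in MB:

```shell
#!/bin/sh
# Layers shared by B and C (i.e. A's layers) are stored once on disk.
BASE=300                 # image A
APP_B=300                # layers image B adds on top of A
APP_C=300                # layers image C adds on top of A
IMAGES_TOTAL=$((BASE + APP_B + APP_C))   # 900MB on disk, even though B and C each *report* 600MB

# Containers cost nothing until they write to their RW layer:
CONTAINERS=10
RW_PER_CONTAINER=100     # MB each container writes locally (0 if everything goes to the NFS mount)
TOTAL=$((IMAGES_TOTAL + CONTAINERS * RW_PER_CONTAINER))
echo "images: ${IMAGES_TOTAL}MB, total with containers: ${TOTAL}MB"
```

With everything written to the NFS mount, RW_PER_CONTAINER is effectively 0 and the total stays at 900MB regardless of container count.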
Understanding this gets into the layered filesystem design. An image may consist of multiple layers, each of which is created once and can be reused by other images. Everything is stored as references to a content hash, and only when there are no more references to a hash will docker remove that layer from disk (e.g. on an image delete or prune).
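For example, removing one tag does not free layers that another image still references; only layers with no remaining references are reclaimed (appB is again a placeholder tag):

```shell
docker rmi appB       # deletes only the layers unique to appB; A's layers remain
                      # because appC (and A itself) still reference them
docker image prune    # reclaims dangling images whose layers have no references left
```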
When you turn an image into a container (with docker run or even docker create), the image contents are mounted as read-only layers, with a RW layer for the container mounted on top, and any volumes mounted on top of the layered filesystem. A read outside of a volume passes through the layers until it reaches one containing the file (or some other modification of the file, like its removal). So if the file was not modified, the read comes from one of the image layers, but if you created it in the RW layer, your read will pull that back. This results in the concept of images being immutable, while containers can each store their own changes for the lifetime of the container. You can run docker diff on a container to see what changes have been made to its RW layer. This diff is what gets stored into an image layer at each step of a build, or on a docker commit.
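You can watch the RW layer in action; the container name and image here are only illustrative:

```shell
# Start a throwaway container and change a file inside it
docker run -d --name demo nginx
docker exec demo sh -c 'echo hi > /tmp/scratch'

# Only the container's own change shows up; the image layers are untouched
docker diff demo      # prints something like: A /tmp/scratch

# Removing the container discards its RW layer (and the file with it)
docker rm -f demo
```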