I am in a position to deliver an application to a user, however it requires quite a few programs and components installed, which represents some overhead and explanation that I'd prefer to avoid. One method that I thought of that would be quite nice would be to simply supply a VM or Docker image. In this idea, the deliverable would be an image, and the user ideally would just double-click on the docker image and pop up a web interface showing a convenient front-end. (In this case I am imagining it would run Jupyter to allow interaction with a Python script I am developing.)
However, I'm still a bit confused by how Docker works. Instead of being able to send a double-clickable image, it seems you have to run

docker pull ...
docker run ...

against a registry. Is there a way to package an image up as a single file, so that the user could just do something like

docker run my-docker-thing.image
Yes, you can export and import Docker images as plain TAR files. To export an existing image into a TAR file, use the docker save command:

docker save your_image > your_image.tar
The TAR file created this way is self-contained and you can distribute it any way you like. On another host (with an already installed Docker engine), you can then import the TAR file using the docker load command (and subsequently start it with docker run):

docker load < your_image.tar
docker run your_image
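As a concrete end-to-end sketch of this workflow (the image name my-jupyter-app and the port mapping are assumptions for illustration, since you mentioned a Jupyter front-end), note that docker save output compresses well and docker load accepts gzipped input directly:

```shell
# On your machine: export the image, gzipping it to shrink the download
docker save my-jupyter-app | gzip > my-jupyter-app.tar.gz

# On the user's machine (Docker engine already installed):
# docker load reads compressed tarballs without needing a separate gunzip step
docker load < my-jupyter-app.tar.gz

# Start the container, publishing Jupyter's default port to the host
docker run -p 8888:8888 my-jupyter-app
```

The user would then open http://localhost:8888 in a browser, which is about as close to "double-click and get a web interface" as Docker gets without extra tooling.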
Some background notes (because you've asked why Docker works with images the way it does):
The layer filesystem allows Docker to work with images and containers very efficiently, in terms of disk space and container creation time. For example, you might have multiple local images that are all built on a common base image (like, for example, the ubuntu:16.04 image). The layer filesystem recognizes this and downloads the shared base layers only once when doing a docker pull.
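To illustrate the layer sharing (app-a and app-b are hypothetical images whose Dockerfiles both start with FROM ubuntu:16.04):

```shell
# The base layers are downloaded once and stored once on disk
docker pull ubuntu:16.04

# Both builds reuse the cached ubuntu:16.04 layers;
# only each image's own layers are created anew
docker build -t app-a ./app-a
docker build -t app-b ./app-b

# Inspect an image's layer stack; the bottom entries
# are the same layers the base image is made of
docker history app-a
```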
When you ship exported image files, you give up this advantage: an image saved as a file always contains all the filesystem layers that the image is built from. Consider an example in which you're working with a 200 MB base image and 10 MB of custom application data (the latter changing frequently as you release new versions of your application). Using docker save and docker load, every build of your application produces a new 210 MB tarball that you need to distribute in full. Using docker push and docker pull together with a centralized registry, only the 10 MB of changed image layers need to be transferred to and from the registry.
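The registry workflow from the example above might look like this (registry.example.com, the repository path, and the tag are placeholders):

```shell
# Tag the image for a registry
docker tag my-jupyter-app registry.example.com/me/my-jupyter-app:1.1

# Push: layers the registry already has are skipped, so a 10 MB
# application change uploads roughly 10 MB, not the full 210 MB
docker push registry.example.com/me/my-jupyter-app:1.1

# On the user's side, pull likewise fetches only the changed layers
docker pull registry.example.com/me/my-jupyter-app:1.1
```

For a user-facing deliverable this does mean the user needs access to the registry (Docker Hub, or one you host), which may or may not fit your "no setup overhead" goal better than a single TAR file.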