Steve - 1 year ago
Python Question

click-and-run docker image (or VM) with web interface?

I am in a position to deliver an application to a user, however it requires quite a few programs and components installed, which represents some overhead and explanation that I'd prefer to avoid. One method that I thought of that would be quite nice would be to simply supply a VM or Docker image. In this idea, the deliverable would be an image, and the user ideally would just double-click on the docker image and pop up a web interface showing a convenient front-end. (In this case I am imagining it would run Jupyter to allow interaction with a Python script I am developing.)

However, I'm still a bit confused by how Docker works. Instead of being able to send a double-clickable image, it seems you have to run

docker pull ...

which downloads a bunch of pieces and installs them somewhere, and then run a second command:

docker run ...

Is there a simpler, more file-oriented interface to Docker or some similar VM solution? Why does it have to manage things in such a weird way?

(I prefer Docker because it now has very good Windows and Mac support, and its images can be quite a bit smaller than a VM image, from what I understand.)

Edit: By "weird way", what I mean is, what would be more clear to me is that I simply download some kind of file
and run it with like,
docker run my-docker-thing.image
. Instead a single
docker pull
seems to be filling up my harddrive quickly in
with a bunch of files named something
and I have no idea what it is actually doing. I assume these files contain "pieces" that can be composed into an image, but is there a way to represent this in a single file?

Answer

Yes, you can import and export Docker images as simple TAR files. To export an existing image into a tar file, use the following docker save command:

docker save your_image > your_image.tar

The TAR file created this way is self-contained and you can distribute it any way you like. On another host (with an already installed Docker engine), you can then import the TAR file using the docker load command (and subsequently start it with docker run):

docker load < your_image.tar
docker run your_image
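For the Jupyter use case in the question, the whole hand-off can be scripted end to end. A minimal sketch, assuming a hypothetical image named my-jupyter-app that serves Jupyter on its default port 8888 (both the name and the port are illustrative assumptions, not from the question):

```shell
# On your machine: export the image to a single distributable file.
docker save my-jupyter-app > my-jupyter-app.tar

# On the user's machine: import the tarball into the local Docker engine...
docker load < my-jupyter-app.tar

# ...then start a container, publishing port 8888 so the user can
# open http://localhost:8888 in a browser for the web front-end.
docker run -p 8888:8888 my-jupyter-app
```

These commands require a running Docker engine on both machines; only the tarball itself travels as a plain file.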

Some background notes (because you've asked why Docker works with images the way it does):

The layer filesystem allows Docker to work with images and containers very efficiently, both in terms of disk space and container creation time. For example, you might have multiple local images that are all built on a common base image (say, ubuntu:16.04). The layer filesystem recognizes this and downloads the base image only once when doing a docker pull.

Using exported image files, you give up this advantage: an image saved as a file always contains all of the filesystem layers the image is built from. Consider an example in which you're working with a 200 MB base image and 10 MB of custom application data (the latter changing frequently as you release new versions of your application). docker save and docker load will always produce a new 210 MB tarball that you need to distribute for each build of your application. With docker push and docker pull and a centralized registry, only the 10 MB of changed image layers needs to be transferred to and from the registry for each release.
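To make the trade-off concrete, the cumulative transfer sizes can be worked out with a little shell arithmetic. The layer sizes are the 200 MB / 10 MB figures from the example above; the release count of ten is an assumption for illustration:

```shell
BASE_MB=200    # base image layer (shared, rarely changes)
APP_MB=10      # application layer (changes every release)
RELEASES=10    # hypothetical number of releases to distribute

# save/load: every tarball repeats all layers
echo "save/load total: $(( RELEASES * (BASE_MB + APP_MB) )) MB"   # 2100 MB

# push/pull: base layer transferred once, then only the changed layer
echo "push/pull total: $(( BASE_MB + RELEASES * APP_MB )) MB"     # 300 MB
```

So over ten releases the registry-based workflow moves roughly a seventh of the data, and the gap widens with every additional release.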
