
I am trying to build a backup and restore solution for the Docker containers that we work with.

I have Docker base image that I have created, ubuntu:base, and do not want have to rebuild it each time with a Docker file to add files to it.

I want to create a script that runs from the host machine and creates a new container using the ubuntu:base Docker image and then copies files into that container.

How can I copy files from the host to the container?



The docker cp command can be used to copy files between the host and a container.

One specific file can be copied TO the container like:

docker cp foo.txt mycontainer:/foo.txt

One specific file can be copied FROM the container like:

docker cp mycontainer:/foo.txt foo.txt

For emphasis, mycontainer is a container name or ID, not an image.

Multiple files contained by the folder src can be copied into the target folder using:

docker cp src/. mycontainer:/target
docker cp mycontainer:/src/. target

Reference: Docker CLI docs for cp

In Docker versions prior to 1.8 it was only possible to copy files from a container to the host. Not from the host to a container.
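Putting this together for the original question, a host-side script might look like the sketch below. The source directory, target path, and committed tag are all assumptions:

```shell
# Sketch: create a container from the base image, copy host files in,
# and commit the result as a new image. All names are placeholders.
seed_image() {
    base_image="$1"    # e.g. ubuntu:base
    src_dir="$2"       # host directory whose contents to copy
    new_tag="$3"       # tag for the committed image

    cid=$(docker create "$base_image")        # create, but don't start
    docker cp "$src_dir/." "$cid:/opt/seed"   # copy files into it
    docker commit "$cid" "$new_tag"           # snapshot as a new image
    docker rm "$cid"                          # remove the temp container
}

# Only run when Docker, the placeholder image, and the source dir exist.
if command -v docker >/dev/null 2>&1 \
        && docker image inspect ubuntu:base >/dev/null 2>&1 \
        && [ -d ./files ]; then
    seed_image ubuntu:base ./files ubuntu:base-seeded
fi
```

Because docker create does not start the container, no process ever runs in it; the files are simply layered in before the commit.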

Tuesday, June 1, 2021
answered 7 Months ago

AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced. Check default profiles with:

# sudo apparmor_status

To use the docker-default profile on a container, run:

$ docker run --rm -it --name test-container --security-opt apparmor=docker-default image-name

You can disable it for a single container by adding:

--security-opt apparmor=unconfined

to the docker run command.
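To confirm which profile was actually applied, docker inspect exposes an AppArmorProfile field. A sketch (container and image names are placeholders):

```shell
# Sketch: run a container unconfined, then read back the profile Docker
# applied. "apparmor-test" and "ubuntu:base" are placeholder names.
applied_profile() {
    docker inspect --format '{{ .AppArmorProfile }}' "$1"
}

# Only run when Docker and the placeholder image are actually available.
if command -v docker >/dev/null 2>&1 \
        && docker image inspect ubuntu:base >/dev/null 2>&1; then
    docker run -d --rm --name apparmor-test \
        --security-opt apparmor=unconfined ubuntu:base sleep 30
    applied_profile apparmor-test    # expected to print: unconfined
    docker rm -f apparmor-test
fi
```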

To disable apparmor service, use:

# systemctl stop apparmor && systemctl disable apparmor

For Ubuntu 14, use:

# service apparmor stop
# update-rc.d -f apparmor remove

It’s recommended to set working AppArmor profiles for Docker rather than disabling it, especially for production setups.

Check this awesome google document on Securing Containers with AppArmor.

Tuesday, July 27, 2021
answered 5 Months ago

What you want to do is use a volume, and then mount that volume into whatever containers you want it to appear in.

Completely within Docker

You can do this completely inside of Docker.

Here is an example (stripped down; your real file would have much more than this in it, and the service names and images below are placeholders).

version: '3'

services:
  app:
    image: my-app-image      # placeholder image
    volumes:
      - asset-volume:/var/lib/assets
  worker:
    image: my-worker-image   # placeholder image
    volumes:
      - asset-volume:/var/lib/assets

volumes:
  asset-volume:


At the bottom is a single volume defined, named "asset-volume".

Then in each of your services, you tell Docker to mount that volume at a certain path. I show example paths inside the container, just adjust these to be whatever path you wish them to be in the container.

The volume is an independent entity not owned by any particular container. It is just mounted into each of them, and is shared. If one container modifies the contents, then they all see the changes.

Note that if you prefer only one can make changes, you can always mount the volume as read-only in some services, by adding :ro to the end of the volume string.

      - asset-volume:/var/lib/assets:ro
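The same sharing behaviour can be sketched with plain docker commands (the image name is a placeholder): a file written through the read-write mount is visible through the read-only one.

```shell
# Sketch: one named volume mounted into two containers, the second
# read-only. "ubuntu:base" is a placeholder image name.
demo_shared_volume() {
    docker volume create asset-volume
    docker run --rm -v asset-volume:/var/lib/assets ubuntu:base \
        sh -c 'echo hello > /var/lib/assets/greeting.txt'
    docker run --rm -v asset-volume:/var/lib/assets:ro ubuntu:base \
        cat /var/lib/assets/greeting.txt
    docker volume rm asset-volume
}

# Only run when Docker and the placeholder image are available.
if command -v docker >/dev/null 2>&1 \
        && docker image inspect ubuntu:base >/dev/null 2>&1; then
    demo_shared_volume
fi
```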

Using a host directory

Alternately you can use a directory on the host and mount that into the containers. This has the advantage of you being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).

It's the same, except you don't define a volume in Docker, instead mounting the external directory.

version: '3'

services:
  app:
    image: my-app-image      # placeholder image
    volumes:
      - ./assets:/var/lib/assets
  worker:
    image: my-worker-image   # placeholder image
    volumes:
      - ./assets:/var/lib/assets

In this example, the local directory "assets" is mounted into both containers using the relative path ./assets.

Using both depending on environment

You can also set it up for different dev and production environments. Put everything in docker-compose.yml except the volume mounts, then make two more files, e.g. docker-compose.dev.yml and docker-compose.prod.yml.

In these files, put only the minimum config needed to define the volume mount. We'll mix this with docker-compose.yml to get a final config.

Then use this. It will use the config from docker-compose.yml, and treat anything in the second file as an override or supplemental config:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d

And for production, just use the prod file instead of the dev file.

The idea here is to keep most of the config in docker-compose.yml, and only the minimum set of differences in the alternative files.


The named-volume override (e.g. docker-compose.prod.yml):

version: '3'

services:
  app:
    volumes:
      - asset-volume:/var/lib/assets

volumes:
  asset-volume:

The host-directory override (e.g. docker-compose.dev.yml):

version: '3'

services:
  app:
    volumes:
      - ./assets:/var/lib/assets
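Before bringing anything up, you can preview the merged result with docker-compose config; a sketch, assuming the override file is named docker-compose.dev.yml:

```shell
# Sketch: print the effective configuration after merging the base file
# with an override. The override filename is an assumed convention.
show_merged_config() {
    docker-compose -f docker-compose.yml -f "$1" config
}

# Only run when docker-compose and both files are present.
if command -v docker-compose >/dev/null 2>&1 \
        && [ -f docker-compose.yml ] \
        && [ -f docker-compose.dev.yml ]; then
    show_merged_config docker-compose.dev.yml
fi
```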
Sunday, August 8, 2021
answered 4 Months ago

UPDATE: The docker cp command now works both ways. See the docker cp documentation.




=======Original Answer ==============

Found the easiest way that works across storage drivers:

cd /proc/$(docker inspect --format "{{.State.Pid}}" <containerid>)/root

Have tested this on Fedora with Devicemapper as the storage driver and on Ubuntu with AUFS as the storage driver. Works for me in both the cases.
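As a concrete sketch of using that path, here is a hypothetical helper that copies a host file straight into a running container's filesystem (it needs root, and mycontainer is a placeholder name):

```shell
# Sketch: copy a host file into a running container through its
# /proc/<pid>/root view. Needs root; names are placeholders.
copy_via_proc() {
    src="$1"; container="$2"; dest="$3"
    pid=$(docker inspect --format '{{.State.Pid}}' "$container")
    cp "$src" "/proc/$pid/root$dest"
}

# Only attempt this when Docker and the target container exist.
if command -v docker >/dev/null 2>&1 \
        && docker inspect mycontainer >/dev/null 2>&1; then
    copy_via_proc foo.txt mycontainer /tmp/foo.txt
fi
```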

Sunday, August 22, 2021
Felix Lamouroux
answered 4 Months ago

The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container, it will be a regular folder, and it will create files in it. For you, it will also be a regular folder. Changes made by either side will be visible to the other.

docker run -v /a/local/dir:/a/dir/in/your/container image-name

Note though that you can run into permission issues that you will need to figure out separately.
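One common workaround for such permission problems is to run the container as your own UID/GID, so files created in the mount keep your ownership; a sketch (image name and paths are placeholders):

```shell
# Sketch: bind-mount a host directory and run as the invoking user.
# "ubuntu:base" and the paths are placeholders.
run_as_me() {
    docker run --rm \
        -v /a/local/dir:/a/dir/in/your/container \
        --user "$(id -u):$(id -g)" "$@"
}

# Only run when Docker, the placeholder image, and the host dir exist.
if command -v docker >/dev/null 2>&1 \
        && docker image inspect ubuntu:base >/dev/null 2>&1 \
        && [ -d /a/local/dir ]; then
    run_as_me ubuntu:base touch /a/dir/in/your/container/owned-by-me
fi
```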

Monday, August 23, 2021
answered 4 Months ago