Creates a Docker container that is restored from and backed up to a directory on S3. You can use it to run short-lived processes that read data from S3 and persist their results back to it.
For the simplest usage, you can just start the data container:
```shell
docker run -d --name my-data-container \
  elementar/s3-volume /data s3://mybucket/someprefix
```

This will download the data from the S3 location you specify into the
container's /data directory. When the container shuts down, the data will be
synced back to S3.
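For example, stopping the container gracefully triggers that final sync before exit. The container and bucket names below reuse the illustrative names from the examples in this README, and listing the bucket assumes you have aws-cli configured locally:

```shell
# Stop the container; the entrypoint performs a final sync of
# /data back to S3 before exiting.
docker stop my-data-container

# Inspect the synced files from your own machine.
aws s3 ls s3://mybucket/someprefix/
```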
To use the data from another container, you can use the --volumes-from option:
```shell
docker run -it --rm --volumes-from=my-data-container busybox ls -l /data
```

When the BACKUP_INTERVAL environment variable is set, a watcher process will
sync the /data directory to S3 on the interval you specify. The interval can
be specified in seconds, minutes, hours, or days (adding s, m, h, or d as
the suffix):
```shell
docker run -d --name my-data-container -e BACKUP_INTERVAL=2m \
  elementar/s3-volume /data s3://mybucket/someprefix
```

If you are running on EC2, IAM role credentials should just work. Otherwise, you can supply credential information using environment variables:
```shell
docker run -d --name my-data-container \
  -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... \
  elementar/s3-volume /data s3://mybucket/someprefix
```

Any environment variable available to the aws-cli command can be used. See
http://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for more
information.
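For instance, if your bucket lives outside your default region, you could pass the standard aws-cli region variable through to the container (the region value here is illustrative):

```shell
docker run -d --name my-data-container \
  -e AWS_DEFAULT_REGION=eu-west-1 \
  -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... \
  elementar/s3-volume /data s3://mybucket/someprefix
```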
If you are using an S3-compatible service (such as Oracle OCI Object Storage), you may want to set the service's endpoint URL:
```shell
docker run -d --name my-data-container -e ENDPOINT_URL=... \
  elementar/s3-volume /data s3://mybucket/someprefix
```

A final sync will always be performed on container shutdown. A sync can also be
forced by sending the container the USR1 signal:

```shell
docker kill --signal=USR1 my-data-container
```

The first time the container is run, it will fetch the contents of the S3
location to initialize the /data directory. If you want to force an initial
sync again, run the container again with the --force-restore option:
```shell
docker run -d --name my-data-container \
  elementar/s3-volume --force-restore /data s3://mybucket/someprefix
```

By default, files deleted from your local file system are also deleted remotely on sync. If you wish to turn this off, set the S3_SYNC_FLAGS environment variable to an empty string:

```shell
docker run -d -e S3_SYNC_FLAGS="" elementar/s3-volume /data s3://mybucket/someprefix
```

Most of the time, you will use this image to sync data for another container.
You can use docker-compose for that:
```yaml
# docker-compose.yaml
version: "2"

volumes:
  s3data:
    driver: local

services:
  s3vol:
    image: elementar/s3-volume
    command: /data s3://mybucket/someprefix
    volumes:
      - s3data:/data
  db:
    image: postgres
    volumes:
      - s3data:/var/lib/postgresql/data
```

- Fork it!
- Create your feature branch: `git checkout -b my-new-feature`
- Commit your changes: `git commit -am 'Add some feature'`
- Push to the branch: `git push origin my-new-feature`
- Submit a pull request :D
- Original Developer - Dave Newman (@whatupdave)
- Current Maintainer - Fábio Batista (@fabiob)
This repository is released under the MIT license: