Roots Discourse

Backing Up Trellis Sites to an S3 Bucket

Originally published at:

Add backup shell script

Create site/scripts/ with the following contents:

#!/bin/bash

eval $(cat ../.env | sed 's/^/export /')
export AWS_CONFIG_FILE="/home/web/.aws/config"

SITE="${DB_USER//_/.}"
ENVIRONMENT="$WP_ENV"
TIMESTAMP=$(env TZ=America/Denver date +%Y-%m-%d-%H%M)
ARCHIVE_PATH=/tmp/$SITE-$ENVIRONMENT-$TIMESTAMP
ARCHIVE_FILENAME=$SITE-$ENVIRONMENT-$TIMESTAMP.tar.gz

mkdir -p $ARCHIVE_PATH &&
cd /srv/www/$SITE/current &&
wp db export $ARCHIVE_PATH/db.sql &&
rsync -kavzP --exclude web/wp/ --exclude web/wp-config.php /srv/www/$SITE/current/web $ARCHIVE_PATH &&
rsync -kavzP /srv/www/$SITE/shared/uploads $ARCHIVE_PATH/web/app &&
tar…
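As an aside, the SITE="${DB_USER//_/.}" line relies on bash pattern substitution: assuming the Trellis convention where the DB user is derived from the site key with dots turned into underscores, it recovers the domain. A quick sketch (the value is made up):

```shell
# Hypothetical DB user, following the Trellis site-key convention
DB_USER="example_com"

# ${var//pattern/replacement} replaces every match, so all
# underscores become dots again
SITE="${DB_USER//_/.}"

echo "$SITE"   # -> example.com
```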


Thanks, @benword. You rock.


I believe that the trellis/group_vars/all/vault.yml parameters should be:

aws_access_key_id: xxxxxxx
aws_secret_access_key: "xxxxxxx"

This way the dstil aws-cli template will be able to grab them:


output = {{ aws_output_format }}
region = {{ aws_region }}
aws_access_key_id = {{ aws_access_key_id }}
aws_secret_access_key = {{ aws_secret_access_key }}

(I’m not sure if region matters, and not sure where the first two parameters would belong, perhaps as group_vars/production/wordpress_sites.yml env variables?)
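For what it's worth, a sketch of where the two non-secret parameters could live; the file location and values here are assumptions, not Trellis defaults:

```yaml
# group_vars/production/main.yml (assumed location; any group_vars
# file loaded for the target hosts should work)
aws_region: us-east-1
aws_output_format: json
```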

Shell Script

Name of script referenced in cron job is, not

Can we note that the S3 bucket referenced in the third line of the script must be modified to match an S3 bucket that:

  1. Has been created manually by the user, either via the AWS interface or some other tool like awscli or s3cmd.
  2. Has a name that is unique among all S3 buckets, since they all share the same namespace.
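Since the namespace is global, it can help to sanity-check the proposed name before creating the bucket. A rough sketch assuming bash and GNU grep; the bucket name is a placeholder, and the aws s3 mb call is only shown in a comment:

```shell
# S3 bucket names: 3-63 characters, lowercase letters, digits, dots,
# and hyphens, starting and ending with a letter or digit.
valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

if valid_bucket_name "your-unique-namespace-site-backups"; then
  echo "name looks valid"
  # then create it, e.g.: aws s3 mb s3://your-unique-namespace-site-backups
else
  echo "invalid bucket name"
fi
```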

It would also be nice to remind folks that if their server is already provisioned, they can save time by running just the modified tasks:

ansible-playbook server.yml -e env=production --tags "wordpress-setup, aws-cli"

Two more steps:

  1. Add one parameter to group_vars/all/users.yml: aws_cli_user: web

Otherwise, by default, aws-cli credentials are set up for the admin user, while the cron script is owned and run by web:www-data.

  2. Permissions on need to be 755, aka -rwxr-xr-x, aka chmod +x, and not 644, or / won't work.

The following task was also added to wordpress-setup/tasks/main.yml:

- name: Update '' permissions
  file:
    path: "{{ www_root }}/{{ item.key }}/{{ item.value.current_path | default('current') }}/scripts/"
    owner: "{{ web_user }}"
    group: "{{ web_group }}"
    mode: 0755
  with_dict: "{{ wordpress_sites }}"
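Once provisioned, the result can be spot-checked on the server; the site path and script filename below are hypothetical, since the thread leaves the actual name out. Assuming GNU coreutils, the effect of mode 0755 looks like:

```shell
# On the server you would check the deployed script, e.g.:
#   stat -c '%a %U:%G' /srv/www/example.com/current/scripts/backup.sh
# (path and filename are hypothetical)

# Demonstrating what mode 0755 produces:
touch demo-backup.sh
chmod 0755 demo-backup.sh
stat -c '%a' demo-backup.sh   # -> 755
rm -f demo-backup.sh
```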


On the production server:

  • Read the output from the ansible provisioning process
  • Confirm that the credentials from group_vars/all/vault.yml exist in /home/web/.aws/config
  • As user web, run aws s3 ls s3://your-unique-namespace-site-backups
  • As user web, run aws s3 cp some_arbitrary_file s3://your-unique-namespace-site-backups
  • As user web, run bash /srv/www/ manually
  • Confirm that the file /etc/cron.d/backup-nightly-example_com exists

Its contents should be (cron runs this daily at 12:00 server time):

0 12 * * * web cd /srv/www/ && ./ > /dev/null 2>&1


To debug the script, run ./ from its directory:

cd /srv/www/ && ./


rather than:

bash /srv/www/

Otherwise, line 2 throws an error:

$ bash /srv/www/ 
cat: ../.env: No such file or directory
/srv/www/ line 10: cd: /srv/www//current: No such file or directory
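The root cause is that ../.env in the script resolves against the current working directory, not the script's own location. A minimal reproduction, with made-up paths:

```shell
# Set up a fake site layout
mkdir -p demo-site/scripts
echo 'WP_ENV=production' > demo-site/.env

# A script that, like the backup script, reads ../.env relatively
cat > demo-site/scripts/read-env.sh <<'EOF'
#!/bin/bash
cat ../.env   # resolved against $PWD, not this file's directory
EOF
chmod +x demo-site/scripts/read-env.sh

# Works when run from the script's directory:
( cd demo-site/scripts && ./read-env.sh )   # -> WP_ENV=production

# Fails when invoked from anywhere else:
bash demo-site/scripts/read-env.sh 2>/dev/null || echo "no ../.env relative to \$PWD"
```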

Also, I suggest adding this to the guide:

Change vendor/roles/aws-cli/defaults/main.yml from:

aws_access_key_id: 'YOUR_ACCESS_KEY_ID'
aws_secret_access_key: 'YOUR_SECRET_ACCESS_KEY'

to:

aws_access_key_id: '{{ vault_aws_access_key_id }}'
aws_secret_access_key: '{{ vault_aws_secret_access_key }}'

I believe these lines won’t make a difference as they are default values that get replaced by the values exported to the environment by the bash script:

export AWS_CONFIG_FILE="/home/web/.aws/config"

Thanks for sharing your findings, they're very helpful!
Something I ran across: when following step 2, the permissions do get updated. However, on a new deploy the file permissions get written as 664; the file modification only occurs on a server provision.

This means that the permissions-update task for the backup script should also be triggered during the deploy playbook, if I understand it correctly.

I then added the following task in roles/deploy/hooks/build-after.yml to make sure the file was set to 0755:

- name: Update '' permissions
  file:
    path: "{{ deploy_helper.new_release_path }}/scripts/"
    mode: 0755
  with_dict: "{{ wordpress_sites }}"

And from then on, after each deploy the update worked as expected.
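For context: older Trellis releases wired that hook file in via the deploy role's defaults, so if deploy_build_after has been overridden, make sure it still points at the file. This is my recollection of the old default and may not match newer Trellis layouts (newer versions use a deploy-hooks/ directory at the project root):

```yaml
# roles/deploy/defaults/main.yml (older Trellis; verify against your version)
deploy_build_after: "{{ playbook_dir }}/roles/deploy/hooks/build-after.yml"
```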

NOTE: requirements.yml is now galaxy.yml.

So ansible-galaxy install -r galaxy.yml