The quick fix approach
@tobyl It sounds like you’re running the server.yml playbook separately, once for group_vars/staging, then again for group_vars/production. You could make that work by changing the “site key” in your group_vars/staging to something different, like staging.example.com. That way, when Trellis creates nginx conf files using that site key name, you’d in fact have two separate conf files.
You could do that, but then you’d have to always run server.yml twice (once per environment). Some minor adjustments to Trellis would enable a single run of server.yml to handle both staging and production.
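For reference, that quick fix amounts to just renaming the staging site key — a sketch, assuming both environments currently share the key `example.com`:

```
# group_vars/staging -- sketch of the "quick fix"
# Rename the top-level site key so it no longer collides with the
# production key when Trellis names the nginx conf files.
wordpress_sites:
  staging.example.com:   # was: example.com
    site_hosts:
      - staging.example.com
    # ...remaining site settings unchanged
```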
A better approach
Trellis uses Ansible’s ability to run playbooks on a group of hosts or servers. Trellis groups servers by environment. However, it sounds like you’d like to group your environments within a server. Here’s one way you could adjust Trellis, grouping your staging and production sites under the domain name of the production site: “example.com”.
- Inventory
  - Save a copy of `hosts/production` as `hosts/my-inventory-name`, e.g., `hosts/example.com`. Ansible always uses an inventory file. Trellis happens to have different inventory files for different hosts, so it groups the inventory files in a `hosts` directory.
  - Edit your new `hosts/example.com` inventory file. Add your server IP under `[web]` and change `[production:children]` to `[example.com:children]`.
- Group variables
  - Save a copy of `group_vars/production` as `group_vars/my-group-name`, e.g., `group_vars/example.com`.
  - Copy the content of `wordpress_sites` from `group_vars/staging` and append it onto the `wordpress_sites` in `group_vars/example.com` (see example below).
  - Rename the site keys within `wordpress_sites`, e.g., to `production` and `staging` (see example below). You could name them `example.com` and `staging.example.com`, but you’ll see later why that might make for funny-looking commands.
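After those inventory edits, your `hosts/example.com` file would look something like this (a sketch — substitute your server’s actual IP):

```
# hosts/example.com
[web]
xxx.xxx.xxx.xxx

# was: [production:children]
[example.com:children]
web
```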
Your `wordpress_sites` in `group_vars/example.com` should now look something like this:
wordpress_sites:
production:
site_hosts:
- example.com
⋮
env:
wp_home: http://example.com
wp_siteurl: http://example.com/wp
wp_env: production
⋮
staging:
site_hosts:
- staging.example.com
⋮
env:
wp_home: http://staging.example.com
wp_siteurl: http://staging.example.com/wp
wp_env: staging
⋮
You’ll notice that this is just an implementation of what @swalkinshaw suggested above.
Provisioning
You only need to run server.yml once:
# ansible-playbook -i hosts/my-inventory-name server.yml
ansible-playbook -i hosts/example.com server.yml
Deployment
Deploy the production and staging sites separately:
# Production
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com production
# Staging
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com staging
To see if it worked, you can of course navigate to example.com and staging.example.com. Also run…
ssh web@example.com "ls /srv/www"
# production staging
ssh web@example.com "ls /etc/nginx/sites-enabled"
# no-default.conf production.conf staging.conf
Site key names
If you had named your site keys (in wordpress_sites) to example.com and staging.example.com (instead of production and staging), then the deploy commands would still work, but would look a little funny:
# Production
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com example.com
# Staging
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com staging.example.com
Development
Note that this grouping of environments includes only staging and production. To include development, with its differences such as `display_errors = On` in php.ini, you’d have to do a little more customization.
Consider using separate servers for staging and production. Quoting from above: