Staging and Production on same VPS

I would wink “yes” if you’re not dealing with high-liability data, but either way my official CYA statement would be “no.”

If you are committing credentials unencrypted, it does seem safer to do so on your own server, so that your credentials aren’t exposed when a big-target 3rd-party git host is hacked or has a malicious employee.

But, as you noted, if your server gets hacked, or you have an implementation glitch, it really would be better to have your credentials encrypted within the repo.

I’m guessing the hesitation I feel about implementing Ansible Vault could simply be because it is less familiar to me and is less discussed in Roots circles. I doubt it would be any more taxing to implement, or any less important, than other more popular components of the Roots setup. We should probably have spent our time setting it up instead of typing all this deliberation.
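(For anyone in the same boat: Vault usage is fairly simple in practice. The sketch below uses hypothetical file paths and variable names, not necessarily Trellis conventions, but the pattern is the same: move secrets into a file, encrypt that file, and reference its variables from your plain group_vars.)

```yaml
# group_vars/production/vault.yml (hypothetical path and variable names)
# Encrypt in place with: ansible-vault encrypt group_vars/production/vault.yml
vault_mysql_root_password: example_root_pw
vault_db_password: example_db_pw
```

Plain variables then reference the vaulted ones, e.g. `db_password: "{{ vault_db_password }}"`, and you add `--ask-vault-pass` (or `--vault-password-file`) to your ansible-playbook commands.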

1 Like

Please make sure you back up that DO instance regularly :smile:

2 Likes

Yes, okay I concur.

I’m still going to use GitLab for the flexibility, but I’m going to encrypt as well.

1 Like

…and backup regularly :date:

@fullyint - You just did in one paragraph what hours of googling couldn’t accomplish… Excellent explanation!

More on the original post topic - I’d like to have staging and production on one DO droplet (on the basis that the staging site will be hidden from all except clients and project staff, so bandwidth/resource usage will be limited).

I have my folders set up identically to the Trellis example project - but when I provision and deploy, the staging and production sites seem to conflict with each other. It looks as though Nginx only ever has one ‘site-name.conf’ - is there a way to alter the group_vars so that both can co-exist?

Thanks!

The quick fix approach
@tobyl It sounds like you’re running the server.yml playbook separately, once for group_vars/staging, then again for group_vars/production. You could possibly make that work if you change the “site key” in your group_vars/staging to something different, like staging.example.com. That way, when trellis creates nginx conf files using that site key name, you’ll in fact have two separate conf files.

You could do that, but then you’d have to always run server.yml twice (once per environment). Some minor adjustments to Trellis would enable a single run of server.yml to handle both staging and production.

A better approach
Trellis uses Ansible’s ability to run playbooks on a group of hosts or servers. Trellis groups servers by environment. However, it sounds like you’d like to group your environments within a server. Here’s one way you could adjust Trellis, grouping your staging and production sites under the domain name of the production site: “example.com”.

  • Inventory.
    • Save a copy of hosts/production as hosts/my-inventory-name, e.g., hosts/example.com. Ansible always uses an inventory file. Trellis happens to have different inventory files for different hosts, so it groups the inventory files in a hosts directory.
    • Edit your new hosts/example.com inventory file. Add your server IP under [web] and change [production:children] to [example.com:children].
  • Group variables.
    • Save a copy of group_vars/production as group_vars/my-group-name, e.g., group_vars/example.com
    • Copy the content of wordpress_sites from group_vars/staging and append it onto the wordpress_sites in group_vars/example.com. (see example below)
    • Rename the site keys within wordpress_sites, e.g., to production and staging (see example below). You could name them example.com and staging.example.com, but you’ll see later why that might make for funny-looking commands.
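Putting the inventory bullet in concrete terms, the edited hosts/example.com file would look roughly like this (the IP address is a placeholder):

```ini
# hosts/example.com (copied from hosts/production, then edited)
[web]
192.0.2.10

[example.com:children]
web
```

The only structural change from hosts/production is the group name on the :children line.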

Your wordpress_sites in group_vars/example.com should now look about like this:

wordpress_sites:
  production:
    site_hosts:
      - example.com
    ⋮
    env:
      wp_home: http://example.com
      wp_siteurl: http://example.com/wp
      wp_env: production
      ⋮
  staging:
    site_hosts:
      - staging.example.com
    ⋮
    env:
      wp_home: http://staging.example.com
      wp_siteurl: http://staging.example.com/wp
      wp_env: staging
      ⋮

You’ll notice that this is just an implementation of what @swalkinshaw suggested above.

Provisioning
You only need to run server.yml once:

# ansible-playbook -i hosts/my-inventory-name server.yml
ansible-playbook -i hosts/example.com server.yml

Deployment
Deploy the production and staging sites separately:

# Production
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com production

# Staging
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com staging

To see if it worked, you can of course navigate to example.com and staging.example.com. Also run…

ssh web@example.com "ls /srv/www"
# production staging
ssh web@example.com "ls /etc/nginx/sites-enabled"
# no-default.conf production.conf staging.conf

Site key names
If you had named your site keys (in wordpress_sites) example.com and staging.example.com (instead of production and staging), the deploy commands would still work, but would look a little funny:

# Production
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com example.com

# Staging
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com staging.example.com

Development
Note that this grouping of environments includes only staging and production. To include development, with its differences such as display_errors = On in php.ini, you’d have to do a little more customization.

Consider using separate servers for staging and production. Quoting from above:

12 Likes

Woah… thank you so much for the lengthy response @fullyint - I’m going to give this a try tomorrow. Long term I’m considering separate servers for all the reasons you mentioned (and my own experience). In the meantime for smaller sites though, this seems a little more streamlined.

Thanks again!

Apologies for the delay in responding, the above recommendations worked perfectly @fullyint - thank you so much!

Toby.

2 Likes

@fullyint, I tried following this until I realized I’m using the version of Trellis that was updated back in August – all the environments are split into folders now. I went on and followed through with your instructions but I’m getting a “Missing become password” when provisioning. Do you know if this method and the newest version of Trellis will work? Or is this error something completely different? I believe I’m pretty close to getting my server set up and deployed, so any direction will be greatly appreciated.

@coreybruyere Great job getting it all set up. It should still work and you’ll find an easy solution. From the Trellis security docs:

With root login disabled, the admin_user will need to run commands using sudo with a password, so you will need to add the option --ask-become-pass when running server.yml.

The short version of the flag is -K (capital K), so:

ansible-playbook server.yml -i hosts/production -K

Hi folks,

Re-opening an old post, but I’m struggling with this topic. I want to set up production and staging on the same server. Why? Simply because it will simplify my life with my clients. I’m currently migrating some of my clients from an old, ugly-code-nightmare, custom e-commerce framework to WordPress and WooCommerce using Trellis, Bedrock and Sage.

The thing is that these clients own their VPS and aren’t willing to buy one more for staging. And I can’t disagree with them, because I don’t see any reason not to have both on the same VPS except preference… Currently they have both on the same server, each with its own PHP config and database, one for prod and the other prefixed for staging… When not needed, staging is simply set to maintenance mode and isn’t visible to search engines anyway. Local dev is currently done with a MAMP setup.

So… has anyone managed to do this successfully?
And, if so, would you care to share the files to help me jump-start those projects?

Michel

I’ve done this once using Trellis and the guidance in Phil’s post up above.

@paul_tibbetts,

I managed, on one of my own domains, to set up and deploy using two VPSs (in fact I use Elastic Host containers instead). Now I’m trying to set this up on one container…

So, in trellis/group_vars/yolomasso.ca

wordpress_sites:
  production:
    site_hosts:
      - yolomasso.ca
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    repo: MyRepoURL # replace with your Git repo URL
    repo_subtree_path: site # relative path to your Bedrock/WP directory in your repo
    branch: master
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: manual
      cert: MYCertPath
      key: MySSLKey
    cache:
      enabled: true
  staging:
    site_hosts:
      - staging.yolomasso.ca
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    repo: MyRepoURL # replace with your Git repo URL
    repo_subtree_path: site # relative path to your Bedrock/WP directory in your repo
    branch: staging
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: manual
      cert: MYCertPath
      key: MySSLKey
    cache:
      enabled: false

but in trellis/hosts/yolomasso.ca copied from trellis/hosts/production

[production]
??????

[web]
107.6.4.102

I’m stuck… I do not see [production:children]

My setup now is :

trellis/group_vars/yolomasso.ca/wordpress_sites.yml

wordpress_sites:
  production:
    site_hosts:
      - yolomasso.ca
      - 107.6.4.102
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    repo: gogs@git.michelchouinard.ca:QUENUA/yolomasso.ca.git # replace with your Git repo URL
    repo_subtree_path: site # relative path to your Bedrock/WP directory in your repo
    branch: master
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: manual
      cert: /Users/michelchouinard/projects/ssl/yolomasso.ca/yolomasso.ca.ssl.bundle.crt
      key: /Users/michelchouinard/projects/ssl/yolomasso.ca/yolomasso.ca.ssl.key
    cache:
      enabled: true
    env:
      wp_home: https://yolomasso.ca
      wp_siteurl: https://yolomasso.ca/wp
      wp_env: production
  staging:
    site_hosts:
      - staging.yolomasso.ca
      - 107.6.4.102
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    repo: gogs@git.michelchouinard.ca:QUENUA/yolomasso.ca.git # replace with your Git repo URL
    repo_subtree_path: site # relative path to your Bedrock/WP directory in your repo
    branch: staging
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: manual
      cert: /Users/michelchouinard/projects/ssl/yolomasso.ca/yolomasso.ca.ssl.bundle.crt
      key: /Users/michelchouinard/projects/ssl/yolomasso.ca/yolomasso.ca.ssl.key
    cache:
      enabled: false
    env:
      wp_home: https://staging.yolomasso.ca
      wp_siteurl: https://staging.yolomasso.ca/wp
      wp_env: staging

trellis/hosts/yolomasso.ca

[production]
107.6.4.102

[staging]
107.6.4.102

[web]
107.6.4.102

But here is the result when I run ansible-playbook -i hosts/yolomasso.ca server.yml

PLAY [Ensure necessary variables are defined] **********************************

TASK [Ensure environment is defined] *******************************************
System info:
  Ansible 2.0.2.0; Darwin
  Trellis 0.9.7: April 10th, 2016
---------------------------------------------------
Environment missing. Use `-e` to define `env`:
ansible-playbook server.yml -e env=<environment>

fatal: [localhost]: FAILED! => {"changed": false, "failed": true}

NO MORE HOSTS LEFT *************************************************************
        to retry, use: --limit @server.retry

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1

@MichelChouinard did you read the error message? Trellis is telling you exactly what’s wrong.

You ran this command: ansible-playbook -i hosts/yolomasso.ca server.yml

Environment missing. Use -e to define env:
ansible-playbook server.yml -e env=

I’m following @fullyint’s earlier post and the command is the same, so I’m missing something

Once again, our helpful error message is telling you the exact command you need to run. Things have changed since @fullyint’s old post: newer Trellis versions require the environment to be passed explicitly, e.g. ansible-playbook server.yml -e env=production.

This thread is being locked since it has outdated information and we do not recommend running staging and production on the same server.

2 Likes