Roots Discourse

Staging and Production on same VPS

Your example group_vars/production looks correct.

hosts/production would just contain a single host/IP for the staging/production server since they’re the same. If you aren’t using other environments/stages, then yeah you can just remove them.
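For reference, a minimal hosts/production could look about like this, based on the inventory structure discussed later in this thread (the IP is a placeholder for your own server's):

```ini
# hosts/production -- hypothetical sketch
# [production:children] makes "production" a group containing the [web] hosts
[production:children]
web

[web]
203.0.113.10
```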

1 Like

I tried to setup staging and production on the same server, but I got this error as it was going through the playbook.
TASK: [fail2ban | ensure fail2ban is configured] ******************************
fatal: [104.236.209.224] => Failed to template ‘*d5?0m+q&4&+s ?-4PT4y-a&YqJ|G%/jshMcS6DFbB:fN#gq7[ID){{!l>c0KR’);: template error while templating string: unexpected char u’!’ at 58

FATAL: all hosts have already failed – aborting

Is it not possible to setup staging and production on the same server?

Doubt that has anything to do with it.

Someone else posted about this error before: Trying to deploy - FATAL: all hosts have already failed -- aborting

I’ve never seen it and not sure what would cause it to be honest.

As @swalkinshaw mentioned, I doubt the template error is related to having staging and production on the same server, and this discourse has a few random unresolved jinja templating threads. Closest thing to a resolution I found was here: “Rebasing the latest [trellis] and doing a provisioning solves the problem of fail2ban.”

I’d make sure your ansible is on the latest (1.9.2 as of this writing) to handle potential jinja templating issues, git pull/rebase the latest trellis, then spin up a brand new server.
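As a rough sketch, the update steps might look like this (assuming ansible was installed via pip and that you've set up an "upstream" remote pointing at roots/trellis; adjust to your setup):

```shell
# Check what you're running; 1.9.2 was latest as of this thread
ansible --version

# Upgrade ansible (pip-based install assumed)
pip install --upgrade ansible

# Pull the latest trellis from the official repo
git pull --rebase upstream master
```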

I haven’t tried having staging and production on the same server, but I see no reason it wouldn’t work using the setup @swalkinshaw mentioned above.

But for an extra $5/mo at DO, I enjoy having a separate server for staging so I can blow it up, reprovision, and go nuts with it all the time, while leaving my production server untouched. It’s totally worth it to me. I’d recommend separate servers if you’re doing that kind of work with staging.

Of course, there are plenty of use cases in which you wouldn’t need staging on a separate server. In such cases, on the occasion you need to be able to test server setup without disturbing your production site, you could just spin up a temporary disposable testing staging server on the side.

2 Likes

I am new to git workflows. If I start fresh by git cloning Trellis, what should I do prior to editing files (like group_vars/…) so that I'll be able to pull or rebase the changes that the Trellis team makes in the future?

Do I delete the .git folder and git init? Fork? I am new and trying to learn the workflows.

FYI, I'd like to still push my changes to a private Bitbucket account so that I have a private repository for building a team in the future, but still be able to pull in the changes from the Trellis team.

Awesome that you're setting up a good workflow! The good news is that these concepts are global – not roots/trellis specific – so you’ll find a ton of info by googling around.

The basic idea is…

  1. clone trellis repo to your dev machine (I’d leave the .git folder there)
  2. add bitbucket as a “remote” (e.g., named “origin”)
  3. add the official roots/trellis as a “remote” (e.g., named “upstream”)

Now you can push/pull changes to the “origin” remote (your private bitbucket repo). You can pull updates from the “upstream” remote (the roots official repo).

As changes are made to the official “upstream” repo, I’d just “merge” them so your git history shows a timeline with your changes intermingled with the official changes over time (i.e., providing temporal context). Alternatively, you could “rebase” your changes on top of “upstream” master, making all your customizations appear lumped together at the end of the git history.

If I planned making customizations over the long term, I’d use the merge strategy, so that the git history would show my changes in temporal context. If I were doing just one time-delimited round of customizations, then I’d rebase them on top of “upstream” master. Others may do differently.
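The two strategies can be sketched with plain git commands. Here's a self-contained simulation using local repos in a temp directory (no network; "upstream" below is just a stand-in for the official roots/trellis repo, and the merge path is shown):

```shell
#!/bin/sh
# Hedged sketch: merge strategy against an "upstream" remote, simulated locally.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the official upstream repo, with one commit
git init -q upstream
git -C upstream -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "upstream: initial"

# Your private copy, with one local customization
git clone -q "$tmp/upstream" my-trellis
cd my-trellis
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "local: customization"

# Upstream moves ahead while you work
git -C "$tmp/upstream" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "upstream: update"
git fetch -q origin
branch=$(git rev-parse --abbrev-ref HEAD)

# Merge: your history keeps your changes in temporal context
git -c user.email=you@example.com -c user.name=you \
    merge -q --no-edit "origin/$branch"
git rev-list --count HEAD   # 4 commits: 2 upstream, 1 local, 1 merge
```

Swapping the final merge for `git rebase "origin/$branch"` would instead replay your customization on top of upstream's latest commit, lumping your changes together at the tip of history.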

This approach starts at bitbucket, automatically adding the bitbucket “origin” remote.

Create private fork. Log in to bitbucket then go to https://bitbucket.org/repo/import

Create local copy. Edit with your bitbucket username.

  • git clone git@bitbucket.org:username/my-trellis.git
  • cd my-trellis
  • git remote add upstream https://github.com/roots/trellis.git
  • git remote -v

Then start editing and customizing.

6 Likes

Dang @fullyint, you just helped me to see the light. That is totally awesome. Thanks!

…and on a related note, do you trust Bitbucket with your ansible configs?

If so, are you using encryption?

I’ve setup my own Gitlab instance on DO for this stuff.

1 Like

@treb0r With my limited experience on the matter of handling app credentials in or out of repos, I could only recommend doing more research, starting with @swalkinshaw’s post here, which discusses common issues and mentions the trellis Passwords wiki. I’m guessing setting up Ansible Vault is the way to go. I just haven’t looked into it enough yet.

I’ve lagged on setting up a mature approach to handling credentials. Right now I essentially gitignore the files that have credentials so they aren’t committed to the repo. It works ok because I have other backup mechanisms (so I don’t need credentials “backed up” in an online repo) and because I’m not in a position of having to share/communicate the credentials. I could maybe relax a bit and just commit all files unencrypted to my private repos because I’m the only one accessing the repos and my servers don’t deal with high-liability data.

I’d love to hear what you come up with.

Yes, I agree that Ansible Vault is probably the way to go, but what with learning about Ansible and all it just seems like one step too many, at the moment anyway.
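For anyone weighing it, the Vault basics are only a few commands (a sketch; the file paths are examples from the Trellis layout, and this assumes ansible is already installed):

```shell
# Encrypt a vars file in place; you'll be prompted to set a vault password
ansible-vault encrypt group_vars/production

# Edit it later without leaving a decrypted copy on disk
ansible-vault edit group_vars/production

# Run playbooks as usual, supplying the vault password
ansible-playbook -i hosts/production server.yml --ask-vault-pass
```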

For now I have the site repo for each project on bitbucket where I can take advantage of the unlimited free private repos while giving the team write access to all of the project site repos.

As mentioned above , I also have my own Gitlab instance running on a DO droplet. This is just for storing the Ansible repo for each project and is more private (as I am the only one with root access). This gives me flexibility to grant read only access to the team, if necessary, although I prefer to ask them to just clone the Trellis repo and configure development for each project. I usually handle the deploys myself as I like to check everything locally first :wink:.

Assuming the droplet doesn’t get hacked, it’s both private and secure.

Does this sound reasonably sane?

I would wink “yes” if you’re not dealing with high-liability data, but either way my official cya statement would be “no.”

If you are committing credentials unencrypted, it does seem safer to do so on your own server so that your credentials aren’t vulnerable like they would be when a big-target 3rd party git host is hacked or has a malicious employee.

But, as you noted, if your server gets hacked, or you have an implementation glitch, it really would be better to have your credentials encrypted within the repo.

I’m guessing the hesitation I feel to implement Ansible Vault could be simply because it is less familiar to me and is less discussed in roots circles. I doubt it would be any more taxing to implement, or that it's any less important than other more popular components of the roots setup. We should probably have used our time setting it up instead of typing a bunch in deliberation.

1 Like

Please make sure you back up that DO instance regularly :smile:

2 Likes

Yes, okay I concur.

I’m still going to use Gitlab for the flexibility, but I’m going to encrypt also.

1 Like

…and backup regularly :date:

@fullyint - You just did in one paragraph what hours of googling couldn’t accomplish. Excellent explanation!

More on the original post topic - I’d like to have staging and production on one DO droplet (on the basis that the staging site will be hidden to all except clients and project staff, so usage/bandwidth/resource usage will be limited).

I have my folders setup identically to the trellis example project - but when I provision and deploy, the staging and production site seem to conflict with each other. It looks as though Nginx only ever has one ‘site-name.conf’, but is there a way to alter the group_vars so that both can co-exist?

Thanks!

The quick fix approach
@tobyl It sounds like you’re running the server.yml playbook separately, once for group_vars/staging, then again for group_vars/production. You could possibly make that work if you change the “site key” in your group_vars/staging to something different, like staging.example.com. That way, when trellis creates nginx conf files using that site key name, you’ll in fact have two separate conf files.

You could do that, but then you’d have to always run server.yml twice (once per environment). Some minor adjustments to Trellis would enable a single run of server.yml to handle both staging and production.
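Concretely, the quick fix amounts to something like this in group_vars/staging (a hypothetical sketch; only the top-level site key changes):

```yaml
# group_vars/staging -- renaming the site key means trellis generates
# staging.example.com.conf instead of overwriting production's nginx conf
wordpress_sites:
  staging.example.com:
    site_hosts:
      - staging.example.com
    ⋮
```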

A better approach
Trellis uses Ansible’s ability to run playbooks on a group of hosts or servers. Trellis groups servers by environment. However, it sounds like you’d like to group your environments within a server. Here’s one way you could adjust Trellis, grouping your staging and production sites under the domain name of the production site: “example.com”.

  • Inventory.
    • Save a copy of hosts/production as hosts/my-inventory-name, e.g., hosts/example.com. Ansible always uses an inventory file. Trellis happens to have different inventory files for different hosts, so it groups the inventory files in a hosts directory.
    • Edit your new hosts/example.com inventory file. Add your server IP under [web] and change [production:children] to [example.com:children].
  • Group variables.
    • Save a copy of group_vars/production as group_vars/my-group-name, e.g., group_vars/example.com
    • Copy the content of wordpress_sites from group_vars/staging and append it onto the wordpress_sites in group_vars/example.com. (see example below)
    • Rename the site keys within wordpress_sites, e.g., to production and staging (see example below). You could name them example.com and staging.example.com, but you’ll see later why that might make for funny-looking commands.

Your wordpress_sites in group_vars/example.com should now look about like this:

wordpress_sites:
  production:
    site_hosts:
      - example.com
    ⋮
    env:
      wp_home: http://example.com
      wp_siteurl: http://example.com/wp
      wp_env: production
      ⋮
  staging:
    site_hosts:
      - staging.example.com
    ⋮
    env:
      wp_home: http://staging.example.com
      wp_siteurl: http://staging.example.com/wp
      wp_env: staging
      ⋮

You’ll notice that this is just an implementation of what @swalkinshaw suggested above.

Provisioning
You only need to run server.yml once:

# ansible-playbook -i hosts/my-inventory-name server.yml
ansible-playbook -i hosts/example.com server.yml

Deployment
Deploy the production and staging sites separately:

# Production
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com production

# Staging
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com staging

To see if it worked, you can of course navigate to example.com and staging.example.com. Also run…

ssh web@example.com "ls /srv/www"
# production staging
ssh web@example.com "ls /etc/nginx/sites-enabled"
# no-default.conf production.conf staging.conf

Site key names
If you had named your site keys (in wordpress_sites) to example.com and staging.example.com (instead of production and staging), then the deploy commands would still work, but would look a little funny:

# Production
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com example.com

# Staging
# ./deploy.sh my-inventory-name my-site-key
./deploy.sh example.com staging.example.com

Development. Note that this grouping of environments includes only staging and production. To include development, with its differences such as display_errors = On in php.ini, you’d have to do a little more customization.

Consider using separate servers for staging and production, for the reasons given earlier in this thread: with a dedicated staging server you can blow it up, reprovision, and go nuts with it while leaving production untouched.

12 Likes

Woah… thank you so much for the lengthy response @fullyint - I’m going to give this a try tomorrow. Long term I’m considering separate servers for all the reasons you mentioned (and my own experience). In the meantime for smaller sites though, this seems a little more streamlined.

Thanks again!

Apologies for the delay in responding, the above recommendations worked perfectly @fullyint - thank you so much!

Toby.

2 Likes

@fullyint, I tried following this until I realized I’m using the version of Trellis that was updated back in August – all the environments are split into folders now. I went on and followed through with your instructions but I’m getting a “Missing become password” when provisioning. Do you know if this method and the newest version of Trellis will work? Or is this error something completely different? I believe I’m pretty close to getting my server set up and deployed, so any direction will be greatly appreciated.