Staging and Production on same VPS

Hello!

Working on a new project and thought I’d implement a staging environment; in the past I’ve only done local and production.

Ideally I would like to set up both staging and production on the same VPS on DigitalOcean. Could anyone clue me in on the best way to achieve that?

Do I set up two Vagrant instances on DO?

Are you doing this via bedrock-ansible?

Have not begun setting it up yet, was hoping for some input before I get started.

Bedrock-ansible wasn’t available yet when I set up Bedrock the last time, but it seems to be the way to go. Will it help me set up multiple environments on DO?

Otherwise I was eyeing this tutorial:

But it would be nice to use a playbook to automate the setup.

Can’t wrap my head around the purpose of bedrock-ansible; maybe it’ll get clearer once I fiddle with it.

But from what I can tell from reading in the repo it sets up three different environments, within the same Vagrant VM?

So if I were to run the Vagrantfile on my DO droplet I would get three stages, and if I run it locally I’ll get three stages there as well. Is this correct? In that case I don’t see why I would want to have staging/prod locally?

Edit: Or do I simply use the playbook to set up staging/prod on DO?

Your edit is correct.

You generally want to use Vagrant for dev only. The Vagrantfile just runs the playbook anyway, so you can manually run the Ansible playbook against your staging/prod servers.

bedrock-ansible lets you define multiple WP sites on a server. Although I just realized it’s a little weird with the environment naming right now. For example, by default there’s group_vars/staging and group_vars/production. You’d probably want to just pick production and define 2 sites in there and vary the env.wp_env setting. You’d also need to define the server ip under hosts/production.

You can run the playbook as normal with Ansible:

    ansible-playbook -i hosts/production site.yml

Sounds about right! But I think an example would go a long way for me in this case. I’m a little puzzled about how this is supposed to work.

Like this with group_vars/production:

    mysql_root_password: somegeneratedpassword
    wordpress_sites:
      - site_name: staging.example.com
        site_hosts:
          - staging.example.com
          - 192.168.50.5
        user: deploy
        group: www-data
        site_install: true
        site_title: Example Staging Site
        admin_user: admin
        admin_password: admin
        admin_email: admin@staging.example.com
        system_cron: true
        multisite:
          enabled: false
        env:
          wp_home: http://staging.example.com
          wp_siteurl: http://staging.example.com/wp
          wp_env: staging
          db_name: example_staging
          db_user: example_dbuser
          db_password: example_dbpassword
    
      - site_name: example.com
        site_hosts:
          - example.com
          - 192.168.50.5
        user: deploy
        group: www-data
        site_install: true
        site_title: Example Production Site
        admin_user: admin
        admin_password: admin
        admin_email: admin@example.com
        system_cron: true
        multisite:
          enabled: false
        env:
          wp_home: http://example.com
          wp_siteurl: http://example.com/wp
          wp_env: production
          db_name: example_prod
          db_user: example_dbuser
          db_password: example_dbpassword

But what about hosts? Stage and production have the same IP so how should hosts/production look?

Oh and by the way, I should simply just drop group_vars/development and group_vars/staging, yeah?

Your example group_vars/production looks correct.

hosts/production would just contain a single host/IP for the staging/production server since they’re the same. If you aren’t using other environments/stages, then yeah you can just remove them.
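For example, since both sites live on the one droplet, the inventory just lists that single machine. Something like the following (the IP is a placeholder, and the group names are an assumption based on bedrock-ansible’s defaults):

```ini
# hosts/production -- one server hosts both the staging and production sites
[production]
192.0.2.10

[web]
192.0.2.10
```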

I tried to set up staging and production on the same server, but I got this error as it was going through the playbook.

TASK: [fail2ban | ensure fail2ban is configured] ******************************
fatal: [104.236.209.224] => Failed to template '*d5?0m+q&4&+s ?-4PT4y-a&YqJ|G%/jshMcS6DFbB:fN#gq7[ID){{!l>c0KR'): template error while templating string: unexpected char u'!' at 58

FATAL: all hosts have already failed -- aborting

Is it not possible to set up staging and production on the same server?

Doubt that has anything to do with it.

Someone else posted about this error before: Trying to deploy - FATAL: all hosts have already failed -- aborting

I’ve never seen it and not sure what would cause it to be honest.

As @swalkinshaw mentioned, I doubt the template error is related to having staging and production on the same server, and this Discourse has a few random unresolved Jinja templating threads. Closest thing to a resolution I found was here: “Rebasing the latest [trellis] and doing a provisioning solves the problem of fail2ban.”

I’d make sure your Ansible is on the latest (1.9.2 as of this writing) to handle potential Jinja templating issues, git pull/rebase the latest trellis, then spin up a brand new server.

I haven’t tried having staging and production on the same server, but I see no reason it wouldn’t work using the setup @swalkinshaw mentioned above.

But for an extra $5/mo at DO, I enjoy having a separate server for staging so I can blow it up, reprovision, and go nuts with it all the time, while leaving my production server untouched. It’s totally worth it to me. I’d recommend separate servers if you’re doing that kind of work with staging.

Of course, there are plenty of use cases in which you wouldn’t need staging on a separate server. In such cases, on the occasion you need to be able to test server setup without disturbing your production site, you could just spin up a temporary disposable testing staging server on the side.

I am new to Git workflows. If I start fresh by git cloning trellis, what should I do prior to editing files (like group_vars/…) so that I’ll still be able to pull or rebase the changes that the trellis team makes in the future?

Do I delete the .git folder and git init? Fork? I’m new and trying to learn the workflows.

FYI, I’d still like to push my changes to a private Bitbucket account so that I have a private repository for building a team in the future, while still being able to pull in the changes from the trellis team.

Awesome that you’re setting up a good workflow! The good news is that these concepts are global – not roots/trellis specific – so you’ll find a ton of info by googling around.

The basic idea is…

  1. clone trellis repo to your dev machine (I’d leave the .git folder there)
  2. add bitbucket as a “remote” (e.g., named “origin”)
  3. add the official roots/trellis as a “remote” (e.g., named “upstream”)

Now you can push/pull changes to the “origin” remote (your private bitbucket repo). You can pull updates from the “upstream” remote (the roots official repo).

As changes are made to the official “upstream” repo, I’d just “merge” them so your git history shows a timeline with your changes intermingled with the official changes over time (i.e., providing temporal context). Alternatively, you could “rebase” your changes on top of “upstream” master, making all your customizations appear lumped together at the end of the git history.

If I planned on making customizations over the long term, I’d use the merge strategy, so that the git history would show my changes in temporal context. If I were doing just one time-delimited round of customizations, then I’d rebase them on top of “upstream” master. Others may do differently.
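To make the merge workflow concrete, here’s a throwaway sketch you can run anywhere: it fakes an “upstream” repo (standing in for roots/trellis) and your fork, makes a local customization, then merges in new upstream work. All names and paths are hypothetical.

```shell
#!/bin/sh
# Demo of the merge-based upstream workflow using two disposable local repos.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A stand-in for the official upstream repo, with one commit
git init -q upstream
(cd upstream \
  && git config user.email demo@example.com \
  && git config user.name demo \
  && echo base > file.txt \
  && git add file.txt \
  && git commit -qm "upstream: base")

# Your copy, carrying a local customization
git clone -q "$tmp/upstream" mine
cd mine
git config user.email demo@example.com
git config user.name demo
echo custom > custom.txt
git add custom.txt
git commit -qm "mine: customization"

# Meanwhile, upstream moves ahead
(cd ../upstream && echo update >> file.txt && git commit -qam "upstream: update")

# Merge upstream's new work into your branch; running `git rebase` here
# instead would replay your customization on top of upstream's history.
git pull -q --no-rebase --no-edit origin
git log --oneline
```

After the merge, the log shows your customization interleaved with upstream’s commits, plus a merge commit tying them together.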

This approach starts at Bitbucket, automatically adding the Bitbucket “origin” remote.

Create a private fork. Log in to Bitbucket, then go to https://bitbucket.org/repo/import

Create a local copy. Edit with your Bitbucket username.

  • git clone git@bitbucket.org:username/my-trellis.git
  • cd my-trellis
  • git remote add upstream https://github.com/roots/trellis.git
  • git remote -v

Then start editing and customizing.

Dang @fullyint, you just helped me to see the light. That is totally awesome. Thanks!

…and on a related note, do you trust Bitbucket with your Ansible configs?

If so, are you using encryption?

I’ve set up my own GitLab instance on DO for this stuff.

@treb0r With my limited experience on the matter of handling app credentials in or out of repos, I could only recommend doing more research, starting with @swalkinshaw’s post here, which discusses common issues and mentions the trellis Passwords wiki. I’m guessing setting up Ansible Vault is the way to go. I just haven’t looked into it enough yet.
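From what I’ve skimmed of the Ansible docs, the usual Vault pattern seems to be keeping secrets in a separate encrypted file that the plain vars reference. Something like this (file and variable names are just an illustration, not bedrock-ansible’s actual layout):

```yaml
# group_vars/production/vault.yml -- hypothetical layout; encrypt it with:
#   ansible-vault encrypt group_vars/production/vault.yml
vault_mysql_root_password: somegeneratedpassword

# group_vars/production/main.yml -- stays unencrypted, references the secret:
# mysql_root_password: "{{ vault_mysql_root_password }}"
```

Then you’d run plays with --ask-vault-pass (or --vault-password-file) so Ansible can decrypt on the fly.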

I’ve lagged on setting up a mature approach to handling credentials. Right now I essentially gitignore the files that have credentials so they aren’t committed to the repo. It works ok because I have other backup mechanisms (so I don’t need credentials “backed up” in an online repo) and because I’m not in a position of having to share/communicate the credentials. I could maybe relax a bit and just commit all files unencrypted to my private repos because I’m the only one accessing the repos and my servers don’t deal with high-liability data.

I’d love to hear what you come up with.

Yes, I agree that Ansible Vault is probably the way to go, but what with learning about Ansible and all it just seems like one step too many, at the moment anyway.

For now I have the site repo for each project on bitbucket where I can take advantage of the unlimited free private repos while giving the team write access to all of the project site repos.

As mentioned above, I also have my own GitLab instance running on a DO droplet. This is just for storing the Ansible repo for each project and is more private (as I am the only one with root access). This gives me the flexibility to grant read-only access to the team, if necessary, although I prefer to ask them to just clone the Trellis repo and configure development for each project. I usually handle the deploys myself as I like to check everything locally first :wink:.

Assuming the droplet doesn’t get hacked, it’s both private and secure.

Does this sound reasonably sane?

I would wink “yes” if you’re not dealing with high-liability data, but either way my official cya statement would be “no.”

If you are committing credentials unencrypted, it does seem safer to do so on your own server so that your credentials aren’t vulnerable like they would be when a big-target 3rd party git host is hacked or has a malicious employee.

But, as you noted, if your server gets hacked, or you have an implementation glitch, it really would be better to have your credentials encrypted within the repo.

I’m guessing the hesitation I feel to implement Ansible Vault could be simply because it is less familiar to me and is less discussed in roots circles. I doubt it would be any more taxing to implement, or any less important, than other more popular components of the roots setup. We should probably have used our time setting it up instead of typing a bunch in deliberation.

Please make sure you back up that DO instance regularly :smile:

Yes, okay I concur.

I’m still going to use Gitlab for the flexibility, but I’m going to encrypt also.

…and backup regularly :date: