Matching a live environment with Vagrant/VirtualBox

I’ve been a Roots user for some time, and I’m currently trying to get my head round all the other bits & pieces in Bedrock and improve my workflow. I’m looking at my hosting and local setups and trying to figure out the ideal configuration.

I have a dedicated, managed server for hosting client websites. It runs CentOS 5, PHP, Apache, MySQL. I can get the exact versions of everything and lists of all the PHP and Apache modules, but when I went to https://puphpet.com/ it didn’t have many of these modules, so I’m concerned I won’t be able to create a reliable dev environment for myself.

Am I misunderstanding something? Do I need to do even more tinkering with PHP/Apache once the VM is up and running?
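To be concrete about what I’m trying to match, here’s roughly how I’ve been pulling those lists off the live server over SSH (commands assumed for a stock CentOS/Apache box):

    php -m > live-php-modules.txt        # loaded PHP extensions
    httpd -M > live-apache-modules.txt   # loaded Apache modules (or apachectl -M)

    # run the same on the VM once it's provisioned, then:
    diff live-php-modules.txt vm-php-modules.txt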

I’m also curious what other people are using: what are you hosting live sites on, and how are you emulating the environment locally for development?

I booted up Varying Vagrant Vagrants to have a look and it seemed a bit easier to get working than the VM I tried to make through puphpet… should I be asking my hosting company for an Nginx/Ubuntu server to match? I’ve heard VVV is based on an ā€˜ideal’ WordPress environment?

ā€œidealā€ is very relative… VVV uses Nginx and some other things (phpmemcache) to make it ā€œfaster.ā€ For the record I uses VVV and it works great for what I need.
I’ve head the setup is similar to what WordPress managed hosts such as WP Engine might have going on, one of the reasons why I use VVV, because I host on WPE.

I was going to recommend trying puphpet with CentOS and Apache. You should be able to get your VM up and then add the packages like you would on any normal server box. That might be your best bet.

Getting your local environment working EXACTLY the same as your host is going to take some patience and trial and error…

Good luck!

The ideal setup is to use a configuration tool like Puppet to build all your servers from the same configuration so they all match. It’s hard to start with an already built production server and then try to work backwards to replicate that into Puppet for example.

The versions of software like PHP, Apache, etc. are probably the latest stable official ones for CentOS. One of the benefits of tools like Puppet is that they let you use newer versions pretty easily. I have a hunch your provider’s ā€œmanagedā€ service wouldn’t allow that anyway; supporting tons of different versions makes their job harder.

Ok, so this sort of confirms what I thought… the main reason I haven’t built a production server myself is that it would no longer be a ā€œmanagedā€ service, which is something we’ve come to rely on running a small business. Then again, I’m now questioning how much we actually use it and whether we’d be better off managing everything ourselves.

Anyone know of any hosts that will manage a server AND let you choose your exact setup? Or am I being too ambitious?

Another question I’m interested in (which may have been answered elsewhere): what will the Vagrant VM that’s supposed to be part of Bedrock be running? Has this been decided yet?

It will likely be running Ubuntu LTS (14.04 is released next month), Nginx, PHP-FPM and a MySQL drop-in replacement (MariaDB or Percona).

We’ll be provisioning the server with Ansible, so if you or your managed host have a different preference, it’ll be very easy to customise to match on Ubuntu.

If you use a different OS then it’s still possible to rewrite the playbook to get an identical setup, but you won’t be able to use PPAs so it will be a more significant rewrite.

If you develop your own configuration management system to provision servers, having a ā€œmanagedā€ service becomes less important.

I think a service like Linode’s Managed is valuable, since it’s more about incident response than initial setup. At https://www.linode.com/managed/ you can see exactly what they provide, and it looks like you can still install whatever you want.

@Foxaii Thanks, wow, you guys love to keep me on my toes with new things to learn! First I’ve heard of Ansible, and I’m not sure I understood exactly what it does from browsing their website… I’m imagining it helps people like me set up and configure web servers? Or am I completely barking up the wrong tree?

Off topic, but what’s the thinking behind moving away from MySQL? Is that purely a performance thing?

@swalkinshaw Linode looks interesting; once again, I’m not sure I totally understand what they’re offering. Are you suggesting using Linode to start a new virtual server for each website you create (and managing it with Ansible)?

I wasn’t specifically suggesting Linode (although they are pretty good in general). It was just an example of how a host’s ā€œmanagedā€ service can vary and theirs just allows you more freedom.

Having 1 server per site is a good ideal model, but it’s often wasteful (in terms of money) for low-traffic sites. So you can still have 1 server with many sites on it, all managed by Ansible.

Ansible (and other server configuration tools like Puppet, Chef, etc.) can be confusing going by their own websites. I find it easier to just look at examples: https://github.com/ansible/ansible-examples/tree/master/wordpress-nginx

Ansible is basically equivalent to Puppet though.
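If you want to try that example end to end, it boils down to something like this (a sketch assuming Ansible is installed and you have a test server to point at; check the repo’s README for the exact inventory/playbook file names):

    git clone https://github.com/ansible/ansible-examples.git
    cd ansible-examples/wordpress-nginx
    # edit the hosts inventory to point at your server, then:
    ansible-playbook -i hosts site.yml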

Cool conversation. I think I’ll be looking to try out Ansible in the coming weeks. I had a dev environment set up with Vagrant and Chef using Berkshelf, but the vagrant-berkshelf plugin has basically been deprecated and I was having compatibility issues with it on my laptop. Killed most of my Sunday afternoon trying to figure that one out.

So yeah… looking forward to trying out Ansible!

Just a heads up that @Foxaii has started work on the Ansible part. Not sure when it will be done but at least we’ve started!

Anyone have a high-level workflow they wish to share?

Working locally + Git + Vagrant, etc.? Curious.

I’ve been working on a few projects with a colleague and the setup seems to be working. We both work off the same Git repository (on Bitbucket), we have Vagrant set up, and we used Bedrock as a base to set up WP/Composer.

The only tricky part is the database. For the moment, we do a DB dump and import with a simple shell script whenever one of us changes the DB or needs to pull in the latest changes. So although the DB is in Git, it doesn’t track very well in version control, since it’s a generated file, and we have to communicate and discuss who will be working on what. Generally he sets up pages/posts in WordPress with some content and sets up the Advanced Custom Fields and custom post types; he’ll commit the DB and any updates, and I’ll pull those down and style them.
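The script itself is nothing special; a minimal sketch of the sort of thing we use (credentials and paths are placeholders):

    #!/bin/bash
    # dump.sh: export the dev database so it can be committed
    mysqldump -u dbuser -pdbpass dbname > db/dump.sql

    # import.sh: load the latest dump after a git pull
    mysql -u dbuser -pdbpass dbname < db/dump.sql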

It’s working well, just something to be mindful of. I would like to potentially move to a file-based setup for pages in WP, but since WP is tied very closely to a MySQL database, that is difficult.

Awesome conversation. I’m interested to see what you guys can come up with as a good Vagrant environment.

As for the DB stuff, I can’t say enough good things about WP Migrate DB Pro. It’s outstanding.

I’ve also heard of people exporting DBs and including them in Git commits so that the other person can easily pull them down and import them. It’s also there as a backup, so that’s cool.

Yeah, good info here guys. I started this thread because I’ve currently been tasked with completely overhauling our company workflow and hosting setup.

Up until now I’ve always relied on tech support from hosting companies to handle maintaining the OS, configuring Apache/PHP, installing the right modules, etc. I imagine quite a lot of people are in a similar situation. In this case we’re sort of at the mercy of someone else’s expertise; I have to place quite a lot of trust in them keeping it secure.

It’s also tricky to emulate locally in a VM whatever server environment they’ve provided. Our hosting guy is pretty helpful, but as mentioned above this feels like working backwards. The alternative is to install a pre-configured stack on both the live server and local VMs, but the issue with that is hosting companies only really offer support on the system they know and are comfortable with, not some system you’ve set up, which they’ll see as lots of hard work and potentially risky.

If I’m understanding right, though, I shouldn’t need to be an expert to set up a virtual server on some cloud hosting using something like Ansible? And I guess in the worst-case scenario it makes it straightforward to start over: install the server from scratch and have your site back up and running.

I’m probably going to keep most of my clients on the server we have, and start migrating sites one at a time to some kind of cloud-hosted virtual server with all the Bedrock goodies on.

My worry at the moment is of getting in over my head and somebody getting hacked due to me just not knowing what I’m doing…

Are my fears justified? I suspect it’s simply fear of the unknown…

P.S. Massively appreciate the info here and everything the Roots/Bedrock devs are doing! This is a great community :smile:

A basic LEMP stack (Linux, Nginx, MySQL, PHP) is mostly easy to set up and configure and is nothing to fear. Usually once you look into some existing solutions in Ansible, Chef, etc. you see how simple they really are.

There are things to be concerned about, like security, but usually that’s not too complicated either in terms of locking down ports, access, SSH, etc. Once again, there are default ā€œcookbooksā€ to help with this.
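To give a flavour of how simple the basics are, on Ubuntu most of the initial lockdown is a handful of commands (a sketch, not a complete hardening guide):

    # firewall: allow only SSH and web traffic
    ufw allow OpenSSH
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw enable

    # then in /etc/ssh/sshd_config set:
    #   PermitRootLogin no
    #   PasswordAuthentication no
    service ssh restart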

The absolute best part of configuration management is just being able to destroy/create servers and know that every time you start one up, you’re getting the exact same consistent setup from scratch.
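With Vagrant locally that’s literally a two-command cycle, and provisioning a cloud server with Ansible is the same idea:

    vagrant destroy -f   # throw the box away
    vagrant up           # rebuild an identical one from the same config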

Another potential benefit is having 1 cloud server PER client. A $5 Digital Ocean box is probably powerful enough for most standard client WP sites (with proper caching). You can still easily up-sell that at a bigger multiple and it has many benefits. You don’t need to worry about taking down ALL your sites if something happens to 1 server.

Personally, I don’t ever want to have to touch server configs, because I try to focus on actually building the sites, not maintaining or hosting them. So in my case I’d pay more for more ā€œmanagedā€ hosting so that I don’t have to worry about that when I go to sleep at night.

Gotta pick your battles, but that’s just my personal opinion.

I plan to use WP Migrate DB Pro for keeping DBs in sync across environments. One question I have, which I hope qualifies as relevant to this thread: if the live site needs to be hosted on a client’s cPanel which uses something other than MariaDB/Nginx/etc., will I have any issues simply migrating the DB over from a staging site set up with the LEMP stack required by Trellis?

Maybe? There’s a chance, but it’s probably low, since WP caters to old MySQL versions. You never know what plugins or humans do after that, though.

If your final live environment is going to be cPanel (I’m in this boat 90% of the time), I tend to just have a Vagrant machine generated with http://puphpet.com that closely matches the specs of my server. (I’m on SiteGround’s cloud accounts quite often these days, and if you pester their staff you can get them to tell you which Apache and PHP modules they’re running. It even lets me change the PHP version on a per-directory basis, which is pretty cool.)

For this I decided against Trellis, as it’s just a bit too different from the live environment for my liking.

A couple of valuable tips I’ve learned in the couple of years since starting this thread:

  1. If you put your generated Vagrant machine from puphpet into a Git repo and agree on a directory structure for your projects folder with your team, you can quickly make sure the rest of the team has the same Vagrant config, with the same vhosts and databases available.

  2. You can set your Vagrant sync directory to a directory ā€˜above’ the Vagrantfile, e.g. something like:

     synced_folder:
         vflsf_qsxnmdmnxct2:
             source: ../
             target: /var/www
             sync_type: nfs
    

This way I just tell my team to keep a /Projects directory, with sites in paths like /Projects/examplesite.com/ and the Vagrant machine in a path like /Projects/vagrant-001.

This lets me have a /Projects folder with all my sites and various vagrant machines.
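Getting a new teammate going is then basically this (repo names are hypothetical):

    mkdir -p ~/Projects && cd ~/Projects
    git clone git@bitbucket.org:team/vagrant-001.git       # the shared puphpet config
    git clone git@bitbucket.org:team/examplesite.com.git   # a site repo
    cd vagrant-001 && vagrant up   # mounts ../ (all of /Projects) at /var/www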

To answer your question more specifically though, @louisnovick: for database syncing when working with cPanel, I’ve found the simplest thing is to set up a MySQL database through the cPanel GUI and whitelist your IP in the ā€˜Remote MySQL’ section of cPanel.
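A quick sanity check that the whitelist took, run from inside the Vagrant box (placeholder credentials):

    mysql -h <IP of cPanel server> -u cpanel_staging_dbuser -p cpanel_staging_dbname -e 'SHOW TABLES;'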

Then in either wp-config.php or .env (if you’re using Bedrock), I have something like this going on in my local dev version:

    # Dev:
    #DB_NAME=vagrant_dbname
    #DB_USER=vagrant_dbuser
    #DB_PASSWORD=vagrant_dbpass
    #DB_HOST=localhost

    # Staging:
    DB_NAME=cpanel_staging_dbname
    DB_USER=cpanel_staging_dbuser
    DB_PASSWORD=cpanel_staging_dbpass
    DB_HOST=<IP of cPanel SERVER>

    WP_ENV=development
    WP_HOME=http://example.dev
    WP_SITEURL=${WP_HOME}/wp

^^ Provided you’re developing with reliable internet, the fact that Bedrock overrides the WP_HOME and WP_SITEURL settings with your local domain name (ā€˜example.dev’) means you’ll get a dev copy that pulls its database from your staging site and runs off your local files.

If you’re not using Bedrock you’ll need something like the following in your wp-config.php:

    define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/wordpress');
    define('WP_HOME',    'http://' . $_SERVER['HTTP_HOST']);
    define('WP_CONTENT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/wp-content');
    define('WP_CONTENT_URL', 'http://' . $_SERVER['HTTP_HOST'] . '/wp-content');

I like this because it means I can let non-developers on my team start adding content via the WordPress admin on the staging domain while the site is still in development, effectively using the staging database (and uploads folder) as the ā€˜one true’ database.

I effectively do no database syncing; all I have to do is periodically download the contents of the /wp-content/uploads directory.
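Even that is a one-liner with rsync (paths assumed; Bedrock keeps uploads under web/app/uploads):

    rsync -avz user@staging.example.com:public_html/wp-content/uploads/ web/app/uploads/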

In most cases I don’t even need to do that since post attachments will use the staging URL.

I’ll happily elaborate on doing it this way if this explanation didn’t make sense; I’m kind of interested whether anyone else does it this way.

Note: when I finally ā€˜go live’ and my client/team have finished adding content and uploads, I download the whole uploads directory, optimise all the images, and then re-upload them on the live site. I add the live database details to my .env or wp-config and do a similar setup (so the live DB becomes the ā€˜one true’ database).

As a final step I’ll run https://github.com/interconnectit/Search-Replace-DB on the database (from the command line in Vagrant, since similar commands stay in my bash history), or just through its GUI. This makes sure ALL URLs in the final database (WP_HOME, WP_SITEURL and all the post attachment links) point to the live domain (e.g. http://example.com rather than ā€˜example.dev’, ā€˜staging.example.com’ or anything else).
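From the command line that looks something like this (flags per the script’s own help output, so double-check against the version you download; credentials are placeholders):

    php srdb.cli.php \
        -h localhost \
        -n live_dbname \
        -u live_dbuser \
        -p 'live_dbpass' \
        -s 'http://staging.example.com' \
        -r 'http://example.com'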

I still think Trellis is better, but I reserve it for when I have a client who’s willing to let me put them on a Digital Ocean droplet. I’ve found Trellis and cPanel just don’t play too nicely together.

(Even Bedrock requires some symlinking of your public_html folder to make it work with cPanel: cPanel won’t let you change the default webroot, so I point /public_html to /examplesite.com/web and install Bedrock to /examplesite.com/. There were threads on here about this, I’m sure, but I can’t seem to find them in search right now.)
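The symlinking itself is just this, run from the cPanel account’s home directory:

    mv public_html public_html.bak          # keep the original out of the way
    ln -s examplesite.com/web public_html   # point the webroot at Bedrock's web/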

A convoluted answer, but I thought I’d share, as this is working well for me now.

Edit: One more ā€˜quirk’ I hit: if you’re also using Sage with Git and running Gulp locally, then when you deploy from Git to your staging and live environments, your /themes/sage/dist/ folder will be missing. I got round this by giving my ā€˜staging’ and ā€˜production’ branches a separate .gitignore in which the /dist folder isn’t ignored. My Git history ends up with a lot of: switch to the ā€˜staging’ branch, merge in the latest changes from ā€˜dev’, run ā€˜gulp’ or ā€˜gulp --production’, commit the changes to the /dist folder, then git push and pull, /dist directory included. This way I rarely have to touch FTP.
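In command form, each deploy looks roughly like:

    git checkout staging
    git merge dev
    gulp --production          # plain `gulp` for the staging branch
    git add themes/sage/dist   # path as referenced above
    git commit -m "Build dist for deploy"
    git push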