Matching live environment with Vagrant/Virtualbox

Iā€™ve been a Roots user for some time and am currently trying to get my head round all the other bits & pieces in Bedrock and improve my workflow. Iā€™m looking at my hosting and local setups and trying to figure out the ideal configuration.

I have a dedicated, managed server for hosting client websites. It runs CentOS 5, PHP, Apache, MySQL. I can get the exact versions for everything and lists of all the PHP and Apache modules, but when I went to https://puphpet.com/ it didnā€™t have many of these modules, so Iā€™m concerned Iā€™m not going to be creating a reliable dev environment for myself.

Am I misunderstanding something, do I need to do even more tinkering to PHP/Apache once the VM is up and running?

Iā€™m also curious what other people are using, what are you hosting live sites on and how are you emulating the environment locally for development.

I booted up Varying Vagrant Vagrants to have a look and it seemed to be a bit easier to get working than the VM I tried to make through puphpetā€¦ should I be asking my hosting company for an nginx/ubuntu server to match? Iā€™ve heard VVV is based on an ā€˜idealā€™ WordPress environment?


ā€œidealā€ is very relativeā€¦ VVV uses Nginx and some other things (phpmemcache) to make it ā€œfaster.ā€ For the record I uses VVV and it works great for what I need.
Iā€™ve head the setup is similar to what WordPress managed hosts such as WP Engine might have going on, one of the reasons why I use VVV, because I host on WPE.

I was going to recommend trying Puphpet with CentOS and Apache - you should be able to get your VM up and then add the packages like you would on any normal server box. That might be your best bet.

To get your local environment working EXACTLY the same as your host is going to take some patience and trial and errorā€¦

Good luck!

The ideal setup is to use a configuration tool like Puppet to build all your servers from the same configuration so they all match. Itā€™s hard to start with an already built production server and then try to work backwards to replicate that into Puppet for example.

The versions of software like PHP, Apache, etc. are probably the latest stable official ones for CentOS. One of the benefits of tools like Puppet is that they let you use newer versions pretty easily. I have a hunch your providerā€™s ā€œmanagedā€ service wouldnā€™t allow that anyway; supporting tons of different versions makes their job harder.

Ok, so this sort of confirms what I thoughtā€¦ the main reason I havenā€™t built a production server myself is because it would mean it was no longer a ā€œmanagedā€ service which is something we come to rely on when weā€™re running a small business. Then again, I am now questioning how much we actually use it and could be better off managing everything ourselves.

Anyone know of any hosts that will manage a server AND let you choose your exact setup? Or am I being too ambitious?

Another question Iā€™m interested in (which may have been answered elsewhere): what will the Vagrant VM thatā€™s supposed to be part of Bedrock be running? Has this been decided yet?

It will likely be running Ubuntu LTS (14.04 is released next month), Nginx, PHP-FPM and a MySQL drop-in replacement (MariaDB or Percona).

Weā€™ll be provisioning the server with Ansible, so if you or your managed host have a different preference, itā€™ll be very easy to customise to match on Ubuntu.

If you use a different OS then itā€™s still possible to rewrite the playbook to get an identical setup, but you wonā€™t be able to use PPAs so it will be a more significant rewrite.
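To give a flavour of what Ansible provisioning looks like, hereā€™s a minimal, hypothetical playbook fragment for an Ubuntu web box. The host group, file name, and task list are illustrative only, not the actual Bedrock playbook:

```yaml
# site.yml — hypothetical sketch: install and start a web stack on Ubuntu
- hosts: web
  become: yes
  tasks:
    - name: Install Nginx and PHP-FPM from apt
      apt:
        name: [nginx, php-fpm]
        state: present
        update_cache: yes

    - name: Ensure Nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Running `ansible-playbook site.yml` against a fresh VM and against a production box gives you the same result on both, which is the whole point.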

If you develop your own configuration management system to provision servers, having a ā€œmanagedā€ service becomes less important.

I think a service like Linodeā€™s managed offering is valuable since itā€™s more incident response than initial setup. At https://www.linode.com/managed/ you can see exactly what they provide, and it looks like you can still install whatever you want.

@Foxaii Thanks, wow, you guys love to keep me on my toes with new things to learn! This is the first Iā€™ve heard of Ansible and Iā€™m not sure I understood exactly what it does from browsing their websiteā€¦ Iā€™m imagining it helps people like me set up and configure web servers? Or am I completely barking up the wrong tree?

Off topic, but whatā€™s the thinking behind moving away from MySQL? Is that purely a performance thing?

@swalkinshaw Linode looks interesting - once again, not sure I totally understand what it is theyā€™re offering. Are you suggesting using Linode to start a new virtual server for each website you create (and managing it with Ansible)?

I wasnā€™t specifically suggesting Linode (although they are pretty good in general). It was just an example of how a hostā€™s ā€œmanagedā€ service can vary and theirs just allows you more freedom.

Having 1 server per site is the ideal model, but itā€™s often wasteful (in terms of money) for low-traffic sites. So you can still have 1 server with many sites on it, all managed by Ansible.

Ansible (and other server configuration tools like Puppet, Chef, etc) are all confusing on their own websites. I find it easier to just look at examples: https://github.com/ansible/ansible-examples/tree/master/wordpress-nginx

Ansible is basically equivalent to Puppet though.

Cool conversation. I think Iā€™ll be looking to try out Ansible in the coming weeks. I had a dev environment set up with Vagrant and Chef using Berkshelf, but the Vagrant-berkshelf plugin has basically been deprecated and I was having compatibility issues on my laptop with it. Killed most of my Sunday afternoon trying to figure that one out.

So yeahā€¦ looking forward to trying out Ansible!

Just a heads up that @Foxaii has started work on the Ansible part. Not sure when it will be done but at least weā€™ve started!


Anyone have a high level workflow they wish to share?

Working local + git + vagrant etc? curious

Iā€™ve been working on a few projects with a colleague and it seems to be working. We both work off the same Git repository (on Bitbucket), we have Vagrant set up and used Bedrock as a base to set up WP/Composer.

The only tricky part is the database. For the moment we do a DB dump and import with a simple shell script whenever one of us changes the database. So although the dump is in Git, it doesnā€™t track very well in version control, since itā€™s a generated file, and we have to communicate about who is working on what. Generally he sets up pages/posts in WordPress with some content and sets up the Advanced Custom Fields and custom post types; heā€™ll commit the DB and any updates, and Iā€™ll pull those down and style them.
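The dump/import script mentioned above might look something like this minimal sketch. The database name, credentials, and dump path are hypothetical, and `mysqldump`/`mysql` need to be on the PATH for a real run:

```shell
#!/bin/sh
# db-sync.sh — hypothetical sketch of a team DB dump/import helper.
# DB_NAME, DB_USER, DB_PASS, and DUMP_FILE are example values.
DB_NAME="wordpress"
DB_USER="dev"
DB_PASS="secret"
DUMP_FILE="database/dump.sql"

db_sync() {
  case "${1:-}" in
    dump)
      # Dump the DB to a file that gets committed alongside the code
      mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" > "$DUMP_FILE"
      echo "dumped $DB_NAME to $DUMP_FILE" ;;
    import)
      # A teammate pulls the repo, then imports the committed dump
      mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$DUMP_FILE"
      echo "imported $DUMP_FILE into $DB_NAME" ;;
    *)
      echo "usage: db-sync.sh dump|import" ;;
  esac
}

db_sync "$@"
```

One person runs `db-sync.sh dump` and commits; the other pulls and runs `db-sync.sh import` — which is exactly why you have to coordinate who touches the DB when.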

Itā€™s working well, just something to be mindful of. I would like to potentially move to a file-based setup for pages in WP, but since WP is tied very closely to a MySQL database, that is difficult.

Awesome conversation. Iā€™m interested to see what you guys can come up with as a good Vagrant environment.

As for the DB stuff, I canā€™t say enough good things about WP Migrate DB Pro. Itā€™s outstanding.

I have also heard of people exporting DBs and then including them in git commits so that the other person can easily pull it down and import it. Itā€™s also there as a backup, so thatā€™s cool.

Yeah, good info here guys. I started this thread because Iā€™ve currently been tasked with completely overhauling our company workflow and hosting setup.

Up until now Iā€™ve always relied on tech support from hosting companies to handle maintaining the OS, configuring Apache/PHP, installing the right modules, etc. I imagine quite a lot of people are in a similar situation - in this case weā€™re sort of at the mercy of someone elseā€™s expertise, and I have to place quite a lot of trust in them keeping it secure.

Itā€™s also tricky to locally emulate in a VM whatever server environment theyā€™ve provided. Our hosting guy is pretty helpful, but as mentioned above this feels like working backwards. The alternative is to install a pre-configured stack on both the live server and local VMs, but the issue with this is that hosting companies only really offer support on the system they know and are comfortable with, not some system youā€™ve set up, which theyā€™ll see as a lot of hard work and potentially risky.

If Iā€™m understanding right though, I shouldnā€™t need to be an expert to handle setting up a virtual server on some cloud hosting using something like Ansible? And I guess in the worst-case scenario it makes it straightforward to start over: install the server from scratch and have your site back up and running.

Iā€™m probably going to keep most of my clients on the server we have, and start migrating sites one at a time to some kind of cloud hosted virtual server with all the bedrock goodies on.

My worry at the moment is of getting in over my head and somebody getting hacked due to me just not knowing what Iā€™m doingā€¦

Are my fears justified? I suspect itā€™s simply fear of the unknownā€¦

p.s. massively appreciate the info here and everything the roots/bedrock devs are doing! this is a great community :smile:

A basic LEMP stack (Linux, Nginx, MySQL, PHP) is mostly easy to set up and configure and is nothing to fear. Usually once you look into some existing solutions in Ansible, Chef, etc. you see how simple they really are.

There are things to be concerned about like security, but usually itā€™s not too complicated either in terms of locking down ports, access, SSH etc. Once again thereā€™s default ā€œcookbooksā€ to help with this.
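For example, the SSH part of that lockdown often amounts to a few sshd_config directives like these (a sketch only; the right settings depend on your host and how you access the box):

```
# /etc/ssh/sshd_config — common hardening directives
PermitRootLogin no          # log in as a normal user, sudo when needed
PasswordAuthentication no   # SSH keys only, no password guessing
```

A firewall rule set that only opens 22, 80, and 443 covers the port side, and the default cookbooks/roles mentioned above typically template exactly this kind of file for you.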

The absolute best part of configuration management is just being able to destroy/create servers and know that every time you start one up, youā€™re getting the exact same consistent setup from scratch.

Another potential benefit is having 1 cloud server PER client. A $5 Digital Ocean box is probably powerful enough for most standard client WP sites (with proper caching). You can still easily up-sell that at a bigger multiple and it has many benefits. You donā€™t need to worry about taking down ALL your sites if something happens to 1 server.

Personally, I donā€™t want to ever have to touch server configs, because I try to focus on actually building the sites, not maintaining or hosting them. So in my case Iā€™d pay more for more ā€œmanagedā€ hosting so that I donā€™t have to worry about that when I go to sleep at night.

Gotta pick your battles, but thatā€™s just my personal opinion.

I plan to use WP Migrate DB Pro for keeping DBs in sync across environments. One question I have, which I hope qualifies as relevant to this thread: if the live site needs to be hosted on a clientā€™s cPanel which uses something other than MariaDB/Nginx/etc., will I have any issues simply migrating the DB over from a staging site set up with the LEMP stack required by Trellis?

Maybe? Thereā€™s a chance but itā€™s probably low since WP caters to old MySQL versions. You never know what plugins or humans do after that though.

If your final live environment is going to be cPanel (Iā€™m in this boat 90% of the time), I tend to just have a Vagrant machine Iā€™ve generated using http://puphpet.com which closely matches the specs of my server. (Iā€™m on SiteGroundā€™s cloud accounts quite often these days, and if you pester their staff you can get them to tell you which Apache and PHP modules theyā€™re runningā€¦ it even lets me change PHP version on a per-directory basis, which is pretty cool.)

For this I decided against Trellis as itā€™s just a bit too different from the live environment for my liking.

A couple of valuable tips Iā€™ve learned in the couple of years since making this thread:

  1. If you put your generated vagrant machine from puphpet itself into a git repo and agree a directory structure for your projects folder with your team, you can quickly make sure the rest of the team has the same Vagrant config, with the same vhosts and databases available.

  2. You can set your vagrant sync directory as a directory ā€˜aboveā€™ the vagrant file e.g. something like:

     synced_folder:
         vflsf_qsxnmdmnxct2:
             source: ../
             target: /var/www
             sync_type: nfs
    

This way I just tell my team to keep a /Projects directory, with sites in paths like: /Projects/examplesite.com/ and the vagrant machine in a path like /Projects/vagrant-001.

This lets me have a /Projects folder with all my sites and various vagrant machines.

To answer your question more specifically though @louisnovick, for database syncing when working with cPanel, Iā€™ve found the simplest thing is to set up a MySQL database through the cPanel GUI and whitelist your IP in the ā€˜Remote MySQLā€™ section of cPanel.

Then, in either wp-config.php or .env (if youā€™re using Bedrock), I have something like this going on in my local dev version:

    # Dev:
    #DB_NAME=vagrant_dbname
    #DB_USER=vagrant_dbuser
    #DB_PASSWORD=vagrant_dbpass
    #DB_HOST=localhost

    # Staging:
    DB_NAME=cpanel_staging_dbname
    DB_USER=cpanel_staging_dbuser
    DB_PASSWORD=cpanel_staging_dbpass
    DB_HOST=<IP of cPanel SERVER>

    WP_ENV=development
    WP_HOME=http://example.dev
    WP_SITEURL=${WP_HOME}/wp

^^ Provided youā€™re developing with reliable internet, the fact that Bedrock overrides the WP_HOME and WP_SITEURL settings with your local domain name (ā€˜example.devā€™) means youā€™ll get a dev copy that pulls its database from your staging site while running off your local files.

If youā€™re not using Bedrock youā€™ll need something like the following in your wp-config.php:

    // Derive URLs from the current host, so the same config works in any environment
    define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/wordpress');
    define('WP_HOME',    'http://' . $_SERVER['HTTP_HOST']);

    // Keep wp-content alongside the document root
    define('WP_CONTENT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/wp-content');
    define('WP_CONTENT_URL', 'http://' . $_SERVER['HTTP_HOST'] . '/wp-content');

I like this because it means I can let non-developers in my team start adding content via the WordPress admin on the staging domain whilst the site is still in development, effectively using the staging database (and uploads folder) as the ā€˜one trueā€™ database.

I effectively do no database syncing, all I have to do is periodically download the contents of the /wp-content/uploads directory.

In most cases I donā€™t even need to do that since post attachments will use the staging URL.

Iā€™ll happily elaborate on doing it this way if this explanation didnā€™t make sense, Iā€™m kind of interested if anyone else does it this way.

Note: when I finally ā€˜go liveā€™ and my client/team have finished adding content and uploads, I download the whole uploads directory, optimise all the images, and then re-upload them to the live site. I add the live database details to my .env or wp-config and do a similar setup (so the live DB becomes the ā€˜one trueā€™ database).

As a final step Iā€™ll run https://github.com/interconnectit/Search-Replace-DB on the database (from the command line in Vagrant, since similar commands stay in my bash history), or just through its GUI. This is to make sure ALL URLs in the final database (WP_HOME, WP_SITEURL and all the post attachment links) point to the live domain (e.g. http://example.com rather than ā€˜example.devā€™ or ā€˜staging.example.comā€™ or anything else).
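If you have WP-CLI on the box, `wp search-replace` does the same job (and handles serialized data). Hereā€™s a sketch with a DRY_RUN guard so the commands are just printed unless you flip the flag; the URLs are examples and WP-CLI is assumed to be installed for a real run:

```shell
# Sketch: swap the dev URL for the live domain with WP-CLI.
# DRY_RUN=1 (the default here) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

# Preview what would change, then do it for real
run wp search-replace 'http://example.dev' 'http://example.com' --dry-run
run wp search-replace 'http://example.dev' 'http://example.com'
```

Set `DRY_RUN=0` and run it from the WordPress root on the live server once youā€™re happy with the dry-run output.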

I still think Trellis is better, but I reserve that for when I have a client whoā€™s willing to let me put them on a digital ocean droplet. Iā€™ve found Trellis and cPanel to just not play too nice together.

(Even Bedrock requires some symlinking of your public_html folder to make it work with cPanel - cPanel wonā€™t let you change the default webroot, so I point /public_html to /examplesite.com/web and install Bedrock to /examplesite.com/, there were threads on here about this Iā€™m sure but I canā€™t seem to find them in search right now).
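That symlink step looks something like the following sketch (the paths are examples, and a temp directory stands in for the cPanel home directory; in a real account youā€™d rename or remove the existing public_html first):

```shell
# Sketch of the cPanel webroot workaround: point public_html at Bedrock's web/ dir.
HOME_DIR=$(mktemp -d)    # stand-in for the cPanel home directory
mkdir -p "$HOME_DIR/examplesite.com/web"

# cPanel won't let you change the default webroot, so symlink it instead
ln -s "$HOME_DIR/examplesite.com/web" "$HOME_DIR/public_html"
```

After this, Apache serves whatever Bedrock puts in examplesite.com/web while cPanel still sees the public_html path it expects.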

Convoluted answer but, thought Iā€™d share as this is working well for me now.

Edit: One more ā€˜quirkā€™ I ran into: if youā€™re also using Sage with Git and running Gulp locally, when you deploy from Git to your staging and live environments your /themes/sage/dist/ folder will be missing. I got round this by giving my ā€˜stagingā€™ and ā€˜productionā€™ branches a separate .gitignore where the /dist folder isnā€™t ignored. My Git history ends up with a lot of: switch to the ā€˜stagingā€™ branch, merge in the latest changes from the ā€˜devā€™ branch, run ā€˜gulpā€™ or ā€˜gulp --productionā€™, commit the changes to the /dist folder, then git push and pull with the /dist directory included. This way I rarely have to touch FTP.
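The per-branch .gitignore trick can be sketched in a throwaway repo like this (branch names match the ones above; file paths and commit messages are examples):

```shell
# Demonstrate a dev branch that ignores dist/ and a staging branch that tracks it.
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo 'dist/' > .gitignore                  # dev: built assets stay out of Git
mkdir -p dist && echo 'body{}' > dist/main.css
git add .gitignore && git commit -qm 'dev: ignore dist'

git checkout -qb staging                   # staging: ship built assets in Git
: > .gitignore                             # empty the ignore file on this branch
git add .gitignore dist/main.css
git commit -qm 'staging: commit compiled dist'
```

On staging, `git ls-files` now lists dist/main.css, so a plain git pull on the server brings the compiled assets along; switching back to the dev branch drops them from tracking again.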