I have launched around 7 production sites using Roots in the last year or so. My workflow has always been smooth, and now I am ready to update it and advance a bit with the launch of Sage.
My old flow was theme-only. Third-party plugins were installed via the normal WordPress plugin installer. The database was kept in sync using WP Migrate DB Pro until the keys were handed to the client for testing and content development; from that point, any database configuration changes had to be duplicated on my local environment and then on staging. Basically, the point is that I kept the theme under version control but nothing else.
So I would develop locally with git, push to GitHub, and deploy with DeployHQ. This worked really well because it didn't matter what hosting the production server was on. I did not have to install npm or Bower on the server, and I could push to a totally different staging server via DeployHQ.
So the first thing I have noticed in Sage is gulp --production. This removes and rebuilds the files in the dist/ folder, appends a cache-busting version hash to each filename, and records the mapping in assets.json. So the first truth I see is that you don't run gulp --production locally. However, my original workflow required me to keep the production assets under version control.
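For context, assets.json is just a manifest mapping original filenames to their revved equivalents; something like this (the hashes here are made up):

```json
{
  "main.css": "main-a1b2c3d4.css",
  "main.js": "main-e5f6a7b8.js"
}
```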
So in order to keep my old workflow, after removing /dist from .gitignore, I would now have to:
Develop
Run $ gulp --production
Push
Run $ gulp to get back to the development version of the assets
Continue
Obviously this sucks. So just wondering if there is a better way. One thought is to have gulp build to a different folder (like dist-dev) and tweak assets.php accordingly. That way I can run gulp --production and push and still have working files for the development environment.
Well, this isn't necessarily true. The --production flag tries to deviate as little as possible from the dev build; in fact, it uses all the same tasks. The only things it does differently are:
Enable filename revving
Disable sourcemaps
Fail the build on a Less/Sass syntax error rather than ignoring it
Notice how none of those change the content of your source files. They are "meta" tasks that enable and disable tradeoffs useful in development. When I have a project that is nearing completion, I will actually switch my WP_ENV to staging and start using gulp --production so I can simulate exactly what is heading to staging/production. This way I am never crossing my fingers and hoping it works on deploy.
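For reference, those toggles sit near the top of Sage's gulpfile as flags keyed off the CLI argument, roughly like this (paraphrased):

```js
// Paraphrased from near the top of Sage's gulpfile.js: each
// production tradeoff is a boolean derived from `--production`.
var argv = require('minimist')(process.argv.slice(2));

var enabled = {
  // Enable static asset revisioning when `--production`
  rev: argv.production,
  // Disable source maps when `--production`
  maps: !argv.production,
  // Fail styles task on error when `--production`
  failStyleTask: argv.production
};
```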
Options
1. Don't check compiled files into source control
I recommend doing this. When you say "obviously this sucks", you are actually saying "checking compiled files into source control sucks".
I've got an article here, Build Steps and Deployment, that explains how I approach deploying projects with a build process. I'm going to do another write-up specifically about deploying Sage.
2. Be clever about checking compiled files into source control
Use git hooks to run the --production build and commit the files automatically (see the sketch after this list).
Use a CI to automatically build and commit files.
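A minimal sketch of the hook approach (hypothetical; assumes dist is tracked and the hook is installed at .git/hooks/pre-commit):

```sh
#!/bin/sh
# Hypothetical pre-commit hook: rebuild production assets and stage
# them so every commit carries a production-ready dist/.
gulp --production || exit 1
git add dist
```

The tradeoff is that every commit then waits on a full production build.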
3. Your original idea
I don't recommend this. However, I was very clever when I built the gulpfile, so it can definitely facilitate shenanigans such as this. The variable path.dist controls the output of all the compiled files. So maybe around here: https://github.com/roots/sage/blob/master/gulpfile.js#L16
You can add:
```js
// If `--production` was passed, write all compiled output to
// dist-prod/ instead of the default dist/.
if (argv.production) {
  path.dist = "dist-prod/";
}
```
Which means: if --production is passed, everything is output to the dist-prod/ directory.
The gulpfile is just JavaScript. This is a huge advantage over the previous Grunt workflow. You can make it do anything you want.
This is helpful, and a Sage-specific workflow post would be great and I'm sure helpful to others. To be a bit clearer on my issue: when I run gulp --production, the site no longer renders in the development environment, because the /dist files are replaced with revved versions. The development environment throws 404s after you run gulp --production; when you run gulp again, the development environment is back in business.
Yes, when I said "this sucks" I meant nothing to do with Roots or Sage; it is I who sucks. Finding the optimal workflow is tough, and I spent a while honing my Roots workflow to the point where I was building multiple projects with multiple clients myself and it was all going smooth as silk. With Sage I am ready to take another step up.
Hey good looking people. Just thought I would tie this one off with a post on exactly what I am now doing in case anyone else has the same issue.
To recap: my issue was that I was doing theme-only deployment from local -> GitHub -> DeployHQ -> production. In many cases running gulp on the host server is not feasible, and I personally don't love the idea of relying on the production server running gulp, particularly when part of that process deletes all the assets and replaces them. Pushing up the exact files the server is going to serve just feels better to me.
This worked well in Roots but broke down in Sage. So here is what I did to get back on track with Sage.
Compile a separate set of assets for production:
In gulpfile.js, at line 8, add:

```js
// Route `--production` builds to a separate directory so the dev
// build in dist/ stays intact.
if (argv.production) {
  path.dist = "dist-prod/";
}
```
Add a new constant to the config for use in assets.php. In lib/config.php, at line 35, add:

```php
if (!defined('PROD_DIR')) {
  // Path to the PRODUCTION (non-dev) build directory for front-end assets
  define('PROD_DIR', '/dist-prod/');
}
```
Change the manifest path to the production dist. In lib/assets.php, change the directory the assets and manifest are resolved from:
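A sketch of what that change could look like (assumptions: Sage 8's asset_path() and a WP_ENV constant; the variable names are hypothetical):

```php
// Hypothetical sketch inside asset_path(): resolve both the asset URI
// and the manifest from the production build directory unless we are
// in the development environment.
$build_dir = (defined('WP_ENV') && WP_ENV === 'development') ? DIST_DIR : PROD_DIR;
$dist_path = get_template_directory_uri() . $build_dir;
$manifest_path = get_template_directory() . $build_dir . 'assets.json';
```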
Hey all, here is a quick follow-up. The above solution works but causes problems when you want to reference an image in the production environment using asset_path() from the Roots\Sage\Assets namespace.
There is a way to fix it, but it is really starting to add a lot of changes to the core Sage structure. So I decided a better solution is to just follow the original suggestion, which is:
remove "dist" from .gitignore so the production assets are tracked
run gulp --production
push to staging / live
run gulp to continue developing
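If you want to script that round trip, a small helper along these lines works (a sketch; the remote, branch, and commit message are assumptions):

```sh
#!/bin/sh
# Hypothetical helper: build for production, commit and push the
# revved assets, then restore the dev build locally.
set -e
gulp --production
git add dist
git commit -m "Production build"
git push origin master
gulp
```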
The main downside to this method, other than what was mentioned by @austin and @ben above, is that you can accidentally push without running gulp --production. This will simply cause the sourcemapped CSS files (and non-revved versions of the files) to be served from the production server.
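One way to guard against that slip is a pre-push hook that refuses to push unless the manifest looks revved. A hypothetical sketch, assuming gulp-rev's hex-hash suffixes appear in dist/assets.json:

```sh
#!/bin/sh
# Hypothetical .git/hooks/pre-push guard: bail out if dist/assets.json
# does not contain hashed filenames (i.e. the last build was a dev build).
if ! grep -Eq -- '-[0-9a-f]{8,}\.' dist/assets.json 2>/dev/null; then
  echo "dist/ does not look like a gulp --production build; aborting push." >&2
  exit 1
fi
```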
Do you use Bedrock at all? I still haven't decided how I want to track Bedrock per project. Do I want theme development and Bedrock all in one repo, or separate?
With DeployHQ you can set up scripts to run on the server to pull in dependencies for Bedrock, which is nice. However, I still haven't figured out whether, after the initial deployment of Bedrock and the project setup, I want to track it on a per-project basis, if that makes sense.
I don't use Bedrock. I use XAMPP for local dev and, in some cases, VVV. I don't really require the complete deployment stack for my projects; I usually manage theme deployments via git / DeployHQ and use other tools for data sync where required.
Some clients use Pantheon, so I use their workflow in that case. Others are on shared hosting.
So I've spent the better part of the day nailing down my Capistrano deployment (I use Sage and Bedrock).
Local dev: uses the bedrock-ansible local VM (don't tell my boss I'm using Ansible).
I then followed the instructions in the Capistrano webcast (which was super boss) and generally got things working, substituting capistrano/grunt with capistrano/gulp for obvious reasons.
One thing I ran into: capistrano/npm runs the install with the --production flag by default, which skips devDependencies, and gulp and its plugins live in Sage's devDependencies. So I removed that flag, and things worked pretty well, although my cap deploy takes a LONG time, because every single deploy has to build all of those dependencies.
I also had to make sure I had all the right deps on the server. My remote server (which is "staging" for all practical purposes; it's actually a local VM for now, for reasons that will make sense in a bit) is configured with a Chef cookbook, and getting it to install node/npm, gulp, and Bower system-wide was a hurdle (there are ways to tell capistrano/gulp where to find the gulp executable, but it was getting annoying). The reason my "remote" server is a local VM is that I was chugging along running cap deploy, and every time I hit a snag I went to update my Chef code to fix it; this was quite a bit easier to do with test-kitchen on a local VM than playing with real servers. I presume the npm build time will get better on a real box.
One thing that might be interesting (for someone whose Capistrano skills aren't over two years old) would be to figure out a way to have a "for production" version of the gulpfile. Obviously I don't need BrowserSync and some other items when compiling to push up to a server.
Edit to add: I'll be doing this on a real machine shortly. If you're curious, a cap deploy that had to build all the npm modules took 40 minutes... granted, my local VM only has 384 MB of RAM.
My next step is getting this all working with Travis CI, and getting all of the Capistrano stuff off my local machine, communicating only via source control.
Doing the entire npm build, even on a decently powered web server, was still not delightful (it was throwing uninformative errors, and was taking over 15 minutes to get that far).
My current method, based upon some suggestions I've seen scattered around various threads here, is this workflow:
Run gulp --production
git push to my remote
cap deploy
Re-run gulp to get my local stuff back.
It's not amazing and, as suggested above, a nice mix of pre- and post-commit hooks would automate it nicely (ensuring that the production-ready dist files end up in the repo that cap is building from, while cleaning up after myself), but for the time being it's Good Enough.
We've had discussions about moving packages out of devDependencies and into dependencies, but I don't really remember what we decided (probably not to do it). It's a slight violation of the reasoning behind that npm feature.
To speed up your deploy times, you should copy your previous release's node_modules/bower_components folders into each new release, so only changed packages need to be installed.