Sage and augmenting my workflow

I have launched around 7 production sites using Roots in the last year or so. My workflow has always been smooth, and now, with the launch of Sage, I am ready to update it and advance a bit.

My old flow was theme-only. Third-party plugins were installed via the normal WordPress plugin installer. The database was kept in sync using WP Migrate DB Pro until the keys were handed to the client for testing and content development; at that point, any database configuration changes had to be duplicated locally and then on staging. Basically, the point is that I kept the theme under version control but nothing else.

So I would develop locally with git, push to GitHub, and deploy with DeployHQ. This worked really well because it didn’t matter what hosting the production server was on: I didn’t have to install npm or Bower on the server, and I could push to a totally different staging server via DeployHQ.

So the first thing I have noticed in Sage is gulp --production. This removes and rebuilds the files in the dist/ folder, appends the cache-breaker / version number to each filename, and records the mapping in assets.json. So the first truth I see is that you don’t run gulp --production locally. However, my original workflow required me to keep the production assets under version control.
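For context, the manifest that gulp --production writes is just a JSON map from original filenames to their revved equivalents. Here is a minimal sketch of the lookup assets.php performs; the hashes and filenames are made up for illustration:

```javascript
// dist/assets.json after `gulp --production` looks something like this
// (hashes are hypothetical):
const manifest = {
  "main.css": "main-d41d8cd9.css",
  "main.js": "main-a2b4c6d8.js"
};

// The PHP side essentially does this lookup: serve the revved name when the
// manifest has one, and fall back to the plain name otherwise (dev builds
// write no revved files, which is why dev 404s after a production build).
function assetPath(file) {
  return "/dist/" + (manifest[file] || file);
}

console.log(assetPath("main.css")); // → /dist/main-d41d8cd9.css
console.log(assetPath("logo.png")); // → /dist/logo.png
```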

So in order to keep my old workflow, after removing /dist from .gitignore, I would now have to…

  1. Develop
  2. Run $ gulp --production
  3. Push
  4. Run $ gulp to get back to the development version of the assets
  5. Continue

Obviously this sucks, so I’m just wondering if there is a better way. One thought is to have gulp build to a different folder (like dist-dev) and tweak assets.php accordingly. That way I can run gulp --production, push, and still have working files for the development environment.

I am interested in anybody’s thoughts.

See @austin’s reply here:

Well, this isn’t necessarily always true. The --production flag tries to do as little as possible differently from the dev build. In fact, it uses all the same exact tasks. The only things it does differently are:

  • Revs filenames
  • Disables sourcemaps
  • Fails the build on a Less/Sass syntax error rather than ignoring it

Notice how none of those change the content of your source files. They are “meta” tasks that enable and disable tradeoffs useful in development. When I have a project that is nearing completion, I will actually switch my WP_ENV to staging and start using gulp --production so I can start simulating exactly what is heading to staging/production. This way I am never crossing my fingers and hoping it works on deploy.

Options

1. Don’t check compiled files into source control

I recommend doing this. When you say “obviously this sucks”, you are actually saying “checking compiled files into source control sucks”.

I’ve got an article here: http://austinpray.com/ops/2015/01/15/build-steps-and-deployment.html that explains how I approach deploying projects with a build process. I’m going to do another write-up specifically about deploying Sage.

2. Be clever about checking compiled files into source control

I still don’t recommend committing compiled files to source control. But I know in some situations it is the path of least resistance. So if you are going to do it, be clever about it.

  • Use git hooks to run the --production build and commit the files automatically.
  • Use a CI to automatically build and commit files.

3. Your original idea

I don’t recommend this. However: I was very clever when I built the gulpfile. So it can definitely facilitate shenanigans such as this. The variable path.dist controls the output of all the compiled files. So maybe around here:
https://github.com/roots/sage/blob/master/gulpfile.js#L16

You can add:


if (argv.production) {
  path.dist = "dist-prod/";
}

Which means

if --production then output everything to dist-prod/ directory

The gulpfile is just JavaScript. This is a huge advantage over the previous Grunt workflow. You can make it do anything you want.


This is helpful, and a Sage-specific workflow post would be great and I’m sure helpful to others. To be a bit clearer about my issue: when I run gulp --production, the site no longer renders in the development environment, because the /dist files are replaced with revved versions. The development environment throws 404s after you run gulp --production. When you run gulp again, the development environment is back in business.

Yes, when I said this sucks, I meant nothing to do with Roots or Sage. It is I who sucks. Finding the optimal workflow is tough, and I spent a while honing my Roots workflow to the point where I was building multiple projects with multiple clients myself and it was all going smooth as silk. With Sage I am ready to take another step up.

Your help is appreciated!

/me slow claps…


This is basically what I am looking for. Would love to see a sage-specific writeup at some point.

Hey good looking people. Just thought I would tie this one off with a post on exactly what I am now doing in case anyone else has the same issue.

To recap, my issue was that I was doing theme-only deployment from local -> GitHub -> DeployHQ -> production. In many cases running gulp on the host server is not feasible. I also personally don’t love the idea of relying on the production server running gulp, particularly when part of that process is to delete all the assets and replace them. Pushing up the exact files the server is going to serve just feels better to me.

This worked well in Roots but broke down in Sage. So here is what I did to get back on track with Sage.

Compile a separate set of assets for production:
gulpfile.js line 8 add

if(argv.production) {
	path.dist = "dist-prod/";
}

Add a new constant to config for use in assets.php
lib/config.php line 35 add

if (!defined('PROD_DIR')) {
	// Path to the PRODUCTION (non-dev) build directory for front-end assets
	define('PROD_DIR', '/dist-prod/');
}

Change manifest path to production dist
lib/assets.php change:

  if (empty($manifest)) {
-    $manifest_path = get_template_directory() . DIST_DIR . 'assets.json';
+    $manifest_path = get_template_directory() . PROD_DIR . 'assets.json';
     $manifest = new JsonManifest($manifest_path);
   }

Return the prod path in production
lib/assets.php line 68 change

-    return $dist_path . $directory . $manifest->get()[$file];
+    return $prod_path . $directory . $manifest->get()[$file];

So now I can develop locally, and when I need to push up to staging I run gulp --production similar to how I used to run grunt build.

Again the downside here is that compiled assets are being tracked in git.


Hey all. So here is a quick follow-up. The above solution works, but it causes problems when you want to reference an image in the production environment using asset_path() from the Roots\Sage\Assets namespace.

There is a way to fix it, but it is really starting to add a lot of changes to the core Sage structure. So I decided a better solution is to just follow the original suggestion, which is…

  • remove ‘dist’ from gitignore so the production assets are tracked
  • run gulp --production
  • push to staging / live
  • run gulp to continue developing
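One way to make those steps harder to forget is to chain them behind a single command, for example an npm script. This is only a sketch; the script name and commit message are made up, and it assumes dist is tracked per the first bullet:

```json
{
  "scripts": {
    "deploy": "gulp --production && git add dist && git commit -m 'Production build' && git push && gulp"
  }
}
```

Because of the && chaining, npm run deploy stops at the first failing step, so a broken production build never gets pushed, and the final plain gulp restores the dev assets.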

The main downside to this method, other than what was mentioned by @austin and @ben above, is that you can accidentally push without running gulp --production. This will simply cause the sourcemapped CSS files (and non-revved versions of the files) to be served from the production server.

I think this is still a topic to be explored.

@stueynet

Thanks for this. Definitely of interest to me.

Do you use Bedrock at all? I still haven’t decided on how I want to track Bedrock per project. Do I want theme development and Bedrock all in one repo, or separate?

With DeployHQ you can set up scripts to run on the server to pull in dependencies for Bedrock, which is nice. However, I still haven’t figured out whether, after the initial deployment of Bedrock and setting up the project, I want to track it on a per-project basis, if that makes sense.

I don’t use Bedrock. I use XAMPP for local dev and in some cases VVV. I don’t really require the complete deployment stack for my projects; usually I manage theme deployments via git / DeployHQ and use other tools for data sync where required.

Some clients use Pantheon, so I use their workflow in that case. Others are on shared hosting.

One repo

FWIW, Bedrock is awesome to use even without Capistrano for deployment

Gonna spend tomorrow on Bedrock

So I’ve spent the better part of the day nailing down my Capistrano deployment (I use Sage and Bedrock).

Local dev - uses the bedrock-ansible local VM (don’t tell my boss I’m using Ansible :wink:)

I then followed the instructions in the Capistrano webcast (which was super boss) and generally got things working, substituting capistrano/grunt with capistrano/gulp for obvious reasons.

One thing I ran into: capistrano/npm builds with the --production flag by default, and gulp has a ton of devDependencies. So I removed that flag, and things worked pretty well, although my cap deploy takes a LONG time, because every single deploy has to build all of those dependencies.

I also had to make sure I had all the right deps on the server. My remote server (which is “staging” for all practical purposes; it’s actually a local VM for now, for reasons that will make sense in a bit) is configured with a Chef cookbook, and getting it to install node/npm, gulp, and Bower system-wide was a hurdle (there are ways to tell capistrano/gulp where to find the gulp executable, but it was getting annoying). The reason my “remote” server is a local VM is that I was chugging along running cap deploy, and every time I hit a snag I went to update my Chef code to fix it; this was quite a bit easier to do using test-kitchen on a local VM than playing with real servers. I presume the npm build time will get better on a real box.

One thing that might be interesting (for someone whose Capistrano skills aren’t over two years old) would be to figure out a way to have a “for production” version of the gulpfile. Obviously I don’t need BrowserSync and some other items when compiling to push up to a server.

Edit to add: I’ll be doing this on a real machine shortly. If you’re curious, a cap deploy that had to build all the npm modules took 40 minutes… granted, my local VM only has 384 MB of RAM :wink:

My next step is getting this all working with TravisCI, and getting all of the capistrano stuff off my local machine and communicating only via source control.

An update, in case anyone was wondering:

Doing the entire npm build, even on a decently powered web server, was still not delightful (it was throwing uninformative errors and taking over 15 minutes to get that far).

My current method, based upon some suggestions I’ve seen scattered around various threads here, is this workflow:

  1. Run gulp --production
  2. git push to my remote
  3. cap deploy
  4. Re-run gulp to get my local stuff back.

It’s not amazing, and as suggested above, a nice mix of pre- and post-commit hooks would automate it nicely (ensuring the production-ready dist files end up in the repo that cap is building from, while still cleaning up after myself), but for the time being it’s Good Enough.

Edited to add - here’s some information about using Bower and NPM on TravisCI, which will probably prove helpful.

We’ve had discussions about moving packages out of devDependencies and into dependencies, but I don’t really remember what we decided (probably not to do it). It’s a slight violation of the reasoning behind that npm feature.

To speed up your deploy times, you should copy your previous node_modules/bower_components folders into your new releases.

See https://github.com/capistrano/copy-files
