Best way to cache & compress with Roots Stack

I am trying to find the best possible way to set up maximum compression and caching on the production site with my Roots stack. From the documentation (which does have an article on caching), I can only see how to enable FastCGI caching.

There are a couple of things I don’t understand from that, and I can’t find useful information about them on the web:

  1. Why is the default caching duration so low? I don’t understand what the 30 seconds really means. Is this geared more towards websites with a lot of traffic, where there are hundreds of users surfing at a given time? What about less frequently visited websites with 1000 unique visitors per month?

  2. I don’t understand what exactly is being cached with FastCGI caching. Is there still a need for WordPress caching plugins?

  3. Using the Roots stack, is gzip compression enabled by default on production sites? If not, how do I enable it?

  4. What other things must/can I consider regarding performance (excluding the actual code itself)?

It would be great if more information on performance were available for the Roots stack (I also don’t mind paying for ebooks or screencasts, if they are available).

Here goes:

  1. The built-in caching with Trellis is a “micro-cache” as opposed to a long-term/duration cache. The default cache of 30 seconds is exactly that: a cache that is valid for 30 seconds (see the config sketch after this list if you want to adjust it). Yes, this is generally geared towards handling a large volume of concurrent users. From some benchmarking, I’ve found that a $5 DO server can handle around 1,000 concurrent users with a fairly average WordPress site (very scientific measurement).
  2. Unless you have some very specific requirements, probably not. If you have a low volume of visitors, the FastCGI cache will still work. If you visit a page and the cache has expired, you will still be served the cached content and the cache will be rebuilt in the background for the next visitor. You can observe this by visiting a Trellis-based site with the cache enabled and looking at the response headers. You should see fastcgi-cache: HIT, MISS, or EXPIRED. The Roots stack is pretty amazingly optimised, so between the great performance you get out of the server from Trellis and the FastCGI cache, you should be all good on the perf front, even if you miss the cache.
  3. Yes, gzip is enabled. Again, you can check the headers – you should see content-encoding: gzip
  4. The only thing I would recommend is figuring out a good image optimisation strategy for images uploaded through WordPress (theme images will be optimised by Sage) and setting an expiry date for various file types (Help setting expiry date for static resources (images) in Trellis?).
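For point 1, this is roughly where that duration lives. It’s only a sketch, assuming a reasonably recent Trellis where FastCGI caching is configured per site under a cache key in group_vars/<environment>/wordpress_sites.yml; the exact key names can differ between Trellis versions, so check the Trellis FastCGI caching docs before copying it:

# group_vars/production/wordpress_sites.yml (sketch; verify the keys against your Trellis version)
wordpress_sites:
  example.com:
    # ...existing site settings...
    cache:
      enabled: true
      # The micro-cache duration discussed above. Raising it means visitors may see
      # stale content for up to that long, since there is no automatic invalidation.
      duration: 30s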

Also, worth noting that if you enable SSL, you will also serve assets concurrently via HTTP/2. Demo and more info: https://http2.akamai.com/demo.


@nathobson did a good job; a few additional points:

  1. The cache duration is “low” because there’s no intelligent expiry; it’s just a set time. You could raise it if you want, but if you update a post (for example), the cache won’t be invalidated and users won’t see the update for however long you set the time to. 30s is a tradeoff where it’s “real time” enough to not really matter for most people.
  2. No, you don’t need any plugins unless you want to use permanent caching with explicit expiry. WP caching plugins will usually handle intelligently invalidating any necessary caches when you update a resource.
  3. Gzip compression is enabled, but you could pre-gzip assets, which removes the burden of Nginx doing it on the server. See https://github.com/roots/trellis/issues/864

The other thing Trellis doesn’t do out of the box is enable browser caching for static assets. You can definitely do this yourself though.


Thanks a ton @nathobson and @swalkinshaw for your amazing answers!

So, if I understand correctly, once deployed, both my main.css and my main.js will be minified and gzipped automatically.

Thanks also for pointing out the concurrent serving through HTTP/2. I was not aware of that.

Re 4. (image optimization): I tried doing this via imgix, as recommended in the ebook, and fell in love with it immediately. So through a combination of imgix and the <picture> element I deliver compressed, ideally sized images through a CDN. I don’t see what else I can do in this regard.

EDIT: I just found this in the imgix documentation (I had overlooked it before):

URLs can be given an expiration date via an expires parameter that takes a UNIX timestamp in the query string. For example: ?expires=1477789261

So apparently a UNIX expiration timestamp can be added to image URLs. I can see how this could easily be automated and optimized with Blade.

@swalkinshaw: how can I determine whether I want permanent caching? I am confused about this because, before working with this stack, it was basically the only caching I knew, and I thought it was a must-do for speed optimization. Now I am not so sure anymore whether I even need it.

Hey.

Glad that helps clear things up :slight_smile:

Yup, but make sure you have a build process in place. In Trellis, you would use the build-before deploy hook, which is run for you whenever you deploy. See the trellis/deploy-hooks/build-before.yml file; it has a boilerplate set of tasks included but commented out.

Sounds like you’re doing well with image optimisation. I’m actually yet to implement Imgix or Cloudinary, but they’re definitely on my radar. Images are often where page size can creep up, so they’re always worth keeping an eye on.

Scott might have more to add on this one, but unless you have a particular need, I would start with the default Trellis caching and then determine from there whether you need something different. As I mentioned before, servers provisioned with Trellis are pretty amazingly optimised. The main indicator would be the type of content you have. If you’re not changing your content very frequently, maybe a more persistent cache would work better for you. However, if you have content that changes a lot, the micro-caching approach is very effective.


Thanks!

I have had a look at build-before.yml. It doesn’t seem to include any particular step for gzipping/minification/compression. Is that implied in the automatic build process, or does it need to be added manually in the same manner? It looks like this for me, which I think is the default:

- name: Install npm dependencies
  command: yarn
  connection: local
  args:
    chdir: "{{ project_local_path }}/web/app/themes/manalyse"

- name: Install Composer dependencies
  command: composer install --no-ansi --no-dev --no-interaction --no-progress --optimize-autoloader --no-scripts
  args:
    chdir: "{{ deploy_helper.new_release_path }}/web/app/themes/manalyse"

- name: Compile assets for production
  command: yarn run build:production
  connection: local
  args:
    chdir: "{{ project_local_path }}/web/app/themes/manalyse"

- name: Copy production assets
  synchronize:
    src: "{{ project_local_path }}/web/app/themes/manalyse/dist"
    dest: "{{ deploy_helper.new_release_path }}/web/app/themes/manalyse"
    group: no
    owner: no
    rsync_opts: --chmod=Du=rwx,--chmod=Dg=rx,--chmod=Do=rx,--chmod=Fu=rw,--chmod=Fg=r,--chmod=Fo=r

Gzipping is handled by the server automatically. Or, as Scott mentioned, you could pre-gzip assets (sketch below).

The build-before code you pasted above includes running yarn run build:production, which runs Sage’s build process (Webpack). That includes minification, etc. More info on the Sage build commands: https://roots.io/sage/docs/theme-development-and-building/#available-build-commands.
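If you did want to try the pre-gzip route, an extra task along these lines could go in build-before.yml, before the “Copy production assets” step. This is only a sketch, not something Trellis ships: the file patterns and gzip flags are illustrative, and the server still has to be configured to serve the resulting .gz files (see the Trellis issue Scott linked above).

# Hypothetical extra build-before task: creates main.css.gz next to main.css in dist/
# so Nginx can serve the precompressed file instead of gzipping it on every request.
- name: Pre-gzip compiled assets
  shell: find dist -type f \( -name '*.css' -o -name '*.js' -o -name '*.svg' \) -exec gzip -kf9 {} +
  connection: local
  args:
    chdir: "{{ project_local_path }}/web/app/themes/manalyse"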


Hey, resurrecting an old thread to ask how we can enable caching for static assets?

I might be missing something, but surely static assets are the things you’d want cached for the longest, since things like images never change(?)

And on that note, is it possible to set different durations for different things? E.g. cache scripts and styles for 1hr but images for several days?

Static assets are cached by the clients (if they want that).
Nginx should already be able to serve static assets about as fast as is realistically possible (maybe precompress them with gzip?).

Sorry, I had a brainfart haha. Was thinking of client-side caching.

In case anyone else winds up here from googling how to enable caching: FastCGI caching (what this thread is about) is server-side caching. To enable client-side caching, add h5bp_expires_enabled: true to your group_vars/<environment>/main.yml.
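For anyone who wants to see that in context, it’s a one-line addition (shown as a sketch; the rest of your main.yml will differ), and you’ll need to re-provision (re-run server.yml against that environment) for Nginx to pick it up:

# group_vars/production/main.yml
# Turns on the H5BP expires rules so Nginx sends browser-caching headers for static assets.
h5bp_expires_enabled: true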
