Trellis with nginx microcaching and PHP configuration is already pretty fast.
However, with lots of fully dynamic pages, as is the case with WooCommerce, performance can suffer noticeably.
Are there any plugins or PHP configuration tweaks for speeding up the dynamic part (in the way The SEO Framework plugin helps with SEO)?
memcached is already provisioned by Trellis; can it be used, together with opcache, to speed up WordPress/WooCommerce?
We use Redis as the object cache and also needed to increase the size of the opcache to get good speeds on bigger WooCommerce sites.
Thanks! What plugin(s) are you using for redis/memcached?
For Redis, the official Redis plugin is recommended on some blogs.
Being an official plugin is probably a good thing?
In our projects we use wordpress-pecl-memcached-object-cache to take advantage of memcached.
You only need to add object-cache.php to your Bedrock app folder.
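If it helps to see it concretely, something like this should do it (a sketch only: paths assume a standard Bedrock layout with web/app as the content directory, and the source path is just a placeholder):
# copy the drop-in into Bedrock's content dir (source path is wherever you keep the plugin)
cp /path/to/wordpress-pecl-memcached-object-cache/object-cache.php web/app/object-cache.php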
How would you install this “drop-in” with Trellis? Are you using Composer to install it as a Bedrock dependency?
We are using the official wp-redis plugin. Here is a step by step.
This was all set up a while ago, so apologies if I miss something!
We have been using this in production for 3 years. When it’s enabled along with the nginx cache it kicks butt. I’ve tried to cover all the caching and performance-related changes and why we made them.
We added the geerlingguy.redis role in galaxy.yml
- name: redis
  src: geerlingguy.redis
  version: 1.5.1
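If the role isn’t already present locally, the usual Ansible Galaxy install against Trellis’s galaxy.yml should pull it down:
# install the roles listed in galaxy.yml
ansible-galaxy install -r galaxy.yml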
Add it in dev.yml and server.yml
- { role: redis, tags: [redis] }
Configured it in group_vars/all.yml
#Redis
redis_maxmemory: 256m
redis_maxmemory_policy: "allkeys-lru"
We normally have multiple sites on a VPS with 4 GB of RAM and 2 cores; it can be up to 20. We like having more sites per server because when one does need to burst, it has more capacity available. I know a lot of Trellis sites are one per VPS, so the config for memory etc. could stay at the defaults. With Redis, most requests don’t hit the CPU. The maxmemory is probably high and can be tuned; we didn’t want it to be too small, though, and it currently works.
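If you want to sanity-check that maxmemory figure on a running server, redis-cli can report actual usage and evictions (stock Redis commands, nothing Trellis-specific):
# current/peak memory usage and the policy actually in effect
redis-cli info memory | grep -E 'used_memory_human|used_memory_peak_human|maxmemory_human|maxmemory_policy'
# a non-zero evicted_keys count means the cache is under memory pressure
redis-cli info stats | grep evicted_keys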
In roles/php/defaults/main.yml we needed to add to php_extensions_default:
php-igbinary: "{{ apt_package_state }}"
php-redis: "{{ apt_package_state }}"
In Bedrock we needed to add a unique cache key salt to the config so sites don’t conflict.
So in config/application.php:
/**
* Unique cache key salt for memcached/redis
*/
define('WP_CACHE_KEY_SALT', getenv('DB_NAME'));
In composer.json
"wpackagist-plugin/wp-redis": "^1",
Then you need to commit the most recent version of object-cache.php from the wp-redis plugin to the app directory, as per its docs.
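With Bedrock paths that usually boils down to something like the following after composer install (a sketch; check the wp-redis docs for their current recommendation):
# copy the drop-in from the installed plugin into Bedrock's content directory, then commit it
cp web/app/plugins/wp-redis/object-cache.php web/app/object-cache.php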
Aside from this, we also needed to add exclusions to the nginx full-page cache to ensure WooCommerce pages didn’t get cached and, in particular, that people didn’t get weird cart-sharing issues.
In Trellis - roles/wordpress-setup/defaults/main.yml
# Fastcgi cache params
nginx_cache_duration: 30s
nginx_skip_cache_uri: /wp-admin/|/wp-json/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml|/store.*|/cart.*|/my-account.*|/checkout.*|/addons.*
nginx_skip_cache_cookie: comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in|woocommerce_cart_hash|woocommerce_items_in_cart|wp_woocommerce_session_
If you compare these to the current default lines, you can see the WooCommerce-specific additions.
In the same file, we also set more PHP-FPM max children by default because our VPS has more RAM; there’s a rough sizing check after the values below.
# PHP FPM
php_fpm_pm_max_children: 40
php_fpm_pm_start_servers: 10
php_fpm_pm_min_spare_servers: 10
php_fpm_pm_max_spare_servers: 20
php_fpm_pm_max_requests: 500
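As a rough sanity check (these numbers are mine, not from the config above): pm.max_children times the average PHP-FPM worker size has to fit in RAM alongside nginx, Redis and MariaDB, so 40 workers at roughly 60 to 80 MB each is already around 2.4 to 3.2 GB. Something like this estimates the average worker size on a running server:
# average resident memory (MB) of PHP-FPM pool workers; their process title is usually "php-fpm: pool <name>"
ps -o rss= -p "$(pgrep -d, -f 'php-fpm: pool')" | awk '{sum+=$1; n++} END {if (n) printf "%.0f MB average across %d workers\n", sum/n/1024, n}'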
We also found that the opcache was way too small for the sites. You can use a bunch of tools on the running server to see how many files are cached and how full the cache is, to check whether all your PHP is actually being cached; there’s a quick way to do that after the defaults below. The defaults in roles/php/defaults/main.yml are:
php_opcache_max_accelerated_files: 4000
php_opcache_memory_consumption: 128
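One quick and admittedly crude way to check is to drop a temporary PHP file somewhere it will be served through PHP-FPM (the CLI has its own opcache) and look at the stats it reports, roughly like this:
<?php
// temporary file, served through PHP-FPM; delete it once you've looked at the numbers
header('Content-Type: text/plain');
$status = opcache_get_status(false); // false = skip the per-script list
print_r($status['opcache_statistics']); // hit rate, number of cached scripts, etc.
print_r($status['memory_usage']);       // used vs free opcache memory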
We added a php.yml to each server, e.g. group_vars/production/php.yml, and tuned it. Our default is:
php_opcache_max_accelerated_files: 100000
php_opcache_memory_consumption: 512
It’s a pretty huge difference in terms of time to first byte and performance in Lighthouse tests, etc.
You may have some issues with Redis running in your Vagrant environment; we normally just enable the object cache file when we go to staging/production. We could fix this, but haven’t. Sorry!
Amazing! Thanks for this great tutorial.
What do you think about using APCu for caching?
Can WordPress caching plugins make use of APCu, too?
I’d be interested in trying out APCu and comparing it. We used Redis over Memcached mainly because it was persistent and more modern and we could have dedicated/shared redis instances where needed with our current setup.
We have some instances that are load balanced and also have separate SQL servers, so Redis seemed a better fit.
It looks like you could try one of the drop-in APCu object caches for WordPress, add in the module, and it would work. The approach would be similar to the Redis solution.
So give it a go and see what it looks like.
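If anyone tries it, the Trellis side would presumably just be the APCu extension plus the drop-in, something like this in group_vars (treat the exact variable name as an assumption on my part; double-check it against your Trellis version):
# e.g. group_vars/all/php.yml
php_extensions_custom:
  php-apcu: "{{ apt_package_state }}"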
The main thing we have found in this process is that you can have a generic setup that works well, but individual sites often need to be tuned.
I forgot to mention that we do have a big shared DB instance with its own config and heaps more RAM to keep the tables in memory, and also Trellis sites that skip MariaDB so it doesn’t even run on the web server for bigger WooCommerce sites… or you could use RDS!
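On the Bedrock side, pointing a site at a separate database server is basically just the DB_HOST entry in .env (or the equivalent Trellis env vars that generate it); the values here are placeholders:
# site .env: Bedrock reads DB_HOST alongside the usual DB_* settings
DB_NAME='example_production'
DB_USER='example_user'
DB_PASSWORD='example_password'
DB_HOST='db.internal.example.com'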
Hi everyone, I’m super interested in this topic as well. I’ve just set up a WooCommerce store with the full roots.io stack and am looking for ways to speed it up.
Thanks a lot for the detailed guide @nichestudio!
Unfortunately, all I can share for now is that I tested Redis with both available object cache plugins and still hover around 500 ms for the initial HTML response without the fastcgi cache… Everything seems to be working fine, with almost 100% cache hits. I hope to report back once I find something that works for me.
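For anyone wanting to compare numbers, time to first byte can be measured from the outside with curl’s timing output (the URL is a placeholder):
# time_starttransfer is effectively time to first byte, in seconds
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s\n' https://example.com/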