Johnpbloch/wordpress moved to a new configuration and WP goes missing


So it turns out Trellis is also slightly to blame here, we think.

Trellis copies the vendor folder to new releases.

It seems like there is an issue with the underlying package being updated, but it might be exposed by copying the vendor dir instead of just relying on a brand new composer install every time.

This was done for speed purposes and has never been an issue until now.


I believe this means that for the remote server (vs. local dev) these two steps should avoid downtime:

# clear composer cache
ansible "web:&production" -m command -a "composer clear-cache" -u web

# deploy (without copying `vendor` dir)
ansible-playbook deploy.yml -e env=production -e 'project_copy_folders=[]'

As @swalkinshaw linked, the project_copy_folders list contains only the vendor dir by default. If you haven’t customized project_copy_folders, you can define the list as empty for a single deploy, as in the command above. If you have customized project_copy_folders, you’ll need to temporarily edit it to omit vendor. Either adjustment has the same effect as @tmdk’s solution of commenting out the two related tasks.
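If you do maintain your own project_copy_folders, the temporary override might look something like this (a sketch only; the exact file where you override the variable depends on your setup):

```yaml
# Hypothetical group_vars override file -- location depends on your setup.
# Deploy once with no folders copied over from the previous release:
project_copy_folders: []

# After one successful deploy, restore your original list, for example:
# project_copy_folders:
#   - vendor
```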

Edit: Added quotes around project_copy_folders=[]


So again, for clarity (and forgive my thickness here)…

To fix local environment (development)

  1. Update composer.json to remove the "johnpbloch/wordpress" line
  2. Run composer clear-cache
  3. Re-add the "johnpbloch/wordpress" line (or run composer require johnpbloch/wordpress)
  4. Run composer update
  5. Commit updated composer files as normal

To fix deployed environments (staging, production)

  1. Clear composer cache ansible "web:&production" -m command -a "composer clear-cache" -u web
  2. Deploy (without copying vendor dir) ansible-playbook deploy.yml -e env=production -e 'project_copy_folders=[]'

And everything should work fine going forward after that?

Is that correct?


@MWDelaney No guarantees, but this appears to work in my tests.

Local dev

# run commands in local machine Bedrock `site` dir

composer remove johnpbloch/wordpress

composer clear-cache

composer require johnpbloch/wordpress:4.7.3

# git add..., git commit..., git push...

Remote server

# run commands in local machine `trellis` dir
# edit `production` to be your <environment>
# edit `` to be your site name

ansible "web:&production" -m command -a "composer clear-cache" -u web

ansible-playbook deploy.yml -e env=production -e 'project_copy_folders=[]'

If you have customized project_copy_folders, just temporarily remove vendor from your project_copy_folders list and run deploy.yml (instead of using -e 'project_copy_folders=[]' as in the second command above).

Edit: Added quotes around project_copy_folders=[]


Awesome. It seems to work in my tests, too. Thanks for humoring my specificity :slight_smile:


Does not work in my case. The second command returns: `no matches found: project_copy_folders=[]`
I tried removing “vendor” but then the deploy fails:

rmtree failed: [Errno 13] Permission denied: '/srv/www/ releases/20170201151109/vendor/squizlabs/php_codesniffer/CodeSniffer/Reporting.php'


@pixeline Thanks for reporting that. I suspect your shell is trying to use the square brackets [] for globbing/pattern matching. I’ve added single quotes around project_copy_folders=[] in the example commands above to hopefully avoid the problem. Could you try this:

ansible-playbook deploy.yml -e env=production -e 'project_copy_folders=[]'
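For anyone curious why the quotes matter: without them, the shell may treat [] as a glob pattern before ansible-playbook ever sees it. zsh aborts with “no matches found”, while bash silently expands the pattern if a matching file happens to exist. A minimal demonstration (run in a throwaway directory):

```shell
# Create a file whose name happens to match the glob pattern project_copy_folders=[x]
demo=$(mktemp -d) && cd "$demo"
touch 'project_copy_folders=x'

unquoted=$(echo project_copy_folders=[x])   # bash expands the glob against the file
quoted=$(echo 'project_copy_folders=[x]')   # quotes keep the brackets literal

echo "unquoted: $unquoted"   # -> unquoted: project_copy_folders=x
echo "quoted:   $quoted"     # -> quoted:   project_copy_folders=[x]
```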


Just had two sites with this issue. And though I decided to kill one of the projects as I no longer needed it this worked for the other one just fine. Just some other boxes left now. Glad to know I was not losing it AND that there is a solution!


WTF?! I just wanted to deploy a simple bugfix today and then this…

Thanks guys! You saved my life!


Adding single quotes made it work perfectly, thank you!


Thank you everyone for solving this mystery for me. It was driving me crazy – the fix from @mwdelaney worked for me!


It worked for me locally, but didn’t help for staging and production.

What helped was extremely rough:

  • cleaning out the contents of the releases folder (this effectively kills the site until you re-deploy it, and you lose the ability to roll back to previous releases because you delete them, so be aware)
  • deploying afresh right after.

It’s the dumbest, riskiest cowboy-style solution out there: it will make the site unavailable for a few minutes if re-deploying goes well, and for an unpredictable period if re-deploying fails. But it worked for me.


Just tested this on a multi-site install and confirm it works too.


So, instead of removing releases and making the site unavailable, could it work to fix things manually inside the last release folder (clearing the composer cache or reinstalling the wp package) and then re-try the deploy?


Definitely, there should be a better way.

I ran ansible "web:&<environment>" -m command -a "composer clear-cache" -u web and it finished successfully, but it didn’t help (same “two packages in the same folder” error), so how would “manually fixing the composer cache inside the last release” be any different?


I can’t test any more because I only have a staging environment and nothing in production yet, so I fixed it by removing the entire folder from the server and running the deploy again. But if you are in production and don’t want to make the site unavailable, I’d try this:

  1. go inside the server
  2. go to last release folder
  3. make a new copy (e.g. /<releasedate>_fix/) and work on it
  4. remove the vendor folder (I fixed it this way locally) and/or the wp folder
  5. run composer install
  6. rename this new release folder back to the original name
  7. make deploy again

Try this and the site shouldn’t go down, with no risk of leaving it down because the deploy errored out :slight_smile:
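The steps above might look like the following (a sketch simulated in a temp dir; the paths and release timestamp are placeholders, and on a real server you would run composer install where noted):

```shell
# Simulated releases dir; on a real server this would be /srv/www/<site>/releases
releases=$(mktemp -d)
mkdir -p "$releases/20170201151109/vendor" "$releases/20170201151109/wp"
cd "$releases"

cp -a 20170201151109 20170201151109_fix                  # step 3: copy the last release
rm -rf 20170201151109_fix/vendor 20170201151109_fix/wp   # step 4: drop conflicting dirs
# step 5: (cd 20170201151109_fix && composer install)    # requires the real project
rm -rf 20170201151109                                    # step 6: swap the fixed copy in...
mv 20170201151109_fix 20170201151109                     # ...under the original name
```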


Staging and production should be functionally identical; if this process works on one it should work on the other.

If you (you, anyone reading this) are having difficulty with the process @fullyint explained above, make sure you explain how your setup might differ from the vanilla Trellis setup outlined in the Trellis docs.


Does anyone know if an existing site will be affected if the server is provisioned prior to applying the fix suggested by @fullyint?


Answering my own question: I did not have any problems running the wordpress-setup and letsencrypt tags.


Hi everyone! We just got tangled in this too.

Since we have multiple language packs, we had to commit a clean composer.{json,lock} to each machine.

We did this for WordPress core, for the core language packs, and for plugins that had language packs. Meaning, for each path that was shared among packages, we had to commit the cleanup, and then commit the packages back again. This seemed to work fine for staging, so we repeated the steps for production.

It’s working flawlessly now. Thanks everyone for your contributions.