This worked for me. Thank you!
@nathobson Just to be clear, commenting out those tasks fixed the deploys for me.
I should mention that I deploy through the Trellis Ansible playbook, so my fix certainly does not apply when you deploy using Capistrano.
So it turns out Trellis is also slightly to blame here, we think.
Trellis copies the `vendor` folder to new releases. See https://github.com/roots/trellis/blob/8666765785aa799cb6828dccf9c6f846eb975cba/roles/deploy/defaults/main.yml#L11-L15
It seems like there is an issue with the underlying package being updated, but it might be exposed by copying the `vendor` dir instead of just relying on a brand new `composer install` every time.
This was done for speed purposes and has never been an issue until now.
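For reference, the linked defaults boil down to something like this (a sketch of `roles/deploy/defaults/main.yml` at that commit; as noted below, `vendor` is the only entry by default):

```
project_copy_folders:
  - vendor
```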
I believe this means that for the remote server (vs. local dev) these two steps should avoid downtime:
```
# clear composer cache
ansible "web:&production" -m command -a "composer clear-cache" -u web

# deploy (without copying `vendor` dir)
ansible-playbook deploy.yml -e env=production -e site=example.com -e 'project_copy_folders='
```
As @swalkinshaw linked, the `project_copy_folders` list has only the `vendor` dir by default. If you haven’t customized `project_copy_folders`, you can define the list as empty for one deploy, like in the command above. If you have customized `project_copy_folders`, you’ll need to temporarily edit it to omit `vendor`. These adjustments to `project_copy_folders` have the same effect as @tmdk’s solution of commenting out the two related tasks.
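If it helps to see that temporary edit as config, it would look roughly like this in whichever group_vars file you defined the variable (a sketch; the exact file location depends on your own customization):

```
# temporarily remove the `- vendor` line, keeping any other folders you copy;
# if `vendor` was the only entry, an empty list also works
project_copy_folders: []
```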
Edit: Added quotes around `project_copy_folders=` in the commands above.
So again, for clarity (and forgive my thickness here)…
To fix local environment (development)
- Update composer.json to remove the `johnpbloch/wordpress` package
- Re-add the package (`composer require johnpbloch/wordpress`)
- Commit updated composer files as normal
To fix deployed environments (staging, production)
- Clear composer cache
```
ansible "web:&production" -m command -a "composer clear-cache" -u web
```
- Deploy (without copying the `vendor` dir)
```
ansible-playbook deploy.yml -e env=production -e site=example.com -e 'project_copy_folders='
```
And everything should work fine going forward after that?
Is that correct?
@MWDelaney No guarantees, but this appears to work in my tests.
```
# run commands in local machine Bedrock `site` dir
composer remove johnpbloch/wordpress
composer clear-cache
composer require johnpbloch/wordpress:4.7.3
# git add..., git commit..., git push...
```
```
# run commands in local machine `trellis` dir
# edit `production` to be your <environment>
# edit `example.com` to be your site name
ansible "web:&production" -m command -a "composer clear-cache" -u web
ansible-playbook deploy.yml -e env=production -e site=example.com -e 'project_copy_folders='
```
If you have customized `project_copy_folders`, just temporarily remove `vendor` from your `project_copy_folders` and run `deploy.yml` (instead of using `-e 'project_copy_folders='` in the second command above).
Edit: Added quotes around `project_copy_folders=` in the commands above.
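One optional sanity check before pushing: from the Bedrock site dir, something like this should confirm the resolved version (assuming Composer is on your PATH; the exact output format varies by Composer version):

```
# show package details; the versions line should list 4.7.3
composer show johnpbloch/wordpress | grep -i versions
```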
Awesome. It seems to work in my tests, too. Thanks for humoring my specificity!
Does not work in my case. The second command returns: `no matches found: project_copy_folders=`
I tried removing “vendor” but then the deploy fails:
```
rmtree failed: [Errno 13] Permission denied: '/srv/www/domain.com/releases/20170201151109/vendor/squizlabs/php_codesniffer/CodeSniffer/Reporting.php'
```
@pixeline Thanks for reporting that. I suspect your shell is trying to use the square brackets `[]` for globbing/pattern matching. I’ve added single quotes around `project_copy_folders=` in the example commands above to hopefully avoid the problem. Could you try this:
```
ansible-playbook deploy.yml -e env=production -e site=example.com -e 'project_copy_folders='
```
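If single quotes still don’t do it in your shell, Ansible also accepts extra vars as JSON, which avoids shell globbing on the value entirely. The JSON form is standard `ansible-playbook -e` syntax, though I haven’t verified it for this exact variable:

```
# pass project_copy_folders as an explicit empty list via JSON extra vars
ansible-playbook deploy.yml -e env=production -e site=example.com -e '{"project_copy_folders": []}'
```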
Just had two sites with this issue. And though I decided to kill one of the projects (I no longer needed it), this worked for the other one just fine. Just some other boxes left now. Glad to know I was not losing it AND that there is a solution!
WTF?! I just wanted to deploy a simple bugfix today and then this…
Thanks guys! You saved my life!
Adding single quotes made it work perfectly, thank you!
Thank you everyone for solving this mystery for me. It was driving me crazy – the fix from @mwdelaney worked for me!
It worked for me locally, but didn’t help for staging and production.
What helped was extremely rough:
- cleaning out the contents of the `releases` folder (this effectively kills the site until you re-deploy, and there will be no ability to roll back to previous releases because you delete them; be aware)
- deploying afresh right after.
It’s the dumbest, riskiest cowboy-style solution out there: it will make the site unavailable for a few minutes if the re-deploy goes well, and for an unpredictable period if it fails. But it worked for me.
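For anyone weighing this up, in commands it amounts to roughly the following (a destructive sketch, assuming the `/srv/www/<site>/releases` layout and `web` user seen earlier in this thread; substitute your own host and site):

```
# DESTRUCTIVE: deletes all previous releases and any ability to roll back
ssh web@example.com "rm -rf /srv/www/example.com/releases/*"

# then deploy afresh from the local trellis dir
ansible-playbook deploy.yml -e env=production -e site=example.com
```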
Just tested this on a multi-site install and confirm it works too.
So, instead of removing releases and making the site unavailable, could it be useful to fix the last release folder manually (clearing the composer cache or reinstalling the wp package) and then retry the deploy?
Definitely, there should be a better way.
ansible "web:&<environment>" -m command -a "composer clear-cache" -u web, it finished successfully, but didn’t help (the same “two packages in the same folder” error), so how “fixing manually composer cache inside last release” can be different?
I can’t test any more because I’m only working with a staging environment and nothing is in production yet, so I fixed it by removing the entire site folder from the server and running the deploy again. But if you are in production and you don’t want to make the site unavailable, I’d try this:
- log into the server
- go to the last release folder
- make a new copy (e.g. `<releasedate>_fix/`) and work on it
- remove the `vendor` folder (I fixed it this way locally) and/or the `wp` folder
- rename the fixed copy back to the original release name
- deploy again

Try this and the site shouldn’t go down, with no risk of leaving it down if the deploy errors out; see the sketch below.
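A sketch of those steps in commands (the release timestamp and paths are borrowed from earlier in the thread, so substitute your own, and double-check each step before running it):

```
# on the server, inside /srv/www/example.com/releases
cp -a 20170201151109 20170201151109_fix    # copy the latest release and fix the copy
rm -rf 20170201151109_fix/vendor           # remove the vendor folder (and/or wp)
mv 20170201151109 20170201151109_broken    # set the original aside
mv 20170201151109_fix 20170201151109       # restore the fixed copy under the original name
# then run the deploy again from the local trellis dir
```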
Staging and production should be functionally identical; if this process works on one, it should work on the other.
If you (or anyone reading this) are having difficulty with the process @fullyint explained above, make sure you describe how your setup might differ from the vanilla Trellis setup outlined in the Trellis docs.
Does anyone know if an existing site will be affected if the server is provisioned prior to applying the fix suggested by @fullyint?