Ansible Provisioning Error on Server

failed: => (item={'site_install': True, …})
stdout: Composer could not find a composer.json file in /srv/www/domain.tld/current
To initialize a project, please create a composer.json file as described in the “Getting Started” section

FATAL: all hosts have already failed -- aborting

I cut out the array of information having to do with the site install.

Ansible does everything else up to the final stage in site.yaml

Edit: Setting run_composer: false in my staging file allows it to continue.
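For anyone following along, that flag lives in the environment's group vars. A rough sketch of what that might look like (the exact file path and variable layout depend on your bedrock-ansible version, so treat the structure and names here as assumptions, not the canonical format):

```yaml
# group_vars/staging (hypothetical layout; match it to your playbook's vars file)
wordpress_sites:
  - site_hosts:
      - staging.domain.tld
    site_install: true
    run_composer: false   # skip the composer install step during provisioning
```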

However, then when you get to TASK: [wordpress-sites | Install WP], WP-CLI errors out with

stderr: Error: This does not seem to be a WordPress install. Pass `--path=path/to/wordpress` or run `wp core download`.

FATAL: all hosts have already failed -- aborting

The playbook assumes that a WP project exists at the path /srv/www/domain.tld/current. That error means a composer.json file wasn't found in that directory, which usually means your shared folder isn't set up properly. WP-CLI failing to recognize it as a valid install confirms the same thing.

If this is for development, make sure your Vagrant synced folders are correct. If you aren’t using Vagrant for this, you’ll need to manually get your project codebase onto the server in that path.
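If you're going the manual route, a rough sketch of getting a Bedrock-style project into place would be something like the following. All of the paths, the repo URL, and the deploy user here are placeholders; your structure may differ:

```shell
# Hypothetical example: put a Bedrock project at the path the playbook expects
ssh deploy@your-vps
sudo mkdir -p /srv/www/domain.tld
sudo chown deploy:www-data /srv/www/domain.tld
git clone https://github.com/your-user/your-bedrock-project.git /srv/www/domain.tld/current
cd /srv/www/domain.tld/current && composer install
```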

This is not for Vagrant (Vagrant works fine) - this is for a VPS (in this case DO).

So to set up the server, I need to run `ansible-playbook -i hosts/staging site.yml`, have it error out, run a `cap staging deploy`, then run `ansible-playbook -i hosts/staging site.yml --limit @/Users/me/site.retry`?

Why does Vagrant automatically install WP, but not a VPS using the same playbook?


The reason I mention this workflow is because it works. But it seems silly to run Ansible just to set up the server and have it attempt to install WordPress (via Composer) when it's just going to error out.


When I cap deploy, the files get pulled from my git repo correctly and the symlink gets updated, but the live site isn't updating.

For example, I just deleted a theme in web/app and pushed to git. Git shows that I deleted it, `cap staging deploy` pulls it, `current` shows the newest revision, and /etc/nginx/sites-enabled/domain.tld points to `current`, but the site is still serving an old revision.

Deleting all the revisions, then doing a fresh deploy results in a white screen. The files are in the revisions folder, there are no visible errors in the logs, and current is symlinking correctly.

I’m lost.

Ideas you’ve probably already considered:

  • White screen - `sudo service php5-fpm restart && sudo service nginx restart`
  • Displaying old revisions - ensure caches are cleared, if any (server, browser).
  • Digital Ocean integration - roots/bedrock-ansible#40, thread, thread, etc.

Have considered, read, and tried all of the above. Permissions are set up correctly.

What really confuses me is that if I run ansible-playbook again, the site starts working again. So it's something with cap deploy, but I don't understand what. Permissions with Ansible are deploy:www-data, and that's the same in Cap.
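One thing worth double-checking on the VPS is what the ownership and permissions actually look like after an Ansible run versus after a cap deploy. A quick way to compare (purely illustrative commands; adjust the paths to your site):

```shell
# Compare ownership/permissions after an Ansible run vs. after a cap deploy
ls -la /srv/www/domain.tld/
ls -la /srv/www/domain.tld/current/
stat -c '%U:%G %a %n' /srv/www/domain.tld/current/web/index.php
```

If the two runs produce different owners or modes on the same files, that difference is likely the culprit.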

I’ve gone back to not using Cap for now. I’m just pushing my changes via git with a webhook - it’s unfortunate though, this seemed like an awesome system. I’ll keep tabs on this forum, and see if anyone else has this same issue.

@brandon someone just posted about this issue too:

Right now it kind of sucks to deploy a server from scratch. Once it's set up it's fine. It's something that needs to be worked on.

Is there a way to take the playbook and modify it so it just installs the server stack (php5, fail2ban, mariadb, etc.) and then deploy via Cap as intended? I really like the idea of my VM and my VPS matching 100%.

I’m going to spend some more time debugging why I have a white screen after a Cap deploy. It has to be in a log somewhere.

This issue has the same symptoms:

And this PR seems to have fixed it for them: - I’m gonna try this now.

Edit: This did solve it for me.

Restarting the VPS results in a white screen again.

We have tags on the roles now so you can manually run the playbook with just the tags you want.
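For example, assuming the roles carry tags with names like common, php, and nginx (the actual tag names are an assumption here; check what your playbook defines), you could provision just the server stack and leave the WordPress/Composer steps for a later run:

```shell
# List the tags the playbook actually defines first
ansible-playbook -i hosts/staging site.yml --list-tags

# Run only the server-stack roles
ansible-playbook -i hosts/staging site.yml --tags "common,php,nginx"

# Or skip the tasks that need the codebase to already exist
ansible-playbook -i hosts/staging site.yml --skip-tags "wordpress"
```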

So to get this right I basically need to run the playbook once, get the error, go to /srv/www/domain.tld/current and clone the regular bedrock repo and then rerun the playbook?

If you’re provisioning a new remote server from scratch, yes. Although you can just comment out some tasks for the initial run.

We’re working on a much better workflow. The start of it is here:

Probably missing something real basic, but during the first run I got the error message of missing composer.json.

So I SSH’ed into the server, /srv was present, but no www folder inside it, so I created those and pulled the composer.json from the bedrock repo.

Trying to rerun with `ansible-playbook -i hosts/production site.yml --limit @/Users/myuser/site.retry` I got:

ERROR: provided hosts list is empty

The only content of site.retry is “default” (does that seem right?)

Anyway, just to get around this, I destroyed the box and ran `vagrant up` again, just to be faced with "Composer could not find a composer.json file in /srv/www/" again…

Really wish there was some “explain it like I’m five” instructions for us slow learners, hehe :slight_smile:

Edit: I seem to have a problem with my synced folder, as this happens for dev too.
Edit 2: And clearing /etc/exports seems to have helped with that.