Roots Discourse

Can't deploy from local vagrant


See “Running Ansible Commands” here:



Thanks, yes, I figured this much out I think.

I can SSH to my server from the command prompt, but I cannot run ./bin/ because it isn’t available on Windows.

If I use vagrant ssh and run the command from there, the script does run, but at the end I am presented with an
SSH Error: data could not be sent to remote host - UNREACHABLE! message.

I’ll try again, but I have no idea where to start at the moment. My best guess is that my local Windows key isn’t being sent by Vagrant? Maybe Vagrant sends its own key files with this method?

I wonder if there is an easy way to deploy directly if I ssh to the remote server?



Not one that’s supported by Trellis.

Have you read the section of that same page about key forwarding?



I have indeed. I am using the Git shell that comes with Git for Windows so ssh-agent should be automatically launched, according to the article that is linked in the docs.

I have tried this with putty too and enabled agent forwarding. Same result.

UPDATE: Still not working and I have tried the following:

If anybody has any ideas they would be much appreciated. For the moment I have to get my Mac developer to deploy the code. :expressionless:



I’m still having problems with this one.

I can actually SSH to the remote server from within the vagrant box, so SSH keys are clearly being sent by vagrant.

I had a theory that the ‘root’ username was not being passed to the server when issuing the deploy command from the vagrant box. Perhaps I need to configure something differently?

If I update my hosts/staging file to the following:



Then I get a little further. A different error is produced though:

TASK [deploy : Install npm dependencies] *******************************************************************************
System info:
Ansible 2.7.5; Linux
Trellis 1.0.0: December 27th, 2018

[Errno 2] No such file or directory
fatal: [root@XXX.XXX.XXX.XXX -> localhost]: FAILED! => {"changed": false, "cmd": "yarn", "rc": 2}
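For what it’s worth, the `-> localhost` in that error shows the task was delegated to the machine running Ansible, so yarn has to be installed there (in the Vagrant box or WSL), not just on the remote server. A quick sanity check, as a sketch:

```shell
# The failing task ran on localhost ("-> localhost" in the error), so the
# machine running Ansible needs yarn on its PATH. Check for it:
command -v yarn || echo "yarn is missing here - install Node.js and yarn"
```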



Just to keep this post up to date… I still can’t deploy from Windows.

Seems there is a new guide for Windows covering how to use Ansible from the Windows Subsystem for Linux:

I have managed to set it up as described and I can now run the Ansible deploy command directly from the WSL prompt without SSH’ing to Vagrant:
./bin/ staging

But now I have new errors:

 [WARNING] Ansible is being run in a world writable directory (/mnt/c/dev/), ignoring it as an ansible.cfg source. For more information see
 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Ensure necessary variables are defined] ***********************************************************************************************************************************************************

TASK [Ensure environment is defined] ********************************************************************************************************************************************************************
skipping: [localhost]
 [WARNING]: Could not match supplied host pattern, ignoring: web

 [WARNING]: Could not match supplied host pattern, ignoring: staging

PLAY [Test Connection] **********************************************************************************************************************************************************************************
skipping: no hosts matched

PLAY [Deploy WP site] ***********************************************************************************************************************************************************************************
skipping: no hosts matched

PLAY RECAP **********************************************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=0


The error is telling you what’s wrong: ansible is being run from a world-writeable directory. This is because WSL’s default strategy for handling the conversion from Windows permissions to Linux permissions is to just make everything writeable by everyone (not really ideal). The link in the error should get you on the path to fixing it: You’ll need to enable ‘metadata’ on the mount where your project is, and then give it some more reasonable permissions.
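For reference, enabling metadata usually means adding an [automount] section to /etc/wsl.conf and restarting WSL. A sketch (the umask/fmask values here are one reasonable choice, not the only one):

```ini
# /etc/wsl.conf - enable Linux permission metadata on /mnt/c and friends
[automount]
options = "metadata,umask=022,fmask=011"
```

After restarting WSL, a chmod on the project directory (e.g. chmod 755 /mnt/c/dev) should actually stick, and Ansible will stop ignoring the ansible.cfg there.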



Cheers for that. It’s a lot to process at the moment but I’ll have another look soon.



Well I figured out the permissions errors with help from @alwaysblank

But I am back to square one with the same error I was getting when running my deploy commands directly from vagrant:

fatal: [XXX.XXX.XXX.XXX]: UNREACHABLE! => {"changed": false, "unreachable": true}

Now I have to figure out how to load my SSH keys into WSL :roll_eyes:



WSL is just Linux, so you’d load them the same way you’d normally load SSH keys in Linux:

ssh-add /path/to/your/key

You may need to adjust the permissions on your keys if they’re in the Windows filesystem (ssh-agent won’t want to load them if the permissions are too…permissive). If your SSH private key isn’t in the correct format (i.e. you created it in Windows w/ puttygen) you’ll need to convert it to OpenSSH (I think) which can be done with puttygen.
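Concretely, the loading step might look like this (a sketch; ~/.ssh/id_rsa is an assumed key path, so substitute your own):

```shell
# Tighten permissions, start an agent if needed, and load the key.
# ~/.ssh/id_rsa is an assumed path; point this at your actual private key.
KEY=~/.ssh/id_rsa
chmod 600 "$KEY"                   # ssh-add refuses keys that others can read
eval "$(ssh-agent -s)" >/dev/null  # start an agent for this shell session
ssh-add "$KEY"
ssh-add -l                         # list loaded keys to confirm it worked
```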



Still no luck…

I’ve added SSH keys to WSL as described by @alwaysblank but I’m receiving the same ‘unreachable’ error.

I might have found a clue though… If I add this line to trellis/group_vars/staging/main:
web_user: root

Then I can deploy! This is weird because my Mac-based associate doesn’t have to do this.

But now I’ve got new errors anyway:

RUNNING ansible-playbook deploy.yml -e " env=staging" FROM WSL

 FAILED! => {"changed": false, "stdout": "Do not run Composer as root/super user! See for details\nLoading composer repositories with package information\nInstalling dependencies from lock file\n

RUNNING ansible-playbook deploy.yml -e " env=staging" FROM SSH VAGRANT

See stdout/stderr for the exact error
Traceback (most recent call last):
  File "<stdin>", line 113, in <module>
  File "<stdin>", line 105, in _ansiballz_main
  File "<stdin>", line 48, in invoke_module
  File "/tmp/ansible_command_payload_BD1IuR/", line 286, in
  File "/tmp/ansible_command_payload_BD1IuR/", line 226, in main
OSError: [Errno 2] No such file or directory:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 2] No such file or directory: '/srv/www/'
fatal: [> localhost]: FAILED! => {"changed": false, "module_stdout": "", "rc": 1}




Take this with a grain of salt because Trellis isn’t really my wheelhouse, but it sounds (?) like the SSH key you’re using locally is associated w/ the root user on your remote server, instead of the web user. Your Mac associate’s key is probably correctly associated w/ web, hence the different behavior. I’d take a look in trellis/group_vars/all/users.yml to see how the section with name: {{ web_user }} is set up w/r/t SSH keys. You should make sure your SSH public key has been provisioned for the correct user. If it hasn’t, you may need to fix it and re-provision.
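For reference, the stock shape of that section in Trellis looks roughly like this (a sketch; the id_rsa.pub path is an assumption, so use whichever public key you actually deploy with):

```yaml
users:
  - name: "{{ web_user }}"
    groups:
      - "{{ web_group }}"
    keys:
      - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"  # assumed key path
  - name: "{{ admin_user }}"
    groups:
      - sudo
    keys:
      - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
```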



That sounds about right. I’ll see if I can get my associate to try some reprovisions.

Thanks again for all of your help.



Just to update…

My colleague added my keys into users.yml and re-provisioned. We had done this before, but the Trellis configs never got committed to git, so they were lost.

Anyway, now we both get the same error message:

Git repo cannot be accessed. Please
verify the repository exists and you have SSH forwarding set up correctly.

My colleague never had this issue when deploying before and it’s only occurring now that we’ve added my github public keys back into users.yml:

  - name: "{{ web_user }}"
    groups:
      - "{{ web_group }}"
    keys:
      - "{{ lookup('file', '~/.ssh/') }}"

Back to the drawing board… We might be one step closer though now that we can see the same error.



Just for the sake of ruling things out: can you confirm the branch listed in wordpress_sites.yml exists on the remote repo? I’ve seen people with this issue before, and that was their problem.



I double-checked and it does indeed exist. Thanks for the thought.



Why use Vagrant? FWIW all of our docs say to use WSL now, and you can just run the deploy from WSL in the Trellis directory instead of trying to do it from Vagrant.



I’ve been alternating between WSL and vagrant while troubleshooting these deploys. I’ve gotten to the stage where they both produce the same error message:

“Git repo cannot be accessed”

I have confirmed the following:

  • SSH access to the server through WSL using ‘ssh root@serverip’
  • SSH access to the server through WSL using ‘ssh web@serverip’
  • SSH access to vagrant through WSL using ‘vagrant ssh’
  • Successfully authenticated with GitHub through WSL using ‘ssh -T
  • Successfully retrieved my git branches through WSL using ‘git ls-remote’

My WSL ~/.ssh/config file looks like so:

Host *servername*
ForwardAgent yes
Hostname *serverip*
User root

All out of ideas.



I’m not sure what to tell you, but this sounds too complicated. I’ve been running deploys in WSL all week, and all I’ve done is this:

  1. When I open up WSL, run ssh-add ~/.ssh/my-key-file (ssh-agent needs to be running already of course).
  2. Make sure my key is on the remote server (I’ve been using Kinsta, so I can do this through their dashboard, but there are lots of ways to get a key on a server).
  3. Configure my Trellis projects by making sure the trellis/group_vars/all/users.yml looks like this:

admin_user: admin
users:
  - name: '{{ web_user }}'
    groups:
      - '{{ web_group }}'
    keys:
      - '{{ lookup(''file'', ''~/.ssh/'') }}'
      - ''
  - name: '{{ admin_user }}'
    groups:
      - sudo
    keys:
      - '{{ lookup(''file'', ''~/.ssh/'') }}'
      - ''
web_user: web
web_group: www-data
web_sudoers:
  - '/usr/sbin/service php7.3-fpm *'

  4. Run my ansible deploy command, i.e. something like this: ansible-playbook deploy.yml -e env=staging -e

No SSH config stuff, no changing to root.

How was your server originally provisioned?



Well, I am another step further! But now I have a new problem. The deploy works all the way past ‘Compiling assets / Copying production assets’ and then:

"Ansible: could not locate file in lookup: ~/.ssh/"

The file definitely exists in both my Windows user folder and in my WSL home directory, /home/username/.ssh/

The servers were originally configured by a Mac user who didn’t have any experience with this kind of thing so it’s probable that something was missed. He has no trouble deploying though, They are Digital Ocean servers by the way.

My other issue was something stupid. I copied and pasted the ssh-add command from a WSL forum and it had a ‘-k’ flag in it.

I was running:
ssh-add -k ~/.ssh/id_rsa

When I should have been running:
ssh-add ~/.ssh/id_rsa


Thanks for all the help!!!