Failure to establish connection when provisioning via ansible-playbook server.yml

I’ve encountered an error wherein I couldn’t establish a connection via ansible-playbook.

  1. When I run ansible-playbook server.yml -e env=staging, it throws an error saying the SSH connection cannot be established, so I checked my users.yml file and saw a problem under the keys section:
  - name: "{{ admin_user }}"
    groups:
      - sudo
    keys:
      - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
      - https://github.com/dummyuser.keys

I realised I have an existing id_rsa.pub key, but I didn't have it authorized on my server; I was using https://github.com/dummyuser.keys instead. So I removed the line - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}", but the problem still persists. The response was:

fatal: [10.10.2.5]: UNREACHABLE! => {“changed”: false, “msg”: “Failed to connect to the host via ssh.”, “unreachable”: true}

Also why does the config point to the public key when we need the private key to login via ssh. I usually do ssh -i ~/.ssh/private_key user@10.10.2.5 whenever I login to the server via ssh.

So I tried another approach.
2. This time I specified the key on the CLI: ansible-playbook server.yml -e env=staging -vvvv --key-file=~/.ssh/dummy_rsa. The result was that I was able to establish a connection:

<10.10.2.5> ESTABLISH SSH CONNECTION FOR USER: dummy_admin

But there was another error: it says a password is required. Here's the full message:

fatal: [10.10.2.5]: FAILED! => {“changed”: false, “failed”: true, “invocation”: {“module_name”: “setup”}, “module_stderr”: “OpenSSH_6.9p1, LibreSSL 2.1.8\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 85702\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 10.10.2.5 closed.\r\n”, “module_stdout”: “sudo: a password is required\r\n”, “msg”: “MODULE FAILURE”, “parsed”: false}

I'm not sure why it is asking for a password; I've already set it in my group_vars/staging/vault.yml. Here's the content of that file:

vault_mysql_root_password: stagingpw
vault_sudoer_passwords:
  dummy_admin: $6$rounds=656000$8DWzDN3KQkM9SjlF$DhxLkYaayplFmtj9q.EqzMDWmvlLNKsLU0GPL9E0P2EvkFQBsbjcMCXgWkug4a5E66PfwL4eZQXzMLkhXcPBk0

So I finally got it working using the command below.
3. ansible-playbook server.yml -e env=staging -vvvv --key-file=~/.ssh/dummy_rsa --ask-become-pass. After asking me for a password, it works and provisions my server without problems.

Can anyone shed light on this? Am I missing something? Let me know if you need more details.


You provide the public key, for instance for the web user, on the initial provision. I suppose you could also add extra keys for someone else or for another computer by adding them to users.yml for the admin user.

However, the general workflow, using DigitalOcean as an example, is that when you spin up a new droplet you can add public keys right there (which allows you to SSH directly into the server without using the provided password).

If you do this, then you don't need to use a password. Of course, if you spin up a server without adding a public key, then yes, you do need to use a password to log in for the first time.

Also, you add public keys because that's all that's needed to validate your private key, which remains on your computer… public keys are just that, public, and can be shared. You should never be sending your private SSH key anywhere.

After typing a bunch, I see that @kalenjohnson already replied. I’ll still share what I typed, just for extra info.

Purpose of public keys listed in users

The ssh-keys docs point out that Trellis …

will create the users defined in group_vars/all/users.yml, assigning their groups and public SSH keys.

This creates the users on the remote server and enables them to connect at a later point by loading their public keys into ~/.ssh/authorized_keys. Once server.yml has run and has set this up, anyone with the appropriate private key can ssh in to the server as the user with the corresponding public key in the remote’s ~/.ssh/authorized_keys file for that user. That’s why the docs say…

List keys for anyone who will need to make an SSH connection as that user.

Given that an authorized_keys file may have multiple keys, your list of keys under users may have multiple keys; you don’t need to limit the list to a single public key. In your case, you could have retained the lookup for ~/.ssh/id_rsa.pub, which would load that key into authorized_keys on the remote, enabling you to use your regular id_rsa private key to connect in the future, if desired.
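If you ever want to confirm which keys actually ended up on the remote, a quick check (using the admin user and IP from this thread as placeholders) would be something like:

ssh dummy_admin@10.10.2.5 'cat ~/.ssh/authorized_keys'

Each line in that file is one public key allowed to connect as that user.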

The clarification that may help is that the users list is not relevant to the initial connection you were attempting. For the initial connection, the docs mention this…

We assume that when you first create your server you’ve already added your SSH key to the root account… server.yml will try to connect to your server as root. If the connection fails, server.yml will try to connect as the admin_user defined in group_vars/all/users.yml (default admin).


Helping Ansible and ssh to find the necessary private key

This means that you are manually specifying the private key with each ssh command, and yes, the corollary of manually specifying the private key with every ansible-playbook command is to add the --private-key= or --key-file= option. However, you could save yourself some hassle by enabling ssh and ansible-playbook commands to automatically find and use your desired private key file. One approach would be to add an entry to your ssh config file, specifying the IdentityFile to be used with Host 10.10.2.5. I'd recommend the alternative of loading ~/.ssh/dummy_rsa into your ssh-agent, which can handle keys for you, trying multiple private keys when attempting a connection.

  • Make sure your ssh-agent is running: ssh-agent bash
  • Add your key: ssh-add ~/.ssh/dummy_rsa
  • If you're on a Mac, add the key to your Keychain: ssh-add -K ~/.ssh/dummy_rsa
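You can confirm the key was added by listing the keys your agent is holding:

ssh-add -l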

Now you should be able to run ssh commands without the -i option, and ansible-playbook commands without the --key-file= option, because your ssh-agent will offer those commands the available private keys to try when making the ssh connections.
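For reference, if you preferred the ssh config approach mentioned above, the entry would look roughly like this (a sketch using the IP and key path from this thread; adjust to your own host and file):

# ~/.ssh/config
Host 10.10.2.5
  IdentityFile ~/.ssh/dummy_rsa

With that in place, both plain ssh and Ansible's underlying ssh connections to that host pick up the key automatically.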


Reasons for the error “sudo: a password is required”

Of the tasks Trellis runs via the server.yml playbook, some require sudo. This is a non-issue when the playbook connects as root, but sometimes the playbook doesn't connect as root: if the initial connection attempt as root fails, it falls back to connecting as the admin_user. When connecting as that user, you must supply its sudo password via the --ask-become-pass option, as you discovered.

Maybe you already know why your connection as root failed, but here are some possibilities (with a quick manual test after the list):

  • Maybe your remote is on AWS, where root is disabled by default, and your admin_user is ubuntu.
  • Maybe you've already successfully run server.yml with sshd_permit_root_login: false in group_vars/all/security.yml, so root is no longer allowed to log in via ssh (good security).
  • Maybe the private key you are trying to use is not loaded on the remote in the root user's authorized_keys.
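One quick way to narrow down which of these applies is to attempt manually the same connection the playbook first tries (using the key and IP from this thread as placeholders):

ssh -i ~/.ssh/dummy_rsa root@10.10.2.5

If that fails, the playbook's attempt to connect as root will fail the same way and it will fall back to admin_user.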

First of all thank you very much for your prompt reply. I have a follow up question.

So Trellis expects the admin_user to exist initially and expects keys already added to its authorized_keys in order for the server.yml playbook to create the users listed in group_vars/all/users.yml? Is that correct?

Trellis expects that you do have a public key on the server prior to running any Trellis commands. You typically do not have to manually create a user.

When you create a VPS with a provider like DigitalOcean or AWS, they typically give you the option to put your public key on the VPS at creation time. By default, DigitalOcean puts this public key in the root user’s authorized_keys and AWS puts the key in the ubuntu user’s authorized_keys. In neither case do you have to do more than that. You don’t have to manually create another user.

If you use DigitalOcean or a VPS provider that allows root, then server.yml should manage to just connect as root (assuming you did associate a public key with the VPS at creation time). If you use AWS or a provider that doesn't allow root, change admin_user to the user name allowed by the provider (e.g., ubuntu for AWS); then server.yml should have no trouble connecting.
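For example, if your provider creates ubuntu, the only change needed in group_vars/all/users.yml would be this one line (sketch):

admin_user: ubuntu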

In short, if you can connect via ssh root@my_server_ip then you should be able to run server.yml as root, no problem.

For example, see the How To Embed your Public Key when Creating your Server section.


Ah, I already know that. I guess my question came out wrong.

My question was supposed to be about what you mentioned prior to that quote. Specifically:

The users listed in group_vars/all/users.yml are created at run time, including the admin_user, so how does the playbook connect to the server without the admin_user having been created initially? I understand that it tries to use root and then admin_user (which in this case isn't available yet). What if my server won't allow root login, even with keys added to its authorized_keys file, for security purposes? I'm assuming I'm left with adding the admin_user manually to the server so the playbook can run the provision; is that correct?

Sorry, I didn't understand at first.

Right, the server.yml playbook won't have created the admin_user before the playbook has run. However, you could set the admin_user variable to the name of whatever user the VPS provider automatically creates. That way, the user exists (created by the provider during server creation) even though server.yml hasn't yet had a chance to try to create it.

To illustrate, consider the case of AWS, which disallows root but automatically creates ubuntu as an alternative. When root fails and the playbook tries admin_user (set to ubuntu), the attempt to connect as ubuntu will succeed because AWS already created the user, even before server.yml had a chance to run.

Does the VPS provider create some user other than root? If so, set the admin_user variable to that user name.

If the provider offers no user that can connect via ssh, I’m surprised, and I guess you have no choice but to create that user manually, as you surmised. After you have created the user manually, set the admin_user variable to the name of your new user (only if the name is something other than root).
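If you do end up creating the user by hand, the steps on the server would look roughly like this (a sketch; the user name is a placeholder, and the key is your own public key):

# run as root on the server
adduser dummy_admin
usermod -aG sudo dummy_admin
mkdir -p /home/dummy_admin/.ssh
# paste your public key into this file
nano /home/dummy_admin/.ssh/authorized_keys
chown -R dummy_admin:dummy_admin /home/dummy_admin/.ssh
chmod 700 /home/dummy_admin/.ssh
chmod 600 /home/dummy_admin/.ssh/authorized_keys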

Although it would be most secure to create a non-root user, as above, you could alternatively edit the server’s /etc/ssh/sshd_config so that PermitRootLogin yes, then probably run service ssh reload (or service ssh restart).
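If you went that route, the change would be something like this on the server (a sketch; check what the existing PermitRootLogin line in your sshd_config looks like before editing):

# as a user with sudo
sudo sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service ssh reload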


Got it! Thank you very much! All clear now! :grin:


You might also want to post your answer on my SO thread and I'll accept it. Otherwise, if you allow it, I can post the answer there and credit you with a link back to this thread. Just let me know. Again, many thanks!

@jzarzoso Thanks for giving me the option. I don’t plan to post over at SO. You’re welcome to link to or excerpt from this discourse thread.


Thanks for the details everyone. I am still having a similar issue when running ansible-playbook server.yml -e env=staging with the following messages:

TASK [setup] *******************************************************************
System info:
Ansible 2.0.2.0; Darwin
Trellis at “Setup permalink structure for multisite installs too”

Failed to connect to the host via ssh.
fatal: [138.68.10.50]: UNREACHABLE! => {“changed”: false, “unreachable”: true}
to retry, use: --limit @server.retry

PLAY RECAP *********************************************************************
138.68.10.11 : ok=3 changed=0 unreachable=1 failed=0
localhost : ok=0 changed=0 unreachable=0 failed=0

I have deleted my SSH keys and made sure that they match within /etc/hosts, staging/vault.yml, and hosts/staging.
I have deleted and recreated my droplets, and made sure that the id_rsa keys were the same across GitHub and my remote server.
I have deleted my Vagrant machine and run vagrant up several times.
I can ssh into my remote server with ssh root@138.68.10.11 (changed IP here for security).

Is there some sort of issue with this new task? Trellis at "Setup permalink structure for multisite installs too"

Any assistance would be much appreciated!

There are many possible causes for UNREACHABLE, but here's one that comes to mind: are you in fact using an Ubuntu 14.04 server?

Could you run your command again and share the full debug info?

ansible-playbook server.yml -e env=staging -vvvv

Very strange, it seems to be working now. Although I had previously deleted my droplet and created a 14.04 version (which didn't work, possibly for the same reason as above), this time there didn't seem to be an issue connecting!

Thanks again for the suggestion as I think it was likely the solution.

I have been stuck on this vagrant / ansible / ssh issue for days now. I get this message when trying to ping the remote staging server:

ansible lc-dev1.co.uk -m ping -u root

lc-dev1.co.uk | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh.”,
“unreachable”: true
}

I can SSH into the server without a password; that suggests to me that the SSH keys are set up correctly, but there's a problem with Ansible / Vagrant.

I have uninstalled and reinstalled different versions of Vagrant and Ansible, but no change.

Any help would be greatly appreciated. Thanks, Simon

@jajouka Could you share the entire Ansible verbose debug info (add -vvvv):

ansible-playbook server.yml -e env=staging -vvvv

Could you let us know…

  • which VPS provider you are using (e.g., Digital Ocean, AWS, etc.)
  • what your admin_user name is (should be admin if DO or ubuntu if AWS)
  • what user (name) is making the successful manual ssh connection

If you dig in and find that the issue seems different from the rest of the thread above, go ahead and start a new thread, so that this one stays focused.

sbeasley➜Sites/lc-blogs-trellis/trellis(master✗)» ansible-playbook server.yml -e env=staging -vvvv [10:40:01]
Using /home/sbeasley/Sites/lc-blogs-trellis/trellis/ansible.cfg as config file
Loaded callback output of type stdout, v2.0

PLAYBOOK: server.yml ***********************************************************
3 plays in server.yml

PLAY [Ensure necessary variables are defined] **********************************

TASK [Ensure environment is defined] *******************************************
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/variable-check.yml:8
skipping: [localhost] => {“changed”: false, “skip_reason”: “Conditional check failed”, “skipped”: true}

PLAY [Determine Remote User] ***************************************************

TASK [remote-user : Determine whether to connect as root or admin_user] ********
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/roles/remote-user/tasks/main.yml:2
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
ESTABLISH LOCAL CONNECTION FOR USER: sbeasley
EXEC /bin/sh -c ‘( umask 77 && mkdir -p “echo $HOME/.ansible/tmp/ansible-tmp-1472035206.49-189187861603852” && echo ansible-tmp-1472035206.49-189187861603852=“echo $HOME/.ansible/tmp/ansible-tmp-1472035206.49-189187861603852” ) && sleep 0’
PUT /tmp/tmpLRyRgT TO /home/sbeasley/.ansible/tmp/ansible-tmp-1472035206.49-189187861603852/command
EXEC /bin/sh -c ‘LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/bin/python /home/sbeasley/.ansible/tmp/ansible-tmp-1472035206.49-189187861603852/command; rm -rf “/home/sbeasley/.ansible/tmp/ansible-tmp-1472035206.49-189187861603852/” > /dev/null 2>&1 && sleep 0’
ok: [lc-dev1.co.uk → localhost] => {“changed”: false, “cmd”: [“ansible”, “lc-dev1.co.uk”, “-m”, “ping”, “-u”, “root”], “delta”: “0:00:00.634070”, “end”: “2016-08-24 10:40:07.186104”, “failed”: false, “failed_when_result”: false, “invocation”: {“module_args”: {“_raw_params”: “ansible lc-dev1.co.uk -m ping -u root”, “_uses_shell”: false, “chdir”: null, “creates”: null, “executable”: null, “removes”: null, “warn”: true}, “module_name”: “command”}, “rc”: 3, “start”: “2016-08-24 10:40:06.552034”, “stderr”: “”, “stdout”: “lc-dev1.co.uk | UNREACHABLE! => {\n "changed": false, \n "msg": "Failed to connect to the host via ssh.", \n "unreachable": true\n}”, “stdout_lines”: [“lc-dev1.co.uk | UNREACHABLE! => {”, " "changed": false, ", " "msg": "Failed to connect to the host via ssh.", “, " "unreachable": true”, “}”], “warnings”: }

TASK [remote-user : Set remote user for each host] *****************************
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/roles/remote-user/tasks/main.yml:8
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
ok: [lc-dev1.co.uk] => {“ansible_facts”: {“ansible_ssh_user”: “root”}, “changed”: false, “invocation”: {“module_args”: {“ansible_ssh_user”: “root”}, “module_name”: “set_fact”}}

TASK [remote-user : Announce which user was selected] **************************
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/roles/remote-user/tasks/main.yml:12
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
Note: Ansible will attempt connections as user = root
ok: [lc-dev1.co.uk] => {}

PLAY [WordPress Server - Install LEMP Stack with PHP 7.0 and MariaDB MySQL] ****

TASK [setup] *******************************************************************
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
<lc-dev1.co.uk> ESTABLISH SSH CONNECTION FOR USER: root
<lc-dev1.co.uk> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/sbeasley/.ansible/cp/ansible-ssh-%h-%p-%r lc-dev1.co.uk ‘/bin/sh -c ‘"’"’( umask 77 && mkdir -p “echo $HOME/.ansible/tmp/ansible-tmp-1472035209.83-154266183055644” && echo ansible-tmp-1472035209.83-154266183055644=“echo $HOME/.ansible/tmp/ansible-tmp-1472035209.83-154266183055644” ) && sleep 0’“'”‘’
System info:
Ansible 2.1.1.0; Linux
Trellis 0.9.7: April 10th, 2016

Failed to connect to the host via ssh.
fatal: [lc-dev1.co.uk]: UNREACHABLE! => {“changed”: false, “unreachable”: true}
[WARNING]: Could not create retry file ‘server.retry’. [Errno 2] No such file or directory: ‘’

PLAY RECAP *********************************************************************
lc-dev1.co.uk : ok=3 changed=0 unreachable=1 failed=0
localhost : ok=0 changed=0 unreachable=0 failed=0

sbeasley➜Sites/lc-blogs-trellis/trellis(master✗)»

Thanks for your reply. I am using DigitalOcean, and I am using root as the admin user.

Here's my users.yml file:

admin_user: root

users:
  - name: "{{ web_user }}"
    groups:
      - "{{ web_group }}"
    keys:
      - "{{ lookup('file', '~/.ssh/digital_ocean.pub') }}"
      - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
      - https://github.com/sb-lc.keys
  - name: "{{ admin_user }}"
    groups:
      - sudo
    keys:
      - "{{ lookup('file', '~/.ssh/digital_ocean.pub') }}"
      - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
      - https://github.com/sb-lc.keys

web_user: web
web_group: www-data
web_sudoers:
  - "/usr/sbin/service php7.0-fpm *"

I am using the user 'web', as is the default. I can successfully connect via ssh as root. I don't know what the password is for 'web', so I haven't managed to ssh in as this deploy user.

I have been using this command to test connections:

ansible -m ping -u vagrant staging

lc-dev1.co.uk | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh.”,
“unreachable”: true
}

If I change users.yml to admin_user: admin, I still get the same result from this command. I have reloaded Vagrant and still no change.

I am getting the same output when attempting to provision the server after changing the user to admin:

ansible-playbook server.yml -e env=staging -vvvv [11:02:54]
Using /home/sbeasley/Sites/lc-blogs-trellis/trellis/ansible.cfg as config file
Loaded callback output of type stdout, v2.0

PLAYBOOK: server.yml ***********************************************************
3 plays in server.yml

PLAY [Ensure necessary variables are defined] **********************************

TASK [Ensure environment is defined] *******************************************
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/variable-check.yml:8
skipping: [localhost] => {“changed”: false, “skip_reason”: “Conditional check failed”, “skipped”: true}

PLAY [Determine Remote User] ***************************************************

TASK [remote-user : Determine whether to connect as root or admin_user] ********
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/roles/remote-user/tasks/main.yml:2
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
ESTABLISH LOCAL CONNECTION FOR USER: sbeasley
EXEC /bin/sh -c ‘( umask 77 && mkdir -p “echo $HOME/.ansible/tmp/ansible-tmp-1472036609.09-46831348447686” && echo ansible-tmp-1472036609.09-46831348447686=“echo $HOME/.ansible/tmp/ansible-tmp-1472036609.09-46831348447686” ) && sleep 0’
PUT /tmp/tmpFWvNYd TO /home/sbeasley/.ansible/tmp/ansible-tmp-1472036609.09-46831348447686/command
EXEC /bin/sh -c ‘LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/bin/python /home/sbeasley/.ansible/tmp/ansible-tmp-1472036609.09-46831348447686/command; rm -rf “/home/sbeasley/.ansible/tmp/ansible-tmp-1472036609.09-46831348447686/” > /dev/null 2>&1 && sleep 0’
ok: [lc-dev1.co.uk → localhost] => {“changed”: false, “cmd”: [“ansible”, “lc-dev1.co.uk”, “-m”, “ping”, “-u”, “root”], “delta”: “0:00:00.312282”, “end”: “2016-08-24 11:03:29.458907”, “failed”: false, “failed_when_result”: false, “invocation”: {“module_args”: {“_raw_params”: “ansible lc-dev1.co.uk -m ping -u root”, “_uses_shell”: false, “chdir”: null, “creates”: null, “executable”: null, “removes”: null, “warn”: true}, “module_name”: “command”}, “rc”: 3, “start”: “2016-08-24 11:03:29.146625”, “stderr”: “”, “stdout”: “lc-dev1.co.uk | UNREACHABLE! => {\n "changed": false, \n "msg": "Failed to connect to the host via ssh.", \n "unreachable": true\n}”, “stdout_lines”: [“lc-dev1.co.uk | UNREACHABLE! => {”, " "changed": false, ", " "msg": "Failed to connect to the host via ssh.", “, " "unreachable": true”, “}”], “warnings”: }

TASK [remote-user : Set remote user for each host] *****************************
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/roles/remote-user/tasks/main.yml:8
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
ok: [lc-dev1.co.uk] => {“ansible_facts”: {“ansible_ssh_user”: “admin”}, “changed”: false, “invocation”: {“module_args”: {“ansible_ssh_user”: “admin”}, “module_name”: “set_fact”}}

TASK [remote-user : Announce which user was selected] **************************
task path: /home/sbeasley/Sites/lc-blogs-trellis/trellis/roles/remote-user/tasks/main.yml:12
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
Note: Ansible will attempt connections as user = admin
ok: [lc-dev1.co.uk] => {}

PLAY [WordPress Server - Install LEMP Stack with PHP 7.0 and MariaDB MySQL] ****

TASK [setup] *******************************************************************
File lookup using /home/sbeasley/.ssh/digital_ocean.pub as file
File lookup using /home/sbeasley/.ssh/id_rsa.pub as file
<lc-dev1.co.uk> ESTABLISH SSH CONNECTION FOR USER: admin
<lc-dev1.co.uk> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/home/sbeasley/.ansible/cp/ansible-ssh-%h-%p-%r lc-dev1.co.uk ‘/bin/sh -c ‘"’"’( umask 77 && mkdir -p “echo $HOME/.ansible/tmp/ansible-tmp-1472036612.09-237376651746114” && echo ansible-tmp-1472036612.09-237376651746114=“echo $HOME/.ansible/tmp/ansible-tmp-1472036612.09-237376651746114” ) && sleep 0’“'”‘’
System info:
Ansible 2.1.1.0; Linux
Trellis 0.9.7: April 10th, 2016

Failed to connect to the host via ssh.
fatal: [lc-dev1.co.uk]: UNREACHABLE! => {“changed”: false, “unreachable”: true}
[WARNING]: Could not create retry file ‘server.retry’. [Errno 2] No such file or directory: ‘’

PLAY RECAP *********************************************************************
lc-dev1.co.uk : ok=3 changed=0 unreachable=1 failed=0
localhost : ok=0 changed=0 unreachable=0 failed=0

sbeasley➜Sites/lc-blogs-trellis/trellis(master✗)»

Do I need to do something else to change the user to admin? I have just changed the users.yml file, but maybe there's more SSH config I should be doing.