SSH Error: Permission denied (publickey,password) - DigitalOcean

Hi there. The team has been trying to wrap our heads around this new deployment workflow using bedrock-ansible, and we've hit a few snags, though we've been able to resolve most of the issues up until now.

We have Bedrock completely set up locally and we are now ready to deploy to our DigitalOcean droplet. But when we run ./deploy.sh staging example.com we receive the following error:

PLAY [Deploy WP site] *********************************************************

GATHERING FACTS ***************************************************************
fatal: [example.example.co] => SSH Error: Permission denied (publickey,password).
while connecting to 45.55.***.***:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

TASK: [deploy | Initialize] ***************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting


PLAY RECAP ********************************************************************
       to retry, use: --limit @/Users/myname/deploy.retry

example.example.co         : ok=0    changed=0    unreachable=1    failed=0

I get the same error when trying the manual ansible-playbook command.

Here’s the group_vars/staging file:

mysql_root_password: stagingpw

wordpress_sites:
  example.com:
    site_hosts:
      - example.example.co
    local_path: '../example.com' # path targeting local Bedrock project directory (relative to Ansible root)
    repo: git@bitbucket.org:myuser/bedrock.git
    multisite:
      enabled: false
      subdomains: false
    ssl:
      enabled: false
    system_cron: true
    env:
      wp_home: http://example.example.co
      wp_siteurl: http://example.example.co/wp
      wp_env: staging
      db_name: example_staging
      db_user: example_dbuser
      db_password: example_dbpassword
      auth_key: "generateme"
      auth_salt: "generateme"
      logged_in_key: "generateme"
      logged_in_salt: "generateme"
      nonce_key: "generateme"
      nonce_salt: "generateme"
      secure_auth_key: "generateme"
      secure_auth_salt: "generateme"

And my hosts/staging file:

[web]
example.example.co

[staging:children]
web

Currently the DigitalOcean droplet is set up as Ubuntu LEMP on 14.04. I can ssh into root@example.co with no problem.

The local dev environment seems to be loading the Sage theme just fine; I just have no idea how this whole deployment process is supposed to go.

Any help would be greatly appreciated.

The deploy.yml playbook will try to ssh connect as whatever user you’ve set for web_user in group_vars/all.

And, that user will need to be listed as an authorized user in a github_ssh_keys list that you could add to one of your group_vars files (here’s an example).

My first guess is that you haven’t yet defined github_ssh_keys. If that’s it, after you define them, you’ll need to re-run server.yml before trying to deploy again. You can make server.yml run only the github_ssh_keys role by specifying --tags:

ansible-playbook -i hosts/staging server.yml --tags "github-ssh-keys"
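For context, web_user typically lives in group_vars/all. A minimal sketch of that entry (web is bedrock-ansible's default value, and the one this thread's output later confirms):

```yaml
# group_vars/all (excerpt) -- the user that deploy.yml connects as.
# "web" is the default; adjust if you've overridden it.
web_user: web
```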


Thanks very much for the quick response!

I’ve added the following to my group_vars/staging file:

github_ssh_keys:
- username: firstlastname

authorized:
- "{{ web_user }}"

But I am still getting that same message. I'm using Bitbucket; does that matter?

That’s not quite formed in a way that will work. Try this:

github_ssh_keys:
  - username: firstlastname
    authorized:
      - "{{ web_user }}"

and firstlastname will need to be your actual GitHub username so that server.yml can fetch your public key from https://github.com/firstlastname.keys

You’ll also need to re-run the server.yml playbook, as I noted (in an edit) in my comment above.

Bitbucket can work; we’ll see whether you’re set up correctly for it once the deploy.yml playbook manages to connect to your staging server.

After updating the hosts/staging file and then running ansible-playbook -i hosts/staging server.yml --tags "github-ssh-keys", I got this:

PLAY [WordPress Server: Install LEMP Stack with PHP 5.6 and MariaDB MySQL] ****

GATHERING FACTS ***************************************************************
ok: [example.example.co]

TASK: [github-ssh-keys | Get GitHub SSH keys] *********************************
ok: [example.example.co -> 127.0.0.1] => (item={'username': 'example', 'authorized': [u'web']})

TASK: [github-ssh-keys | Add SSH keys] ****************************************
ok: [example.example.co] => (item=({'username': 'example'}, u'web'))

PLAY RECAP ********************************************************************
example.example.co         : ok=3    changed=0    unreachable=0    failed=0

When trying to deploy again (./deploy.sh staging example.com) I still get the same error.

(item={'username': 'example', 'authorized': [u'web']}).

You don’t have username: example in your github_ssh_keys do you? Maybe you just edited the actual username for privacy.

The username must be your actual GitHub username so that server.yml can fetch your public SSH key from GitHub (https://github.com/myusername.keys) and load it onto the server for the web_user (e.g., web). Then, when Ansible on your local machine tries to connect as web_user, your personal local private key will grant you access as web_user.

To ramble more: this setup assumes that the public key available at your username on GitHub corresponds to the private key on your machine. If not, you’ll need to manually add a public key for web_user (on the remote server). I suppose you could change web_user to whatever user you connected as for server.yml, but I recommend trying to get the other approach working instead.
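To check that assumption concretely: the fingerprint of the public key GitHub serves for you must match the fingerprint of your local private key. A minimal sketch with ssh-keygen, using a throwaway key pair in /tmp so it's self-contained (in practice you'd compare ~/.ssh/id_rsa against the key from https://github.com/yourusername.keys):

```shell
# Generate a throwaway key pair standing in for ~/.ssh/id_rsa (hypothetical
# paths; in practice one side is your local key, the other the key GitHub
# serves at https://github.com/<username>.keys).
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/local_key -q

# Fingerprint of the local private key and of the public half.
local_fp=$(ssh-keygen -lf /tmp/local_key | awk '{print $2}')
github_fp=$(ssh-keygen -lf /tmp/local_key.pub | awk '{print $2}')

# If these differ, the key installed on the server will never grant access.
[ "$local_fp" = "$github_fp" ] && echo "keys match"
```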

If you’ve done all the stuff above, a helpful diagnostic would be to know whether or not manually ssh-ing into the server as web_user allows you to connect, e.g.,
ssh web@example.example.co


Sorry if that was confusing but here’s the actual info:

github_ssh_keys:
 - username: newbird
authorized:
  - "{{ web_user }}"

I just tried to manually ssh into DO using ssh web@example.example.co and it’s asking me for a password. I don’t have a root password set up on this server, just an SSH key. Should I add a root password?

You’ll need to be using passwordless SSH keys so that your web_user can connect non-interactively (Ansible doesn’t accommodate a password prompt). But maybe you do have passwordless SSH set up, and the remote is asking for a password as a fallback because no key is found (I’m no SSH expert).
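To make "passwordless" concrete: the private key must have an empty passphrase, so nothing reading it (like Ansible's SSH connection) ever blocks on a prompt. A hedged sketch using a throwaway key in /tmp rather than your real ~/.ssh/id_rsa:

```shell
# Create a key with an empty passphrase (-N ""): this is what lets
# Ansible connect non-interactively as web_user.
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/deploy_key -q

# ssh-keygen -y re-derives the public key from the private key; with a
# passphrase-protected key this would stop and prompt, with a
# passwordless key it succeeds immediately.
ssh-keygen -y -f /tmp/deploy_key | awk '{print $1, $2}' > /tmp/derived.pub
awk '{print $1, $2}' /tmp/deploy_key.pub > /tmp/original.pub
cmp -s /tmp/derived.pub /tmp/original.pub && echo "passwordless key OK"
```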

I still suspect your github_ssh_keys formatting could be the problem, failing to load your public key for web_user. Notice the indentation: each line is indented two spaces more than the line above. I haven’t tested whether your format fails, but the exact format may be required for the YAML parsing to work.

github_ssh_keys:
  - username: newbird
    authorized:
      - "{{ web_user }}"

If that formatting doesn’t resolve it, another helpful diagnostic is to ssh into the server and check whether your public key appears for web_user, e.g., in the file at /home/web/.ssh/authorized_keys
If the key is not there, the problem is still in the github_ssh_keys role of server.yml.
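A way to script that check (the /tmp paths below simulate the server side so the sketch is self-contained; on the real droplet the file is /home/web/.ssh/authorized_keys and the key to look for is your local public key):

```shell
# Simulate the server-side authorized_keys file with a throwaway key.
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/sim_key -q
mkdir -p /tmp/sim_web/.ssh && chmod 700 /tmp/sim_web/.ssh
cat /tmp/sim_key.pub >> /tmp/sim_web/.ssh/authorized_keys
chmod 600 /tmp/sim_web/.ssh/authorized_keys

# Look for the key material (second field of the .pub line) in the file;
# note that sshd also refuses keys when these permissions are too open.
key_data=$(awk '{print $2}' /tmp/sim_key.pub)
grep -q "$key_data" /tmp/sim_web/.ssh/authorized_keys && echo "key present"
```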


All of the keys seem to be showing in that file. I even deleted everything in the file and re-ran ansible-playbook -i hosts/staging server.yml --tags "github-ssh-keys", which re-populated it. Still no luck, so it looks like the formatting wasn’t affecting it.

Does this “web_user” user need to be added to DigitalOcean?

Strange.
You shouldn’t need to add web_user to DigitalOcean. The users role will set up the web_user.

How are you ssh-ing currently (when ssh works)? Like this?
ssh root@example.example.co

Do you have to type a password when you ssh in? I’m guessing not, but if you do, then you still need to set up passwordless SSH keys.

You might try wiping and rebuilding the droplet, just to be certain everything is fresh, and to give web_user another shot at successful setup.

When I log in with root@example.example.co it lets me right in without asking for a password. I will try to spin up a new droplet and start from scratch. Thanks so much for all of your help!

I started over completely and ended up with the same error.
Here’s what I did:

Local Machine

mkdir newdir
cd newdir
git clone https://github.com/roots/bedrock-ansible.git
ansible-galaxy install -r requirements.yml -p vendor/roles
cd ../
mkdir bedrock
cd bedrock
git clone https://github.com/roots/bedrock.git .
cd ../bedrock-ansible
vi group_vars/development

mysql_root_password: devpw

web_user: vagrant

wordpress_sites:
  bedrock.com:
    site_hosts:
      - bedrock.dev
    local_path: '../bedrock' # path targeting local Bedrock project directory (relative to Ansible root)
    repo: git@bitbucket.org:newbird/bedrock.git
    site_install: true
    site_title: Example Site
    admin_user: admin
    admin_password: admin
    admin_email: admin@example.dev
    multisite:
      enabled: false
      subdomains: false
    ssl:
      enabled: false
    system_cron: true
    env:
      wp_home: http://bedrock.dev
      wp_siteurl: http://bedrock.dev/wp
      wp_env: development
      db_name: example_dev2
      db_user: example_dbuser2
      db_password: example_dbpassword2

php_error_reporting: 'E_ALL'
php_display_errors: 'On'
php_display_startup_errors: 'On'
php_track_errors: 'On'
php_mysqlnd_collect_memory_statistics: 'On'
php_opcache_enable: 0

xdebug_install: false

vagrant up

Everything seemed to run fine up until the last task, which threw this error:

TASK: [php | Start php5-fpm service] ******************************************
failed: [default] => {"failed": true}

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************

vi group_vars/all

hhvm: true

cd ../
vi Vagrantfile

config.vm.synced_folder local_site_path(site), nfs_path(name), type: 'nfs'
config.bindfs.bind_folder nfs_path(name), remote_site_path(name), u: 'vagrant', g: 'www-data',     :'create-as-user' => true, :perms => "u=rwx:g=rwx:o=rx", :'create-with-perms' => "u=rwx:g=rwx:o=rx", :'chown-ignore' => true, :'chgrp-ignore' => true, :'chmod-ignore' => true

vagrant reload


DigitalOcean

Created a Droplet with Ubuntu 14.04 x64
Selected my SSH Key
Create Droplet


Local Machine

vi group_vars/staging

mysql_root_password: stagingpw

github_ssh_keys:
  - username: newbird
    authorized:
      - "{{ web_user }}"


wordpress_sites:
  bedrock.com:
    site_hosts:
      - 45.55.241.241
    local_path: '../bedrock' # path targeting local Bedrock project directory (relative to Ansible root)
    repo: git@bitbucket.org:newbird/bedrock.git
    multisite:
      enabled: false
      subdomains: false
    ssl:
      enabled: false
    system_cron: true
    env:
      wp_home: http://45.55.241.241
      wp_siteurl: http://45.55.241.241/wp
      wp_env: staging
      db_name: example_staging
      db_user: example_dbuser
      db_password: example_dbpassword
      auth_key: "generateme"
      auth_salt: "generateme"
      logged_in_key: "generateme"
      logged_in_salt: "generateme"
      nonce_key: "generateme"
      nonce_salt: "generateme"
      secure_auth_key: "generateme"
      secure_auth_salt: "generateme"

vi hosts/staging

[web]
45.55.241.241

[staging:children]
web

ansible-playbook -i hosts/staging server.yml

./deploy.sh staging bedrock.com

and I still get this:

PLAY [Deploy WP site] *********************************************************

GATHERING FACTS ***************************************************************
fatal: [45.55.241.241] => SSH Error: Permission denied (publickey,password).
while connecting to 45.55.241.241:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

TASK: [deploy | Initialize] ***************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting


PLAY RECAP ********************************************************************
       to retry, use: --limit @/Users/daniel/deploy.retry

45.55.241.241              : ok=0    changed=0    unreachable=1    failed=0

Phew. So I’m kind of at a loss right now. Do you see anything out of the ordinary?

At first glance I don’t see what’s causing the problem.

  • Does manual ssh as web@45.55.241.241 still fail? If so, could /var/log/auth.log (on the remote) shed light? Afraid I’m taking stabs in the dark.
  • Are you on Windows?

I haven’t tracked the hhvm stuff, so I don’t know why the “Start php5-fpm service” task would fail for you unless hhvm: true. I haven’t needed that on my setup (OS X). Sad to say, I’m running out of ideas; might have to call in the smart guys.

web@45.55.241.241 asks me for a password, even though the server is set to use SSH keys. So, I’m not quite sure what the password might be.

Maybe you’re right; maybe it’s time to give Roots a call. Either way, thanks so much for your diligent attempts to help me today. I hope the rest of your week goes well!

Do you have the correct SSH key set up on Github?

Hello Kalen!

I deleted all existing keys on Github and added a new one that was copied using:
pbcopy < ~/.ssh/id_rsa.pub

Ok good. That’s the key that will be placed on the provisioned server, which allows the web user to SSH in and do the work.

Does ssh -T git@github.com work without errors?

If your SSH key works you should see:

Hi username! You’ve successfully authenticated, but GitHub does not provide shell access.

If it doesn’t work to connect to GitHub, it won’t work on your DO server either.

Also make sure ssh-agent is running: eval "$(ssh-agent -s)"
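A self-contained version of that agent check (a fresh agent and a throwaway passphrase-less key, so it doesn't touch your real setup; in practice you'd ssh-add your actual ~/.ssh/id_rsa):

```shell
# Start a fresh agent and capture its environment variables.
eval "$(ssh-agent -s)" > /dev/null

# Load a throwaway passphrase-less key (stand-in for ~/.ssh/id_rsa);
# ssh-add prints "Identity added" to stderr, which we suppress.
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/agent_key -q
ssh-add /tmp/agent_key 2> /dev/null

# ssh-add -l lists loaded keys; at least one line means a key is
# available for ssh (and therefore for git/Ansible connections).
loaded=$(ssh-add -l | wc -l)
[ "$loaded" -ge 1 ] && echo "agent has key"

# Clean up the agent we started.
ssh-agent -k > /dev/null
```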

The GitHub docs article “Connecting to GitHub with SSH” has more info.

I should note that Ansible/deploys aren’t doing anything special in this regard. There may be a problem with your keys on GitHub, so you can always try manually copying your local public key to the server for the web user and see if that works. Adding the key is only a one-time process.

DigitalOcean has some good articles on it.

Hey Scott,

Thanks for the input. I ran: ssh -T git@github.com and received:

Hi Newbird! You’ve successfully authenticated, but GitHub does not provide shell access.

then eval "$(ssh-agent -s)" and received:

Agent pid 62256

So I think I’m good on that front.