Deployment: Bitbucket SSH forwarding problem

Hi, I have a problem deploying my project because the server can't clone my repository on Bitbucket. When I try

vagrant@*****:/vagrant$ ssh web@52.31.. 'ssh -T git@bitbucket.org'
logged in as payter.

You can use git or hg to connect to Bitbucket. Shell access is disabled.

it's all fine; SSH forwarding seems to work. But when I run, for example,

vagrant@:/vagrant$ ./ staging

I get this error:

TASK [deploy : Failed connection to remote repo] *******************************
System info:
Ansible; Linux
Trellis at "Wrap my.cnf password in quotes"

Git repo cannot be accessed. Please verify
the repository exists and you have SSH forwarding set up correctly.
More info:

fatal: [52.31..]: FAILED! => {"changed": false, "failed": true}
to retry, use: --limit @deploy.retry

I don't understand why it can't clone the repo when SSH forwarding works well. Thanks for any suggestions.

I’d search for other threads about it. It’s usually an issue with keys or your local forwarding setup.

There’s also more information at:

Specifically to do with making sure your SSH agent is working and the key is added to it.
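To make that advice concrete, here is a generic (not Trellis-specific) sketch of checking the agent: `ssh-add -l` distinguishes the two usual failure modes behind broken forwarding, "no agent reachable" (exit 2) versus "agent running but no keys loaded" (exit 1). The demo starts its own throwaway agent so it is safe to run anywhere:

```shell
# Sanity-check that an SSH agent is reachable and has a key loaded.
# "ssh-add -l" exits 0 when keys are loaded, 1 when the agent is empty,
# and 2 when no agent can be reached (e.g. forwarding is not working).
eval "$(ssh-agent -s)" > /dev/null    # demo: start a fresh (empty) agent
ssh-add -l > /dev/null 2>&1
agent_status=$?
echo "ssh-add -l exit code: $agent_status"   # fresh empty agent: prints 1
ssh-agent -k > /dev/null 2>&1         # clean up the demo agent
```

On your real machine you would skip starting a new agent and just run `ssh-add -l`; if it reports no identities, load your key with `ssh-add ~/.ssh/id_rsa` (path as appropriate) before deploying.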

Hi, thanks, but I already read that article and tried it. If I had it set up wrong, it wouldn't be possible to run this command, right?

ssh web@52.31.. 'ssh -T git@bitbucket.org'

OK, I got it. For some reason, when this file is committed in Git it causes the problem: it's automatically modified on the server, so I can't pull from Bitbucket because there is a diff. That error message is misleading. When the file is deleted, there is no problem. My question is: is there an option to force-pull from Git on the server, no matter whether any files on the server have changed? Thanks.
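For reference, a hard reset plus clean is the usual way to make a working copy match its branch tip regardless of local edits (Ansible's git module exposes the same behaviour via its `force` option). A self-contained sketch using a throwaway repo, since on a real server these commands permanently discard server-side changes:

```shell
# Demonstrates discarding local modifications so a pull can't conflict.
# Uses a throwaway repo; on a real server you'd run only the last three
# git commands, inside the deployed checkout.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "committed content" > tracked.txt
git add tracked.txt
git commit -qm "initial commit"

echo "server-side drift" >> tracked.txt   # simulate the auto-modified file
touch untracked.log                       # and some untracked clutter

git reset -q --hard HEAD   # discard modifications to tracked files
git clean -fdq             # remove untracked files and directories

git status --porcelain     # prints nothing: the working copy is clean again
```

After this, a `git pull` (or the next deploy's clone/checkout) proceeds without a conflict, because there is no longer any local diff.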

Not built in. You'd have to modify this:


Hey @payter86, I'm facing a similar issue, and your link is not working. Can you please tell me which file it is?

Uff, this is an old thread and I'm not really sure, but I think I copied my .ssh folder into my project/trellis/.ssh, logged into Vagrant, and then ran

cat /vagrant/.ssh/ >> ~/.ssh/authorized_keys

I think I also needed to copy the other keys:

cat /vagrant/.ssh/ >> ~/.ssh/authorized_keys 
cp /vagrant/.ssh/id_rsa ~/.ssh/id_rsa && chmod 0600 ~/.ssh/id_rsa
cp /vagrant/.ssh/id_rsa.ppk ~/.ssh/id_rsa.ppk && chmod 0600 ~/.ssh/id_rsa.ppk 
cp /vagrant/.ssh/ ~/.ssh/ && chmod 0600 ~/.ssh/ 
cp /vagrant/.ssh/config ~/.ssh/config && chmod 0600 ~/.ssh/config 

So basically I needed to put my local keys into Vagrant so they'd be the same. I hope that's the correct answer.

Thanks @payter86. I actually resolved the issue by reading the comment from @swalkinshaw in another post.

Basically, my local system's SSH key was being forwarded via SSH ForwardAgent; the issue was that the web user wasn't able to connect to my GitLab server because of an old known_hosts entry.

I cleared the entry, checked the connection for the web user via "ssh -T", and it worked.
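For anyone else hitting this, the clearing step can be scripted with `ssh-keygen -R`, which removes every known_hosts line for a given host. A sketch against a throwaway file (the host name and key below are made up); on the server you'd drop the `-f` flag so it edits the web user's `~/.ssh/known_hosts`:

```shell
# Simulate clearing a stale known_hosts entry using a throwaway file.
kh=$(mktemp)
echo "gitlab.example.com ssh-ed25519 AAAAC3NzaFAKEOLDHOSTKEY" > "$kh"
ssh-keygen -R gitlab.example.com -f "$kh" > /dev/null 2>&1
if grep -q "gitlab.example.com" "$kh"; then
  echo "entry still present"
else
  echo "stale entry removed"   # ssh-keygen stripped the matching line
fi
```

After clearing the real entry, reconnect once (e.g. the `ssh -T` test) so the host's current key gets accepted and recorded again.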

I learned that all three of the following have to be able to connect to the GitLab/GitHub repo:

  1. Local Machine
  2. Vagrant Box
  3. Production server (DigitalOcean droplet) - specifically the web user.

Thanks for your reply.

This is really confounding.

roles/deploy/tasks/update.yml -> the "Clone project files" task thinks it's failing on the first run (and subsequent runs).


/srv/www/ now contains the content of the repository.

So why does Ansible think that the task is failing?

The task and fail response match current Trellis master:

- name: Clone project files
  git:
    repo: "{{ project_git_repo }}"
    dest: "{{ project_source_path }}"
    version: "{{ project_version }}"
    accept_hostkey: "{{ project.repo_accept_hostkey | default(repo_accept_hostkey | default(true)) }}"
  ignore_errors: true
  no_log: true
  register: git_clone

- name: Failed connection to remote repo
  fail:
    msg: |
      Git repo {{ project.repo }} cannot be accessed. Please verify the repository exists and you have SSH forwarding set up correctly.
      More info:
  when: git_clone | failed

Previous troubleshooting:

I can successfully run ssh -T git@bitbucket.org as the vagrant user (from the guest machine), as myself from my local machine, and as web from the remote server, as long as I have connected with ForwardAgent yes:

# ~/.ssh/config
Host webdems
    User web
    IdentityFile ~/.ssh/id_atlassian
    ForwardAgent yes

This would be the response when successful, I believe:

logged in as mikeill.

You can use git or hg to connect to Bitbucket. Shell access is disabled.

Some debugging code for roles/deploy/tasks/update.yml got me this far. I'll include it:

- name: See Who I Am
  command: whoami
  register: who_i_am

- debug: msg="{{ who_i_am.stdout }}"

- name: See If I can Connect
  command: ssh -T git@bitbucket.org
  register: git_ssh_test

- debug: msg="{{ git_ssh_test.stdout }}"
- debug: msg="{{ git_ssh_test.stderr }}"

- debug: msg="{{ project_git_repo }}"
- debug: msg="{{ project_version }}"
- debug: msg="{{ project_source_path }}"

- debug: msg="{{ project.repo }}"

Output is:

TASK [deploy : debug] **************************************************************************************************
ok: [ip_address_here_dude]

TASK [deploy : See If I can Connect] ***********************************************************************************
changed: [ip_address_here_dude]

TASK [deploy : debug] **************************************************************************************************
logged in as mikeill.

You can use git or hg to connect to Bitbucket. Shell access is disabled.
ok: [ip_address_here_dude]

TASK [deploy : debug] **************************************************************************************************
ok: [ip_address_here_dude]

TASK [deploy : debug] **************************************************************************************************
ok: [ip_address_here_dude]

TASK [deploy : debug] **************************************************************************************************
ok: [ip_address_here_dude]

TASK [deploy : debug] **************************************************************************************************
ok: [ip_address_here_dude]

TASK [deploy : debug] **************************************************************************************************

Here’s hoping one of you guys is feeling generous with your time…

Possible clue: I logged into the server where the repository is cloned, and a submodule seems to have been modified:

On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
  (commit or discard the untracked or modified content in submodules)

	modified:   site/web/dp/a-repository-here (new commits, modified content)

no changes added to commit (use "git add" and/or "git commit -a")

My issue was that there is a submodule (I know, I know) referenced in the repository to which I hadn't pushed my latest changes.
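A quick way to spot this class of problem is `git submodule status` in the superproject: a leading `+` means the submodule's checked-out commit differs from the commit the superproject records, and if that recorded commit was never pushed, a fresh clone on the server can't fetch it. A throwaway sketch (all names made up) showing the clean case, where the line starts with a space:

```shell
# Build a tiny superproject with one submodule, then inspect it.
set -e
work=$(mktemp -d)
cd "$work"

git init -q sub
(cd sub && git config user.email d@example.com && git config user.name demo &&
 echo lib > lib.txt && git add lib.txt && git commit -qm "sub commit")

git init -q super
cd super
git config user.email d@example.com
git config user.name demo
# file:// submodule URLs need an explicit opt-in on newer Git versions
git -c protocol.file.allow=always submodule --quiet add "$work/sub" sub
git commit -qm "pin submodule"

# Leading space = pinned and checked-out commits match; a leading '+'
# would mean un-recorded (possibly un-pushed) submodule commits.
git submodule status
```

If you see `+`, either push the submodule's commits and update the superproject's pointer, or check the submodule back out at the recorded commit before deploying.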

@swalkinshaw I wonder if it would make sense to have more specific output when the task fails. Also, someone above said that the vagrant user needs to be able to access the repository. Is that the case in your experience? I wouldn't have thought so. Thanks.

This topic has become very confusing. If your remote server can’t clone the git repo then follow the instructions linked in the error message.

That’s probably an improvement. Feel free to open an issue.

Deploys only happen on remote servers, so no, the Vagrant/vagrant user does not need SSH forwarding set up like this.
