Just updated, thx
For some reason I assumed that (when defined in vars) it would also update the hosts files.
Now getting SSH issue on the remote server but will check my key configs, thx
Seems to be failing on `admin`:
```
PLAY [WordPress Server - Install LEMP Stack with PHP 5.6 and MariaDB MySQL] ***

GATHERING FACTS ***************************************************************
<45.55.25.7> ESTABLISH CONNECTION FOR USER: admin
<45.55.25.7> REMOTE_MODULE setup
<45.55.25.7> EXEC ssh -C -tt -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/bduzita/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=admin -o ConnectTimeout=10 45.55.25.7 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1440489223.17-128470588729196 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1440489223.17-128470588729196 && echo $HOME/.ansible/tmp/ansible-tmp-1440489223.17-128470588729196'
fatal: [45.55.25.7] => SSH Error: Permission denied (publickey,password).
    while connecting to 45.55.25.7:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
```
> `ESTABLISH CONNECTION FOR USER: admin`

If it is trying to connect as `admin` at this point, that means that `root` wasn't able to connect in the previous tasks (assuming you're using a Trellis version at least as new as db63a89, July 26, 2015). However, I'm guessing you haven't yet had a successful run of `server.yml` on staging, so root login probably couldn't be disabled yet. This leaves the question of why `root` can't connect.
**Manual ssh.** Let us know if you think you've already disabled root login. If not, are you saying that running `ssh root@45.55.25.7` succeeds? If you've disabled root, has the full `server.yml` playbook ever succeeded on this staging server? If so, let us know if running `ssh admin@45.55.25.7` succeeds.
**known_hosts.** Could you open `~/.ssh/known_hosts` and remove any entries for 45.55.25.7 and your staging server domain name, then see if the `ansible-playbook` command succeeds in connecting?
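If it helps, here's one way to do that from the command line. This is just a sketch on a throwaway file (`staging.example.com` and the key strings are placeholders); for real use you'd operate on `~/.ssh/known_hosts`:

```shell
# Demo on a throwaway file; for real use, operate on ~/.ssh/known_hosts
# ("staging.example.com" and the key strings below are stand-ins).
KNOWN_HOSTS=$(mktemp)
cat > "$KNOWN_HOSTS" <<'EOF'
45.55.25.7 ssh-rsa AAAAstale1
staging.example.com ssh-rsa AAAAstale2
github.com ssh-rsa AAAAkeep
EOF
# Drop entries whose host field matches the server IP or domain name
grep -v -E '^(45\.55\.25\.7|staging\.example\.com)([, ]|$)' "$KNOWN_HOSTS" \
  > "$KNOWN_HOSTS.clean" && mv "$KNOWN_HOSTS.clean" "$KNOWN_HOSTS"
cat "$KNOWN_HOSTS"
```

OpenSSH also has a one-step option for this: `ssh-keygen -R 45.55.25.7` removes all entries for that host from your default known_hosts file.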
**Debug info.** If it still fails, could you run it with `-vvvv` for more verbose output and share the output here?

```
ansible-playbook -i hosts/staging server.yml -vvvv
```
**Trellis on your machine.** Have you been able to get the `server.yml` playbook to work on any staging/production server before? (That would tell us whether your machine/environment is already set up correctly for Trellis.) Any potentially relevant customizations you've made? If there are many, you might try getting a vanilla Trellis install working first, then start adding customizations and testing as you go.
Below is what `-vvvv` spits out:
```
PLAY [Determine Remote User] **************************************************

TASK: [remote-user | Determine whether to connect as root or admin_user] ******
<127.0.0.1> REMOTE_MODULE command ssh -o PasswordAuthentication=no "echo can_connect" #USE_SHELL
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1440518457.65-23357570418657 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1440518457.65-23357570418657 && echo $HOME/.ansible/tmp/ansible-tmp-1440518457.65-23357570418657']
<127.0.0.1> PUT /var/folders/2j/mrpl8j91291_1p5q7rfz_d700000gn/T/tmpbV4TpX TO /Users/bduzita/.ansible/tmp/ansible-tmp-1440518457.65-23357570418657/command
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/bduzita/.ansible/tmp/ansible-tmp-1440518457.65-23357570418657/command; rm -rf /Users/bduzita/.ansible/tmp/ansible-tmp-1440518457.65-23357570418657/ >/dev/null 2>&1']
ok: [45.55.25.7 -> 127.0.0.1] => {"changed": false, "cmd": "ssh -o PasswordAuthentication=no root@45.55.25.7 \"echo can_connect\"", "delta": "0:00:00.589608", "end": "2015-08-25 09:00:58.301772", "failed": false, "failed_when_result": false, "rc": 255, "start": "2015-08-25 09:00:57.712164", "stderr": "Permission denied (publickey,password).", "stdout": "", "stdout_lines": [], "warnings": []}

TASK: [remote-user | Set remote user for each host] ***************************
<45.55.25.7> ESTABLISH CONNECTION FOR USER: bduzita
ok: [45.55.25.7] => {"ansible_facts": {"ansible_ssh_user": "admin"}}

PLAY [WordPress Server - Install LEMP Stack with PHP 5.6 and MariaDB MySQL] ***

GATHERING FACTS ***************************************************************
<45.55.25.7> ESTABLISH CONNECTION FOR USER: admin
<45.55.25.7> REMOTE_MODULE setup
<45.55.25.7> EXEC ssh -C -tt -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/bduzita/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=admin -o ConnectTimeout=10 45.55.25.7 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1440518458.34-81069951706844 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1440518458.34-81069951706844 && echo $HOME/.ansible/tmp/ansible-tmp-1440518458.34-81069951706844'
fatal: [45.55.25.7] => SSH Error: Permission denied (publickey,password).
    while connecting to 45.55.25.7:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

TASK: [common | Validate Ansible version] *************************************
FATAL: no hosts matched or all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/Users/bduzita/server.retry

45.55.25.7                 : ok=2    changed=0    unreachable=1    failed=0
```
In `group_vars/all` I have:
```yaml
users:
  - name: "{{ web_user }}"
    groups:
      - "{{ web_group }}"
    keys:
      - https://github.com/buretta.keys
  - name: "{{ admin_user }}"
    groups:
      - sudo
    keys:
      - https://github.com/buretta.keys
```
In `group_vars/staging` I have:
```yaml
github_ssh_keys:
  - username: buretta
    authorized:
      - "{{ web_user }}"
```
One thing I am unsure of: when I run `ansible-playbook -i hosts/staging server.yml --tags "github-ssh-keys"` I get back the following:

```
ERROR: tag(s) not found in playbook: github-ssh-keys. possible values: common,composer,configuration,fail2ban,ferm,hhvm,logrotate,mariadb,memcached,nginx,ntp,package,php,remote-user,service,sshd,ssmtp mail,swapfile,users,wordpress,wordpress-setup,wp-cli
```
Thanks for the quality response, trying things, and posting output.

`github_ssh_keys` was completely removed/replaced by roots/trellis#247 (June 23, 2015). All SSH key handling is done using that `users` dictionary in `group_vars/all`. In the future, if you want to run just the `users` role, you can use `--tags "users"`, but we'll need to get your SSH connection working first. (You can remove your `github_ssh_keys` definition from your `group_vars/staging`.)
Have you been able to get the `server.yml` playbook to work on any staging/production server before?

Would you mind testing whether things will work by forcing the playbook to try connecting as `root` instead of how it is ending up trying `admin`? It shouldn't work, but if it does, I think it would show that I missed something in my logic in roots/trellis#274. To try it…
```yaml
hosts:
remote_user: root
```
It seems curious that manual `ssh root@45.55.25.7` succeeds but things fail when Ansible tries `ssh -o PasswordAuthentication=no root@45.55.25.7`. If you're willing, you could also tell us what the output is when you run this:

```
ssh -o PasswordAuthentication=no root@45.55.25.7 "echo can_connect" || echo cannot_connect
```

I'm wondering, probably naively, whether the extra option `-o PasswordAuthentication=no` somehow isn't working well on your system.
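For what it's worth, the `|| echo cannot_connect` pattern just branches on the command's exit status. A minimal sketch of that logic (the `check_connect` helper is hypothetical, and `true`/`false` stand in for the real ssh probe, which can't run here):

```shell
# Hypothetical helper mirroring the probe above: report connectivity
# based on the exit status of the command it wraps.
check_connect() {
  if "$@" >/dev/null 2>&1; then
    echo can_connect
  else
    echo cannot_connect
  fi
}
check_connect true    # stands in for a successful key-only ssh login
check_connect false   # stands in for ssh exiting 255 on "Permission denied"
```

An ssh auth failure exits with status 255, which is why Ansible's log above shows `"rc": 255` for the failed probe.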
`ssh -o` echoed out that root could not connect. I must have saved the password at some point, which allowed me to connect via `ssh root@` without continually entering the password. I tried this on another machine as well and `ssh root@` required a password… argh, my bad.

The main issue was that I tried to add the keys to DigitalOcean after the droplet was created and failed to read their docs explaining that you cannot do this. The following worked without having to edit the

```
cat ~/.ssh/id_rsa.pub | ssh root@[your.ip.address.here] "cat >> ~/.ssh/authorized_keys"
```
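For anyone following along, the server-side effect of that pipe can be sketched locally like this (the temp file and key string are stand-ins for `~/.ssh/authorized_keys` on the droplet and your real public key):

```shell
# Local demo of what `cat >> ~/.ssh/authorized_keys` does on the server
# (paths and the key string are stand-ins).
AUTH=$(mktemp)                            # stands in for ~/.ssh/authorized_keys
PUBKEY='ssh-rsa AAAAexample user@laptop'  # stands in for ~/.ssh/id_rsa.pub
echo "$PUBKEY" >> "$AUTH"                 # append the public key line
chmod 600 "$AUTH"                         # sshd may ignore the file if permissions are too open
grep -c 'ssh-rsa' "$AUTH"                 # one key installed
```

`ssh-copy-id root@your.ip.address.here` performs the same append (plus permission fixes) in one step, as long as password auth still works.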
Lots of docs to read
Staging and production are set up! However, I have some deploy issues now. When running `ansible-playbook -i hosts/production server.yml` I get the following error:
```
TASK: [mariadb | Install MariaDB MySQL server] ********************************
ok: [45.55.25.7]

TASK: [mariadb | Start MariaDB MySQL Server] **********************************
ok: [45.55.25.7]

TASK: [mariadb | Set root user password] **************************************
failed: [45.55.25.7] => (item=45.55.25.7) => {"failed": true, "item": "45.55.25.7"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [45.55.25.7] => (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [45.55.25.7] => (item=::1) => {"failed": true, "item": "::1"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [45.55.25.7] => (item=localhost) => {"failed": true, "item": "localhost"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
```
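(In this thread the failure was traced to a recently merged PR, per the next reply, but for reference: the `~/.my.cnf` the error message mentions is a MySQL/MariaDB client options file that the Ansible mysql modules can fall back to for credentials. A sketch, written to a temp file here rather than the real `~/.my.cnf`, with a placeholder password:)

```shell
# Sketch of the ~/.my.cnf client-credentials file the error refers to
# (temp file and placeholder password; real path is ~/.my.cnf on the server).
MYCNF=$(mktemp)
cat > "$MYCNF" <<'EOF'
[client]
user=root
password=REPLACE_WITH_mariadb_root_password
EOF
chmod 600 "$MYCNF"          # keep credentials private
grep '^user=' "$MYCNF"      # confirm the client section was written
```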
You may be running into the issue @austin mentioned here, which arose from a PR merged about 24 hours ago. If you can, I'd suggest you rebuild your droplet (which starts it over from scratch without a full destroy), then provision fresh.
Will try this, thx… will post back how it goes. It may be a while before I get a chance, though.
I was able (with a brand-new droplet) to get production set up and deploy to it. However, when I attempted to do this for staging, the error persisted. It seems I basically flipped the error: whichever environment was set up first, the second one resulted in a failure.
```
TASK: [mariadb | Set root user password] **************************************
failed: [107.170.225.20] => (item=107.170.225.20) => {"failed": true, "item": "107.170.225.20"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [107.170.225.20] => (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [107.170.225.20] => (item=::1) => {"failed": true, "item": "::1"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [107.170.225.20] => (item=localhost) => {"failed": true, "item": "localhost"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
```
Since I am dealing with a vanilla install, I may pull the most recent repo and start the whole process from scratch to see what happens.
Another question: I have production set up on the DO droplet, but it is in `/srv/www/duzita.com/current/web/app`, while DO docs suggest that the web root be located at `/var/www/html`. Do I need to configure the server now to look where the files are, vs. where a default WP site normally sits?
The Trellis playbooks `server.yml` and `deploy.yml` will configure your server to find the website files in `/srv/www/duzita.com/current/` without any problem. No action needed there.
Yes, I think it would be a good idea to clone the latest Trellis and try it on a new droplet. Try it first with no customizations other than adding your IP to `hosts/staging`, changing the site key, and adding a site host. That's probably what you're doing, since you say it is a "vanilla install." Once it's working, start adding customizations, testing as you go. That just makes it easier to debug.

Any chance you've been using the same IP/droplet for `hosts/staging` as you are for `hosts/production`? I'm not certain that would be a problem, but I'd recommend separate droplets for each.
Yes, I've been using the same droplet for both staging and production. You should be able to use the same droplet for multiple production sites though, correct? Haven't tried, so I'm assuming a bit.

Will start from scratch regardless. I'd kinda like to use a single droplet, though, so I'll try that first.
Yes. You'd add multiple sites to your `wordpress_sites` list, e.g., in `group_vars/production`:
```yaml
wordpress_sites:
  site1.com:
    ⋮
  site2.com:
    ⋮
```
See this thread for ideas: Staging and Production on same VPS - #23 by fullyint
I'm gonna wait before I try that approach; running `server.yml` twice is where I'm staying for now.

If you run into the WP config screen after the deploy (which I have), is there something specific causing that? I read this post, but my repo has the files needed: WP setup screen after Capistrano production deploy
Off the top of my head, I'm not sure what would cause that. Have you made sure your site keys are different for staging and production?

You will see the WP install screen after your very first deploy, but you shouldn't see `wp-admin/setup-config.php` asking for database info etc.
Sry, it was the install screen, not the DB and config screen.
I've redone the vanilla setup. Things worked; however, my attempt to have both staging and production on the same droplet was where I got stuck. I was assuming the Ansible setup was intended for this, but (as you mentioned) two separate droplets are preferred: one for staging and a second for production.

Reading this thread made things clearer: Staging and Production on same VPS - #8 by btamm

If I am going to have staging and production on the same server, I can ignore staging and use a subdomain defined in the production `group_vars`.
Final assumption: if I go with the above, I would then run

```
./deploy.sh production mysite.com
./deploy.sh production staging.mysite.com
```

Is there a way to target a specific branch on deploy?
Nice work!

From the Trellis README:

> `branch` - the branch name, tag name, or commit SHA1 you want to deploy (default: `master`)
Is there a deploy command that allows overriding the default branch that is already defined in `group_vars`?

Maybe it's not the most common workflow, since you could do this all locally and then merge to staging and deploy. But what if I wanted to override what is defined in staging (i.e., `branch: staging`) and push `branch: staging-test-feature` without having to merge that branch back to staging in order to deploy it?

i.e.: `./deploy.sh staging <branch_override> staging.mysite.com`
You can correct my understanding of what you'd like to do. If you usually have `branch: staging` in `group_vars/staging/wordpress_sites.yml` but you'd like to test a new feature you've added to a branch named `staging-test-feature`, then I think you'd just do this:

1. Push your `staging-test-feature` branch to your remote repo.
2. Change to `branch: staging-test-feature` on your local machine. There's no need to commit this temporary change to your remote repo, because the deploy will read your local machine's Ansible files even though it deploys the remote repo's Bedrock project.

Trellis does not currently accommodate passing the branch as a CLI parameter. If you're especially motivated to have a CLI branch parameter instead of temporarily modifying `branch` in `group_vars/staging/wordpress_sites.yml`, you could edit `deploy.yml` like this:
```diff
   vars:
     project: "{{ wordpress_sites[site] }}"
     project_root: "{{ www_root }}/{{ site }}"
+    project_version: "{{ branch | default(project.branch) }}"
```
and run a command like this:

```
ansible-playbook deploy.yml -i hosts/staging -e "site=example.com branch=staging-test-feature"
```
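The `default` filter in that added line means a `-e` override wins and, when none is passed, the `group_vars` value is used. A rough shell analog of `{{ branch | default(project.branch) }}`:

```shell
# Rough shell analog of the Jinja expression {{ branch | default(project.branch) }}
unset branch                          # start with no CLI override
project_branch=staging                # the value from wordpress_sites in group_vars
echo "${branch:-$project_branch}"     # no override: prints "staging"
branch=staging-test-feature           # simulates -e "branch=staging-test-feature"
echo "${branch:-$project_branch}"     # override wins: prints "staging-test-feature"
```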
Perfect, makes sense.