I can use Centmin Mod to set up a LEMP (staging/production) server on CentOS 6.5 64-bit with relative ease. It is a great script that does it all for you and lets you pick and choose from a basic menu.
But I would like to keep my environments the same, and as I use bedrock-ansible with Vagrant and the roots/bedrock image, my staging and development servers will not be the same.
Is there a script or Centmin Mod-like setup, similar to what the roots/bedrock Vagrant box uses, to set up my Digital Ocean Ubuntu droplet with (L)EMP with ease? Any recommendations?
Why not just use bedrock-ansible to set up staging/production as well? It's designed for that, and then all your environments will be the same.
OK. Well, I set up an empty Ubuntu 14.04 droplet at Digital Ocean. I did not add keys yet, but can do so later, and understood that you do not need them right away to deploy with Ansible. I wanted to test whether I can set up Bedrock on that server using the Ansible setup.
In site.com/wprmote.com/ansible/group_vars/ I configured the development and production files. Running vagrant locally after the initial setup, as described at https://github.com/roots/bedrock-ansible, went well:
vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'roots/bedrock' is up to date...
==> default: Resuming suspended VM...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection refused. Retrying...
==> default: Machine booted and ready!
==> default: Checking for host entries
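For reference, my production file under group_vars follows the structure from the bedrock-ansible README. A rough sketch with placeholder values (real credentials and repo URL redacted; the exact keys depend on the bedrock-ansible version):

```yaml
# ansible/group_vars/production -- all values are placeholders
mysql_root_password: change_me
wordpress_sites:
  - site_name: wprmote.com
    site_hosts:
      - wprmote.com
    local_path: ../site                        # path to the Bedrock project
    repo: git@github.com:example/wprmote.com.git
    site_install: true
    multisite:
      enabled: false
    env:
      wp_home: http://wprmote.com
      wp_siteurl: http://wprmote.com/wp
      wp_env: production
      db_name: wprmote_production
      db_user: wprmote
      db_password: change_me
```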
Then I tried
./deploy.sh production site.com
and hit a snag
PLAY [Deploy WP site] *********************************************************
GATHERING FACTS ***************************************************************
fatal: [xxx.xxx.xx.xxx] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
TASK: [deploy | Initialize] ***************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/jasper/deploy.retry
xxx.xxx.xx.xxx : ok=0 changed=0 unreachable=1 failed=0
Using -vvvv is perhaps possible when running ssh directly, but not with this script.
Checking out https://github.com/roots/roots-example-project.com/blob/master/ansible/group_vars/production now. Seems I am missing stuff in my production file…
I ran the ansible-playbook command directly and added --ask-pass to prompt for the SSH password:
ansible-playbook -i hosts/$1 deploy.yml --extra-vars="site.com" --ask-pass
SSH password:
PLAY [Deploy WP site] *********************************************************
GATHERING FACTS ***************************************************************
fatal: [xxx.xxx.xx.xxx] => Traceback (most recent call last):
File "/usr/local/Cellar/ansible/1.8.4/libexec/lib/python2.7/site-packages/ansible/runner/__init__.py", line 590, in _executor
exec_rc = self._executor_internal(host, new_stdin)
File "/usr/local/Cellar/ansible/1.8.4/libexec/lib/python2.7/site-packages/ansible/runner/__init__.py", line 792, in _executor_internal
return self._executor_internal_inner(host, self.module_name, self.module_args, inject, port, complex_args=complex_args)
File "/usr/local/Cellar/ansible/1.8.4/libexec/lib/python2.7/site-packages/ansible/runner/__init__.py", line 957, in _executor_internal_inner
conn = self.connector.connect(actual_host, actual_port, actual_user, actual_pass, actual_transport, actual_private_key_file)
File "/usr/local/Cellar/ansible/1.8.4/libexec/lib/python2.7/site-packages/ansible/runner/connection.py", line 51, in connect
self.active = conn.connect()
File "/usr/local/Cellar/ansible/1.8.4/libexec/lib/python2.7/site-packages/ansible/runner/connection_plugins/paramiko_ssh.py", line 136, in connect
self.ssh = SSH_CONNECTION_CACHE[cache_key] = self._connect_uncached()
File "/usr/local/Cellar/ansible/1.8.4/libexec/lib/python2.7/site-packages/ansible/runner/connection_plugins/paramiko_ssh.py", line 152, in _connect_uncached
ssh.load_system_host_keys()
File "/usr/local/Cellar/ansible/1.8.4/libexec/vendor/lib/python2.7/site-packages/paramiko/client.py", line 152, in load_system_host_keys
self._system_host_keys.load(filename)
File "/usr/local/Cellar/ansible/1.8.4/libexec/vendor/lib/python2.7/site-packages/paramiko/hostkeys.py", line 172, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/Cellar/ansible/1.8.4/libexec/vendor/lib/python2.7/site-packages/paramiko/hostkeys.py", line 88, in from_line
raise InvalidHostKey(line, e)
InvalidHostKey: ('127.0.0.1 ssh-rsa xxxxxxxxxxxxxxxx', Error('Incorrect padding',))
Judging by that error, you seem to have a problem with your SSH key. I'd try re-generating and re-adding it.
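For what it's worth, the `InvalidHostKey … 'Incorrect padding'` traceback usually points at a corrupt line in known_hosts (a truncated base64 host key), not at your own keypair. A sketch of the cleanup, done here on a throwaway file so nothing under ~/.ssh is touched; on the real file, `ssh-keygen -R 127.0.0.1` does the same job:

```shell
# Build a demo known_hosts with one healthy-looking entry and one
# truncated entry for 127.0.0.1 (the kind paramiko chokes on).
KH=known_hosts_demo
printf '%s\n' \
  'example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPlaceholderPlaceholderPlace' \
  '127.0.0.1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABtruncated' > "$KH"

# Drop every line recorded for 127.0.0.1; keep the rest.
grep -v '^127\.0\.0\.1 ' "$KH" > "$KH.tmp" && mv "$KH.tmp" "$KH"

cat "$KH"   # only the example.com entry remains
```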
I removed that key from known_hosts. I re-generated a new key
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/jasper/.ssh/id_rsa):
/Users/jasper/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/jasper/.ssh/id_rsa.
Your public key has been saved in /Users/jasper/.ssh/id_rsa.pub.
The key fingerprint is:
xx:xxx:xx:xx:xx:xx jasper@Jaspers-Mac-mini.local
The key's randomart image is:
---------------------
but that seems to be only a new key for my user, not one added to known_hosts for 127.0.0.1 (localhost).
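Right — to spell out the distinction (a sketch in a scratch directory, assuming the openssh client tools are installed): `ssh-keygen` writes your *user* identity files and never touches known_hosts; known_hosts entries appear the first time you connect and answer "yes" to the fingerprint prompt, or can be pre-seeded with `ssh-keyscan`.

```shell
# ssh-keygen creates identity files only; known_hosts is untouched.
mkdir -p demo_ssh
ssh-keygen -q -t rsa -b 2048 -N '' -f demo_ssh/id_rsa

ls demo_ssh
# id_rsa and id_rsa.pub appear; no known_hosts file is created here.

# Server keys land in known_hosts on first connect, or can be pre-seeded
# non-interactively (the IP is a placeholder and must be reachable):
#   ssh-keyscan -H 192.168.50.5 >> ~/.ssh/known_hosts
```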
ansible-playbook -i hosts/$1 deploy.yml --extra-vars="wprmote.com"
PLAY [Deploy WP site] *********************************************************
GATHERING FACTS ***************************************************************
The authenticity of host '192.168.50.5 (192.168.50.5)' can't be established.
RSA key fingerprint is xx:xx:xx:xx:x
Are you sure you want to continue connecting (yes/no)? yes
fatal: [vagrant] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
fatal: [188.xxx.xx.xxx] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
fatal: [192.168.50.5] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
TASK: [deploy | Initialize] ***************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/jasper/deploy.retry
188.xxx.xx.xxx : ok=0 changed=0 unreachable=1 failed=0
192.168.50.5 : ok=0 changed=0 unreachable=1 failed=0
vagrant : ok=0 changed=0 unreachable=1 failed=0
I frankly do not understand this anymore. I was able to run and work with Vagrant:
vagrant ssh
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-35-generic x86_64)
Last login: Sun May 3 14:42:44 2015 from 10.0.2.2
and I can SSH to my Digital Ocean droplet, but I do not understand this issue with keys at all.
Your deploy command is strange. Why not just use deploy.sh?
./deploy.sh production wprmote.com
Or the manual command should look like:
ansible-playbook -i hosts/production deploy.yml --extra-vars="site=wprmote.com"
(replace production with your environment name if it's different)
[quote="swalkinshaw, post:8, topic:3673"]
./deploy.sh production wprmote.com
[/quote]
./deploy.sh production wprmote.com
PLAY [Deploy WP site] *********************************************************
GATHERING FACTS ***************************************************************
fatal: [192.168.50.5] => SSH Error: ssh: connect to host 192.168.50.5 port 22: Operation timed out
while connecting to 192.168.50.5:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [deploy | Initialize] ***************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/jasper/deploy.retry
192.168.50.5 : ok=0 changed=0 unreachable=1 failed=0
At 192.168.50.5 I do not have a site running as far as I know, so that should be the issue. Vagrant does not run on that IP and just uses localhost on the VirtualBox VM. And that would be development, not staging, anyway…
Could you tell me how you normally do the staging hosts file? Should I perhaps, for now, just create a subdomain of wprmote.com and work with that? I am not sure why the initial setup is with the local network IP. I am just working on this for testing purposes, and once it all works I want to use it for new projects where I start from scratch.
NB: For existing sites that I will migrate to DO, I would prefer to have a droplet with LEMP running that is set up to deal with permalinks. It would be good to have an image like that at the ready, but that would be another issue. If you do know of ready-made images for that, or an easy way, do tell though.
It's best to create a subdomain and just put that into hosts/staging.
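Assuming the same layout as the roots example project, hosts/staging would then just list that subdomain. A sketch (the subdomain is a placeholder, and the group names must match the `hosts:` lines in your playbooks):

```ini
# hosts/staging -- Ansible inventory
[web]
staging.wprmote.com

[staging:children]
web
```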