Running single Ansible tasks in development

Probably an easy problem, but: on my first vagrant up in development, I enabled caching in development/wordpress_sites.yml.

However, later, I decided I didn’t want caching enabled on development. I changed my wordpress_sites.yml and did a vagrant reload…and caching was still enabled.

So I did vagrant reload --provision and caching was disabled, which is fine.

But…what if I wanted to keep everything else about my server, and change that one configuration? Can I re-run a single ansible task (I guess it would be the nginx task?) with new configuration, without re-provisioning?

I know I could just SSH into the box and edit the appropriate configuration file and restart nginx, but that doesn’t seem ideal.

When I do ansible-playbook -i hosts/development dev.yml -t nginx, I get SSH errors–I’m guessing because I’m running tasks out of order or something.


As you discovered, vagrant reload doesn’t necessarily run the provisioning, which is what was needed to apply your caching change. I believe vagrant provision would have been sufficient, without the need to reload.

You also discovered that tags allow you to run portions of your playbook. The nginx tag is assigned to the entire nginx role, so adding the -t nginx option would run all the tasks in that role. As an aside, Ansible also makes it possible to add tags to individual tasks, for even more precision.
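For illustration only (this isn’t taken from Trellis itself, and the task name, file names, and tag are all hypothetical), a task-level tag in an Ansible role looks something like this:

- name: Template an nginx site config
  template:
    src: example-site.conf.j2   # hypothetical template name
    dest: /etc/nginx/sites-available/example-site.conf
  tags: [example-site-conf]

You could then run just that task (and anything else sharing the tag) by adding the -t example-site-conf option to ansible-playbook.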

Your attempt to run the tagged nginx role by adding -t nginx to the end of your ansible-playbook command would work without SSH errors for staging or production, but development is a little different. The SSH to dev is normally handled through vagrant by the inventory file it sets up at your_project/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. You’ll see in that file a line like this:

default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/Users/bleh/...

With that background, and until (and unless) roots/trellis#314 is merged, you have a few options for using tags.

1 - Edit your hosts/development inventory. You could replace 127.0.0.1 with the line above that includes the ssh port and key file location. Then your ansible-playbook command should work with dev.

2 - Alternatively, you could feed Ansible the port and key file information by making an entry in your ~/.ssh/ssh_config file for 127.0.0.1. Again, then the ansible-playbook command should work with dev.

3 - Alternatively, you could just run vagrant provision after specifying the nginx tag in your Vagrantfile. Add ansible.tags = ['nginx'] to the config.vm.provision :ansible section. Remember to remove or comment out the ansible.tags definition afterward.
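In case it helps, here’s a rough sketch of how that temporary edit might look in the Vagrantfile (the playbook name and the rest of the block are just placeholders for whatever your Vagrantfile already contains):

config.vm.provision :ansible do |ansible|
  ansible.playbook = 'dev.yml'  # placeholder: keep whatever your Vagrantfile already sets here
  ansible.tags = ['nginx']      # temporary: limit the next provision to the nginx tag
end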

Note that vagrant sometimes changes the port for a vm if a second vm is booted, which would require you to notice and change the port you’ve specified in option 1 or 2. So, option 3 is maybe most reliable, but requires an edit to your Vagrantfile each time you want to specify a tag.


Thanks for the explanation! Learning. It’s neat.

Didn’t realize vagrant provision could be done without reloading.

Just to make sure I’m clear on what’s happening here:

  • In order for Ansible to run tasks on the VM, it must have SSH access
  • Vagrant automatically generates an inventory file with SSH information for Ansible to use to provision the box, but that file is only used on provision
  • Since I’m running Ansible from the CLI, it’s not using Vagrant’s rules to provision the box–it’s looking at the development inventory file in hosts/, which doesn’t have correct information to access the box
  • Computer says no when Ansible tries to access the VM to run the tasks requested

Does that sound right?

Only remaining questions:

There’s no private key specified in that file. It ends at the port specification. So when I add the port to the development inventory file and re-run the Ansible command, I get a host key verification failed error, rather than a connection refused error. I haven’t edited the Vagrantfile in any way. Where the heck does Ansible get its key?

Also: would the nginx-tagged tasks have been the correct tasks to run, if I only wanted to change an nginx config (e.g. if I wanted to keep my DB and everything else, but wanted to disable caching)?

Thanks again for the explanation. I learn something new every time I start a new Trellis/Sage project.

Your understanding expressed in the four bullet points looks correct to me. :white_check_mark:

Strange. I haven’t seen it missing before. I’m guessing you can find the path to vagrant’s key file by running vagrant ssh-config. You could then add it to your hosts/development, like option 1 above. Or, if you’d like to try option 2, you can get the exact entry to put in your ~/.ssh/ssh_config by asking vagrant to output an entry whose Host matches the 127.0.0.1 host name in your hosts/development:

vagrant ssh-config --host 127.0.0.1
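The output will look roughly like this (the Host value follows the --host option; the port, key path, and exact settings depend on your machine and Vagrant version):

Host 127.0.0.1
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/you/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL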

I think you could copy that output, paste it into your ssh_config, and be all set. The only catch is the potential for the port to change if you boot another VM.

Glad you asked, because in the case of FastCGI caching, I’m pretty sure you just need to rerun the wordpress-setup role, not the nginx role. I’m guessing your toggling of caching on/off was just flipping cache.enabled between true and false. That’s an nginx-related setting specific to each site, so it’s handled in the wordpress-site.conf.j2 template, which is created/updated as part of the wordpress-setup role.
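So, assuming the wordpress-setup role is tagged wordpress-setup in dev.yml (the same way the nginx role is tagged nginx), a command like this should re-run just those site-specific tasks once the SSH details above are sorted out:

ansible-playbook -i hosts/development dev.yml -t wordpress-setup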

If you wanted to change other nginx configs that weren’t specific to your WordPress site, they may be handled in the nginx role, so you’d run just the nginx role. I guess you’d have to look through to see where your target task/config is handled, then run the corresponding role.
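If it helps with that survey, ansible-playbook can show what a playbook would run without actually connecting to the VM, for example:

ansible-playbook -i hosts/development dev.yml --list-tasks

Newer Ansible releases also offer --list-tags, which prints just the available tags.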

Both vagrant ssh-config and vagrant ssh-config --host 127.0.0.1 return IdentityFile [myhomedir]/.vagrant.d/insecure_private_key.

Oddly, when I add that path to the inventory file and run ansible-playbook, I still get the Host key verification failed error. Using the -vvvv flag shows that Ansible is looking for the correct private key:

EXEC ssh -C -tt -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/vincent/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=2222 -o IdentityFile="/Users/vincent/.vagrant.d/insecure_private_key" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 127.0.0.1 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1443065865.06-94958710150446 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1443065865.06-94958710150446 && echo $HOME/.ansible/tmp/ansible-tmp-1443065865.06-94958710150446'

…so something isn’t right.

This isn’t hugely important to get working, I’m just trying to understand how all these pieces fit together.

Correct. Thanks for pointing me in the right direction–I think I looked at every role but wordpress_setup.

I need to study Ansible a bit more so I can figure these things out more rapidly on my own. The layers and layers of abstraction take me awhile to wrap my head around. :dizzy_face:

OK, now that you have nailed down the path to the key file, google the Host key verification failed error message and how people resolve it. I think it may not be an ansible/trellis/vagrant issue, but just that you need to clear your VM’s entry from your ~/.ssh/known_hosts file and try the connection again. Sorry I didn’t notice that exact error message earlier. We’ve been stepping toward success.
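For example, ssh-keygen can remove the stale entries; the bracketed form matches entries stored under a non-standard port like Vagrant’s forwarded 2222 (adjust the port if yours differs):

ssh-keygen -R 127.0.0.1
ssh-keygen -R "[127.0.0.1]:2222"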

You’ll also notice that the vagrant ssh-config output includes StrictHostKeyChecking no. That setting makes Vagrant’s own connections ignore the fact that the host key may have changed since the first time you said “yes” to connecting to the VM host (or Vagrant said it for you in the background), so Vagrant never gets hung up on Host key verification failed.

Fingers crossed that removing the vm info from known_hosts will resolve it. If not, you can still try option 3 from way up above to avoid many of these issues.

That did it! There were three entries for 127.0.0.1 in known_hosts; clearing them out and pointing Ansible to the correct key worked.

Good to know about the StrictHostKeyChecking. Wasn’t sure what that meant.

Option 3 definitely seems more reasonable, and on a project with a tighter deadline that’s what I would have done. I just really wanted to know what was failing, and why. And now I know.

Without Roots, these tools would always be a series of black boxes strung together by cryptic config files to me. It’s important to me to know how they work.

So, thanks! As always.
