Failed to lock apt for exclusive operation

I can’t seem to have any luck with provisioning.

TASK: [common | Update Apt] *************************************************** 
failed: [default] => {"failed": true}
msg: Failed to lock apt for exclusive operation

FATAL: all hosts have already failed -- aborting

I googled a bit, and people suggested adding sudo to the task:

---
- name: Update Apt
  apt: update_cache=yes
  sudo: True

- name: Checking essentials
  apt: name="{{ item }}" state=present
  sudo_user: root
  with_items:
  - python-software-properties
  - python-pycurl
  - build-essential
  - python-mysqldb
  - curl
  - git-core
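
(A note on the snippet above for anyone copying it: adding sudo per task only elevates the tasks you touch, so tasks in other roles, like the mariadb role below, still run unprivileged. Also, as far as I know, sudo_user only chooses which user to sudo to and does not enable escalation by itself; a fully elevated version of the second task would look roughly like this, a minimal sketch with the package list abbreviated:)

- name: Checking essentials
  apt: name="{{ item }}" state=present
  sudo: True       # enables escalation; sudo_user alone is not enough
  sudo_user: root  # the user to become (root is the default anyway)
  with_items:
  - python-software-properties
  - build-essential
  # ...rest of the package list as above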

That gets me through Update Apt, only to have this thrown in my face:

TASK: [mariadb | Add MariaDB MySQL apt-key] *********************************** 
failed: [default] => {"cmd": "apt-key add -", "failed": true, "rc": 1}
stdout: ERROR: This command can only be used by root.


FATAL: all hosts have already failed -- aborting
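
(Presumably the same root cause: the mariadb role’s task is not elevated, and apt-key refuses to run as a normal user. A task-level band-aid would look roughly like this; the apt_key arguments are illustrative, since the actual role defines its own key id and keyserver:)

- name: Add MariaDB MySQL apt-key
  apt_key: keyserver=keyserver.ubuntu.com id=0xcbcb082a1bb943db state=present
  sudo: True  # apt-key can only be used by root, as the error says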

I figured there must be something else wrong, as sudo shouldn’t be required?

Somebody suggested a reboot, so I did just that.

Now I’m stuck here:

GATHERING FACTS *************************************************************** 
fatal: [default] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue

Output with -vvvv enabled:

ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false PYTHONUNBUFFERED=1 ansible-playbook --private-key=/Users/intelligence/.vagrant.d/insecure_private_key --user=vagrant --limit='default' --inventory-file=/Users/intelligence/WWW/Zahnarzt-Altena/bedrock-ansible/.vagrant/provisioners/ansible/inventory --extra-vars={"ansible_ssh_user":"vagrant","user":"vagrant"} -vvvv ./site.yml

PLAY [WordPress Server: Install LEMP Stack with PHP 5.5 and MariaDB MySQL] **** 

GATHERING FACTS *************************************************************** 
<127.0.0.1> ESTABLISH CONNECTION FOR USER: vagrant
<127.0.0.1> REMOTE_MODULE setup
<127.0.0.1> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/Users/intelligence/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=2222', '-o', 'IdentityFile="/Users/intelligence/.vagrant.d/insecure_private_key"', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '127.0.0.1', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1416563106.0-32586572297426 && echo $HOME/.ansible/tmp/ansible-tmp-1416563106.0-32586572297426'"]
fatal: [default] => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/intelligence/.ssh/config
debug1: /Users/intelligence/.ssh/config line 1: Applying options for 127.0.0.1
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/Users/intelligence/.ansible/cp/ansible-ssh-127.0.0.1-2222-vagrant" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.
debug2: fd 3 setting O_NONBLOCK
debug1: fd 3 clearing O_NONBLOCK
debug1: Connection established.
debug3: timeout: 10000 ms remain after connect
debug3: Incorrect RSA1 identifier
debug3: Could not load "/Users/intelligence/.vagrant.d/insecure_private_key" as a RSA1 public key
debug1: identity file /Users/intelligence/.vagrant.d/insecure_private_key type -1
debug1: identity file /Users/intelligence/.vagrant.d/insecure_private_key-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH*
debug2: fd 3 setting O_NONBLOCK
debug3: put_host_port: [127.0.0.1]:2222
debug3: load_hostkeys: loading entries for host "[127.0.0.1]:2222" from file "/dev/null"
debug3: load_hostkeys: loaded 0 keys
debug1: SSH2_MSG_KEXINIT sent
Connection closed by 127.0.0.1

Can you describe your setup? You’ve been having a lot of issues, so I’m just wondering what’s going on.

Things like:

  • host OS
  • Vagrant version
  • Ansible version
  • Version of bedrock-ansible
  • etc.

I suggest re-installing Vagrant to start with. Delete ~/.vagrant. Delete ~/.vagrant.d/insecure_private_key. See this thread: https://github.com/roots/bedrock-ansible/issues/46

I’ve got no .vagrant folder in my user folder (should I?); however, there was one in the bedrock-ansible folder, which I removed. I also dropped the insecure_private_key.

I reran vagrant up and got stuck here again (I removed the sudo additions, etc.):

TASK: [common | Update Apt] *************************************************** 
failed: [default] => {"failed": true}
msg: Failed to lock apt for exclusive operation

FATAL: all hosts have already failed -- aborting

  • Host OS: OS X 10.9.5
  • Vagrant: 1.6.5
  • Ansible: 1.7

How can I see the version of bedrock-ansible? I downloaded it around the time this thread was started.

Edit: Just updated Ansible to 1.8.1, with no luck :-(

Edit2: I found a post about locked apt. Since apt could be locked by another service (not that I see how, as the VM is entirely new and was just booted), the advice was to run lsof and check for any offending processes. So I SSH’ed in, ran lsof, and got this: http://paste.ubuntu.com/9298563/ (not sure if the permission denied output is common?). Also, on the initial vagrant up I always get: default: Bindfs seems to not be installed on the virtual machine

Edit3: Running apt-get update in the VM (as the vagrant user) gets me:

E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

So the apt lock error was a permissions problem all along: without sudo, the vagrant user can’t open the apt/dpkg lock files. I was looking through the repo and saw that there had been some changes; specifically, I found that in my site.yml I had # sudo: yes, whereas in the repo it was not commented out. That was probably what was causing all the fuss.
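
For anyone who lands here with the same symptoms: the fix is simply to leave sudo: yes uncommented at the play level in site.yml, so that every role’s tasks are elevated. Roughly like this (a sketch only; the hosts value and role list are illustrative, based on the bedrock-ansible layout of the time):

---
- name: "WordPress Server: Install LEMP Stack with PHP 5.5 and MariaDB MySQL"
  hosts: all
  sudo: yes  # this was commented out in my copy; without it every apt/apt-key task runs unprivileged
  roles:
    - common
    - mariadb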