Trellis Provision: Failed to update apt cache: unknown reason

Hello,

I’m currently having issues re-provisioning my Staging server on DigitalOcean. I’m able to re-provision locally and in production using the Trellis CLI, but I have no idea why it’s failing on Staging. I can see in the logs that it failed to connect to the host via SSH, but I can SSH into all environments without any issues using trellis ssh staging. Any ideas what’s causing it to fail?

Error log and versions below:

TASK [common : Update apt packages] ********************************************

<server_name> Failed to connect to the host via ssh: OpenSSH_9.0p1, LibreSSL 3.3.6

fatal: [server_name]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_change_held_packages": false,
            "allow_downgrade": false,
            "allow_unauthenticated": false,
            "autoclean": false,
            "autoremove": false,
            "cache_valid_time": 0,
            "clean": false,
            "deb": null,
            "default_release": null,
            "dpkg_options": "force-confdef,force-confold",
            "fail_on_autoremove": false,
            "force": false,
            "force_apt_get": false,
            "install_recommends": null,
            "lock_timeout": 60,
            "only_upgrade": false,
            "package": null,
            "policy_rc_d": null,
            "purge": false,
            "state": "present",
            "update_cache": true,
            "update_cache_retries": 5,
            "update_cache_retry_max_delay": 12,
            "upgrade": null
        }
    },
    "msg": "Failed to update apt cache: unknown reason"
}

macOS Ventura 13.1 (M1)
Vagrant - 2.2.18
Trellis - 1.19.0
Trellis CLI - 1.9.0
DigitalOcean Droplet: Ubuntu 20.04.3

I don’t think this is your issue because the error doesn’t look the same, but sometimes provisioning or deploys fail to connect via SSH for me because the SSH key is no longer in my keychain for whatever reason.

Try this command to add it back in and then try re-provisioning.

ssh-add --apple-use-keychain ~/.ssh/id_ed25519
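You can check first whether the agent still has any keys loaded (the key path above is just the default, so adjust it if yours differs):

ssh-add -l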

That’s very weird :thinking: It always fails on that same task?

Might be useful to comment out or delete that task and see what happens.
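If you want to try that, the task name from the output should locate it in the common role, assuming a standard Trellis layout where it lives under roles/common in your trellis directory:

grep -rn "Update apt packages" roles/common/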


Also, running apt update manually on the staging server could help reveal the error that occurs when updating apt, e.g. a network issue, insufficient disk space, or a stray apt lockfile.
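For example, on the staging server (fuser comes from psmisc, which a stock Ubuntu droplet should already have, but that’s an assumption):

sudo apt update        # shows the actual apt failure
df -h                  # rule out a full disk
sudo fuser /var/lib/dpkg/lock-frontend /var/lib/apt/lists/lock   # rule out a stray lock held by another process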

@swalkinshaw Yes, I did try removing the task you mentioned, and it failed again, this time on the task below.

TASK [common : Checking essentials] ********************************************

<server_name> Failed to connect to the host via ssh: OpenSSH_9.0p1, LibreSSL 3.3.6

failed: [server_name] (item=build-essential) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_change_held_packages": false,
            "allow_downgrade": false,
            "allow_unauthenticated": false,
            "autoclean": false,
            "autoremove": false,
            "cache_valid_time": 3600,
            "clean": false,
            "deb": null,
            "default_release": null,
            "dpkg_options": "force-confdef,force-confold",
            "fail_on_autoremove": false,
            "force": false,
            "force_apt_get": false,
            "install_recommends": null,
            "lock_timeout": 60,
            "name": "build-essential",
            "only_upgrade": false,
            "package": [
                "build-essential"
            ],
            "policy_rc_d": null,
            "purge": false,
            "state": "present",
            "update_cache": null,
            "update_cache_retries": 5,
            "update_cache_retry_max_delay": 12,
            "upgrade": null
        }
    },
    "item": {
        "key": "build-essential",
        "value": "present"
    },
    "msg": "Failed to update apt cache: unknown reason"
}

@strarsis I did try to run apt update manually on the server and here’s the error I got:

W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://repos.insights.digitalocean.com/apt/do-agent main InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY <KEY_HERE>
W: Failed to fetch https://repos.insights.digitalocean.com/apt/do-agent/dists/main/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY <KEY_HERE>
W: Some index files failed to download. They have been ignored, or old ones used instead.

Looks like a repository key (deprecation) warning, which can be manually fixed, see
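For reference, one generic way to import a missing repository key is something like the following, where the key ID is the NO_PUBKEY value from the apt output above (this assumes the key is published on the Ubuntu keyserver; otherwise fetch the vendor’s .key/.asc file directly, as done for nginx further down):

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY_HERE>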


I’m currently dealing with the same issue.

I tried deleting this task, but then the playbook fails on the next one, ‘Checking essentials’, which seems to rely on it.

And then it fails on

TASK [fail2ban : ensure fail2ban is installed] *********************************
fatal: [default]: FAILED! => {"changed": false, "msg": "Failed to update apt cache: unknown reason"}

so basically still the same issue.

Running apt update inside the machine gives:

~$ sudo apt update
Get:1 http://nginx.org/packages/mainline/ubuntu jammy InRelease [3,602 B]
Hit:2 http://ppa.launchpad.net/ondrej/php/ubuntu jammy InRelease
Hit:3 http://de.archive.ubuntu.com/ubuntu jammy InRelease
Err:1 http://nginx.org/packages/mainline/ubuntu jammy InRelease
  The following signatures were invalid: EXPKEYSIG ABF5BD827BD9BF62 nginx signing key <signing-key@nginx.com>
Hit:4 https://mirror.rackspace.com/mariadb/repo/10.6/ubuntu jammy InRelease
Hit:5 http://de.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:6 http://de.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:7 http://de.archive.ubuntu.com/ubuntu jammy-security InRelease
Fetched 3,602 B in 1s (5,858 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
W: http://nginx.org/packages/mainline/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://nginx.org/packages/mainline/ubuntu jammy InRelease: The following signatures were invalid: EXPKEYSIG ABF5BD827BD9BF62 nginx signing key <signing-key@nginx.com>
W: http://ppa.launchpad.net/ondrej/php/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: https://mirror.rackspace.com/mariadb/repo/10.6/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: Failed to fetch http://nginx.org/packages/mainline/ubuntu/dists/jammy/InRelease  The following signatures were invalid: EXPKEYSIG ABF5BD827BD9BF62 nginx signing key <signing-key@nginx.com>
W: Some index files failed to download. They have been ignored, or old ones used instead.

For me this comment fixed it:

SSH into the remote server and run:

sudo apt-key del 7BD9BF62
sudo apt-key del B49F6B46
sudo apt-key del 8D88A2B3
sudo apt-key adv --fetch-keys https://nginx.org/keys/nginx_signing.key
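For context: 7BD9BF62 is the short ID of the expired nginx signing key from the EXPKEYSIG lines in the apt output above, and the last command re-imports the current key from nginx.org. I’m not sure what the other two IDs correspond to on every setup, so it may be worth running apt-key list first to see which keys you actually have before deleting any.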

Here’s an old post of mine which also resolved this issue; ultimately it’s the same resolution as the one provided by @Twansparant.

Thank you! That did it.

@JordanC26 I think it is essentially the same issue. :pray:


Encountered the issue again today; this command fixed it:
curl -O https://nginx.org/keys/nginx_signing.key && apt-key add ./nginx_signing.key

Not sure whether this is 100% best-practice, but it appears to be secure enough.
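If you want a bit more assurance before trusting the downloaded file, you can inspect it and compare the fingerprint against the one published on nginx.org first (gpg --show-keys needs a reasonably recent GnuPG, which the Ubuntu releases above ship):

gpg --show-keys ./nginx_signing.key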

This worked for me as well. Looks like it has something to do with expired nginx keys?

Yes, the nginx repository keys expired and had to be renewed. It would be nice if this could be renewed automatically, though.
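I’m not sure whether a newer Trellis release picks up the renewed key on existing servers automatically. As a crude stopgap you could drop a small root cron script on the server that re-fetches the key periodically, something like this (a sketch, not an official Trellis mechanism; apt-key is deprecated on jammy but still works, as the warnings above show):

#!/bin/sh
# /etc/cron.weekly/refresh-nginx-key (make it executable with chmod +x)
apt-key adv --fetch-keys https://nginx.org/keys/nginx_signing.key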
