MariaDB fails at task “Set root user password” when provisioning the staging environment for the first time

Hi fellow Roots’ers!

I’m experiencing a problem when following various guides, including Roots’ own, on setting up the full stack with Trellis, Bedrock and Sage. The part that’s giving me issues is Trellis, specifically provisioning my staging server for the first time. My local site is working perfectly.

Everything seems to run fine when I run ansible-playbook server.yml -e env=staging, but when it gets to TASK [mariadb : Set root user password] it fails with the following output:

System info:
  Ansible 2.2.0.0; Darwin
  Trellis at "Add as`SKIP_GALAXY env var to skip galaxy install in Vagrant"
---------------------------------------------------
unable to connect to database, check login_user and login_password are
correct or /root/.my.cnf has the credentials. Exception message: (1045,
"Access denied for user 'root'@'localhost' (using password: NO)")
failed: [139.59.159.xx] (item=139.59.159.xx) => {"failed": true, "item": "139.59.159.xx"}
---------------------------------------------------
unable to connect to database, check login_user and login_password are
correct or /root/.my.cnf has the credentials. Exception message: (1045,
"Access denied for user 'root'@'localhost' (using password: NO)")
failed: [139.59.159.xx] (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1"}
---------------------------------------------------
unable to connect to database, check login_user and login_password are
correct or /root/.my.cnf has the credentials. Exception message: (1045,
"Access denied for user 'root'@'localhost' (using password: NO)")
failed: [139.59.159.xx] (item=::1) => {"failed": true, "item": "::1"}
---------------------------------------------------
unable to connect to database, check login_user and login_password are
correct or /root/.my.cnf has the credentials. Exception message: (1045,
"Access denied for user 'root'@'localhost' (using password: NO)")
failed: [139.59.159.xx] (item=localhost) => {"failed": true, "item": "localhost"}
	to retry, use: --limit @/Users/username/Dev/exampple.com/ansible/server.retry

PLAY RECAP *********************************************************************
139.59.159.xx          : ok=37   changed=1    unreachable=0    failed=1
localhost                  : ok=0    changed=0    unreachable=0    failed=0

A little more info: I’ve set up an Ubuntu 16.04 droplet on DigitalOcean for my staging server, installed all the dependencies, and I can connect to the database fine via Sequel Pro using the root user and my SSH keys.

Sorry if this has been solved before, but I haven’t been able to solve it on my own after many, many hours of trying, so I’m hoping that someone can point me in the right direction. I’m sure I’m just missing something simple.

Thank you in advance!

Not sure we’ve ever seen this specific issue or had it reported.

This is on a plain 16.04 droplet? Nothing was manually installed?

My first suggestion is to just reset the droplet and try again, if only to see whether it happens again.

Yes, I just followed the guide here on Roots to set up a standard DigitalOcean 16.04 droplet. The only thing I installed manually was MariaDB.

Okay, will give that a go and report back 🙂

Thanks!

You shouldn’t install anything manually on the droplet. Trellis takes care of installing all the software itself, so that definitely could have caused problems.
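
If you’d rather not destroy a droplet in future, removing the manually installed MariaDB and its data before re-provisioning should get you back to a clean state. A rough sketch, assuming the stock Ubuntu 16.04 MariaDB packages (adjust to whatever was actually installed):

    # On the droplet: purge the manually installed MariaDB and its data/config
    sudo apt-get purge -y mariadb-server mariadb-client
    sudo apt-get autoremove -y
    sudo rm -rf /var/lib/mysql /etc/mysql /root/.my.cnf

    # Back on your local machine: re-run provisioning and let Trellis install MariaDB itself
    ansible-playbook server.yml -e env=staging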

Ah, okay! Thanks for clearing that up, I’m quite new to this 🙂

I will create a clean 16.04 droplet with my SSH keys and give it another whirl.

Thanks again!

You can always reset your current droplet. There’s an option somewhere in the admin.

This issue was posted a year ago, but I’m now having exactly the same thing. I have already provisioned the server successfully many times, and I’m also able to log in to the database using Sequel Pro.

I did not install anything myself, nor did I install MariaDB; I just let Trellis do its thing all along.

The only thing that may have caused this issue is that I accidentally removed {{ vault_mysql_root_password }} from group_vars/production/main.yml:

mysql_root_password: "{{ vault_mysql_root_password }}"
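
If it helps anyone reading along, my understanding is that this line in group_vars/production/main.yml should just point at the vaulted value, roughly like this (the password here is obviously a placeholder):

    # group_vars/production/main.yml
    mysql_root_password: "{{ vault_mysql_root_password }}"

    # group_vars/production/vault.yml (kept encrypted with ansible-vault)
    vault_mysql_root_password: "some_long_random_password"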

I’m not really able to just remove the droplet as the site is already live.

Any ideas to resolve this problem?

Hey @bramvdpluijm1, did you get this fixed? Having the same issue…

I did get it fixed, but that was already two years ago, so I’m not sure how I did it. Sorry!

OK, just in case anyone else has this issue: I redeployed, and after that the error was gone.

No idea what the problem was, though…
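
For anyone who finds this later, by “redeployed” I just mean re-running the standard commands, roughly along these lines (the environment and site name are placeholders for my own):

    # Re-run provisioning for the environment
    ansible-playbook server.yml -e env=production

    # Then redeploy the site
    ./bin/deploy.sh production example.com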