I'm getting an error that I cannot get around when trying to deploy to staging/production (I've done it multiple times before).
My first course of action was to reinstall Python, but that didn't help.
My HDD is not full, and /private/tmp has drwxrwxrwt permissions and root/wheel ownership.
Any ideas?
MODULE FAILURE
Traceback (most recent call last):
File "<stdin>", line 112, in <module>
File "/usr/lib/python2.7/tempfile.py", line 331, in mkdtemp
dir = gettempdir()
File "/usr/lib/python2.7/tempfile.py", line 275, in gettempdir
tempdir = _get_default_tempdir()
File "/usr/lib/python2.7/tempfile.py", line 217, in _get_default_tempdir
("No usable temporary directory found in %s" % dirlist))
IOError: [Errno 2] No usable temporary directory found in ['/tmp',
'/var/tmp', '/usr/tmp', '/home/web']
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IOError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/home/web']
fatal: [x.x.x.x]: FAILED! => {"changed": false, "failed": true, "module_stdout": "", "rc": 1}
Happens during
TASK [Gathering Facts] ********************************************************************************
System info:
Ansible 2.3.2.0; Darwin
Trellis at "Change `remote-user` role to `connection` role: tests host key, user"
Downgrading to Ansible 2.2 gives me another error:
MODULE FAILURE
Traceback (most recent call last):
File "<stdin>", line 133, in <module>
NameError: name 'temp_path' is not defined
I just noticed it deploys just fine to production, but not to staging… what the heck?
Could you share the output from these two commands? (If root is disabled, replace root with the value of admin_user.)
ansible "web:&production" -m command -a "df -h" -u root
ansible "web:&staging" -m command -a "df -h" -u root
Thanks for your prompt reply @fullyint!
Here’s what I get:
$ ansible "web:&production" -m command -a "df -h" -u ubuntu
x.x.x.x | SUCCESS | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
udev            489M     0  489M   0% /dev
tmpfs           100M   12M   88M  12% /run
/dev/xvda1      7.8G  6.8G  560M  93% /
tmpfs           496M     0  496M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
tmpfs           100M     0  100M   0% /run/user/1000
$ ansible "web:&staging" -m command -a "df -h" -u ubuntu
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NameError: name 'temp_path' is not defined
x.x.x.x | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 136, in <module>\nNameError: name 'temp_path' is not defined\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
Thanks for posting your output!
In your first post, you mentioned your HDD is not full, demonstrating that you'd researched the error messages.
However, I think the out-of-disk-space issue isn't on your local machine but on your servers.
The only time I've seen the errors you've posted was when a server's space was down below 1G. In your output, the production /dev/xvda1 mount has only 560M available and will probably soon show the same symptom as staging. The staging server has no space left, so Ansible can't get Python to run on it. I'd expect your staging server may no longer be serving web traffic either, and that production is running very slowly.
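If you want to confirm that directly on the staging box, then assuming Python 2.7 there (as in your traceback), running something like this over SSH should raise the same IOError while the disk is full:
python2.7 -c "import tempfile; print(tempfile.mkdtemp())"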
I’d recommend you look for ways to free space on both staging and production and probably resize their volumes to around double their current sizes.
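A quick way to see where the space is actually going (assuming you have sudo on the server; adjust the depth to taste) is something like:
sudo du -xh / --max-depth=1 2>/dev/null | sort -rh | head -n 15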
Ah, of course, that makes sense! Logged on and checked, and it was full.
Thank you for pointing me in the right direction.
What confuses me a little is that the server only hosts the WordPress install, so it's odd that it would be full at all.
I removed a couple of old releases to get some additional space:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  7.2G  154M  98% /
Figuring that uploads might be the culprit, I checked whether that was the case:
78M current/
1.4G logs/
235M releases/
165M shared/
1.4G of logs is a bit concerning, but still, most of the space is hogged somewhere else. /usr is a whopping 2.6G. Is that normal?
1.4G of logs does seem a little large. You could glance through them (particularly error.log) for problems to fix. A server with ~8G HDD/SSD total may not be too forgiving of any problems.
There are probably logrotate details I don't understand, but I expected the logrotate role to keep only 8 rotated logs of access.log and error.log each, with maxsize 50M each.
For comparison on usr size, here are root directory sizes from a fresh test server with several WP sites installed, but each site is just the default WP theme (unrealistically small uploads and assets):
$ sudo du -hsc *
16M bin
38M boot
0 dev
8.2M etc
18M home
0 initrd.img
103M lib
4.0K lib64
16K lost+found
4.0K media
4.0K mnt
4.0K opt
0 proc
17M root
5.5M run
14M sbin
4.0K snap
474M srv
954M swapfile
0 sys
4.5M tmp
1001M usr
533M var
0 vmlinuz
3.2G total
Regularly rebuilding servers in an "immutable infrastructure" approach can help keep system packages up-to-date and avoid having to work out a lot of apt-get cleanup to conserve disk space.
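If you do end up going the manual cleanup route instead, the usual starting point (plain apt-get, nothing Trellis-specific) is:
sudo apt-get autoremove --purge
sudo apt-get clean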
There are many potential implementations, but one simple example would be to have DNS for your site pointed to a DigitalOcean "floating IP." You can rebuild a whole new server and, when it looks good in your tests (e.g., using the new server IP in your local /etc/hosts), just point the floating IP over to the new server. If there's a problem, point the IP back to the old server. After the new server proves itself, maybe save an image of the old server and then destroy it. Of course, the process can be made much more complex to handle different structures and priorities.
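For example, a temporary entry in your local /etc/hosts while testing might look like this (the IP and domain here are placeholders):
203.0.113.10  example.com www.example.com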
I'm getting this error when running vagrant up --provision. I had just updated Trellis locally and deleted my vbox.
The folders /tmp and /home/vagrant exist, but /usr/tmp does not.
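In case it turns out to be the same disk-space issue discussed above, a quick check inside the VM (assuming the box boots far enough to SSH in) would be:
vagrant ssh -c "df -h /tmp /var/tmp"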