Vagrant up fails on mounting NFS shared folders

If you’ve tried everything, then I’m not sure what else I could add.

Ok, thanks though.

Maybe someone else will add something.

Just in case anyone runs into this issue, this may solve it for you as it solved it for me.

The 127.0.0.1 localhost entry in /etc/hosts had somehow been removed since yesterday, and adding it back has allowed me to vagrant up again.
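
For anyone checking their own machine, a quick way to confirm the entry is there (this is the standard loopback line, but verify against your own /etc/hosts before changing anything):

grep -E '^127\.0\.0\.1[[:space:]]+localhost' /etc/hosts
# should print the loopback entry; if it prints nothing, the line is missing

echo '127.0.0.1 localhost' | sudo tee -a /etc/hosts
# adds it back if it is gone (requires sudo)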


@joshb thanks for reporting this diagnosis and solution. I’m not sure how localhost could be removed from your /etc/hosts, but the problem and solution are consistent with this prior report:


Thanks for the positive response. I’m not sure either, as I was working on projects yesterday without any problems.

Hi, I was facing the same problem and, like you, nothing seemed to work.

I spent the last week trying to figure out why, and finally found the solution that worked for me.

I ended up with:

if !Vagrant.has_plugin? 'vagrant-bindfs'
  fail_with_message "vagrant-bindfs missing, please install the plugin with this command:\nvagrant plugin install vagrant-bindfs"
else
  wordpress_sites.each_pair do |name, site|
    config.vm.synced_folder local_site_path(site), nfs_path(name), type: 'nfs', nfs_version: 4, nfs_udp: false
    config.bindfs.bind_folder nfs_path(name), remote_site_path(name, site), u: 'vagrant', g: 'www-data', o: 'nonempty'
  end
  config.vm.synced_folder ANSIBLE_PATH, '/ansible-nfs', type: 'nfs', nfs_version: 4, nfs_udp: false
  config.bindfs.bind_folder '/ansible-nfs', ANSIBLE_PATH_ON_VM, o: 'nonempty', p: '0644,a+D'
  config.bindfs.bind_folder bin_path, bin_path, perms: '0755'
end

So the original lines were:

L87:

config.vm.synced_folder local_site_path(site), nfs_path(name), type: 'nfs'

replace it with:

config.vm.synced_folder local_site_path(site), nfs_path(name), type: 'nfs', nfs_version: 4, nfs_udp: false

L90:

config.vm.synced_folder ANSIBLE_PATH, '/ansible-nfs', type: 'nfs'

replace it with:

config.vm.synced_folder ANSIBLE_PATH, '/ansible-nfs', type: 'nfs', nfs_version: 4, nfs_udp: false

The key thing here is adding nfs_version: 4, nfs_udp: false to each of those synced_folder lines.
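
If you’re not sure which NFS versions and transports your host’s NFS server actually offers (which is what decides whether version 4, version 3, TCP, or UDP will mount), a quick check on a Linux host looks roughly like this; the output varies by distribution, so treat it as a sketch:

rpcinfo -p localhost | grep nfs
# lists the registered nfs program versions (2/3/4) and protocols (tcp/udp)

cat /proc/fs/nfsd/versions
# shows which versions the running kernel NFS server has enabled, e.g. "-2 +3 +4 +4.1 +4.2"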


I’ve tried your suggestion and something changed (still not working as expected); here is my error now:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=4 192.168.50.1:/home/xxx/Trellis/xxx/site /vagrant-nfs-xxx

Stdout from the command:



Stderr from the command:

mount.nfs: access denied by server while mounting 192.168.50.1:/home/xxx/Trellis/xxx/site

I’m using Antergos (Arch) and can’t get vagrant up running. I’ve tried all of the above methods and nothing seems to work.

Edit: I’ve changed the NFS version to 3, left nfs_udp: false, and now it’s working.


Works for me too.

Thank you! 🙂

I had the same issue since upgrading from Ubuntu 16.04 to 18.04, but changing the NFS version (in the Vagrantfile) didn’t solve it…

Checking the status of my NFS server (sudo systemctl status nfs-server.service), I discovered that a line from an older Trellis site was causing problems:
exportfs: Failed to stat /home/ben/Code/mytestsite.test: No such file or directory

So all I needed to do was clean the culprit lines pointing to that nonexistent directory out of the /etc/exports file:

sudo vi /etc/exports
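
After removing the stale lines, the exports need to be re-read for the change to take effect. A minimal follow-up on a systemd-based host (service name as shown in the status output above):

sudo exportfs -ra                            # re-export the remaining entries
sudo systemctl restart nfs-server.service    # restart the NFS server
sudo systemctl status nfs-server.service     # confirm it comes up cleanly this time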


FYI, I discovered that using docker-machine + docker-machine-nfs (which are also powered by VirtualBox) will cause an NFS exports collision. The solution I found for now was to comment out the exports from docker-machine while using Trellis.
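
In case it helps anyone, this is roughly what that looks like on a macOS host; which lines belong to docker-machine-nfs depends on your setup, so check the file before editing anything:

cat /etc/exports        # identify the export lines added by docker-machine-nfs
sudo vi /etc/exports    # comment those lines out with a leading '#'
sudo nfsd restart       # reload the exports so only the Trellis entries remain active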

I ran into this issue again today. I spun up an older project and was met with the mounting NFS shared folders issue. Prior to spinning up the old project, I was working on a few other projects flawlessly.

Now that the old project has thrown the error, I’m having a really hard time getting my other projects running. It’s not every project every time, but randomly I’m getting this error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=3,udp 192.168.50.1:/Volumes/LaCie/_development/_trellis/trellis.project.build/site /vagrant-nfs-project.com

Stdout from the command:



Stderr from the command:

mount.nfs: requested NFS version or transport protocol is not supported

One thing I noticed is that my /etc/hosts file is using the same IP address (192.168.50.5) for every project I attempt to launch, regardless of the IP address I’ve defined in the Trellis hosts files for local development.

I’ve tried a lot of different fixes today that I’ve found here on this Discourse and on Google, with little luck.

The one thing I’ve come across multiple times is that iTerm needs to have Full Disk Access, but when I visit System Preferences > Security & Privacy > Privacy, I don’t even see that option in the window:

Sorry if this post is not descriptive enough. I’m really losing it today.

It turns out that my Mac had somehow hit a kernel panic and suffered disk permission damage. So far, after the fix, I’ve been able to boot up Vagrant boxes just fine (without any NFS errors), but I still can’t get them to take the IP I’ve given them in their Vagrant host files. Loading my /etc/hosts file after booting up boxes shows they’re all assigned the default IP that ships with Trellis (192.168.50.5), although I’ve assigned others such as 192.168.50.44, 192.168.50.77, and 192.168.50.98.

The other thing I’m seeing is that 192.168.50.98 is actually returning 192.168.50.21 in the /etc/hosts file after the machine boots up, and it is able to run concurrently with one of the other machines. As soon as I boot up another machine, it returns 192.168.50.5 and of course overrides any other machine associated with that same IP.

I’ve tried ‘vagrant destroy’, then ‘vagrant up’.
I’ve tried ‘vagrant destroy’, then ‘vagrant up --provision’
I’ve tried ‘vagrant destroy’, then ‘vagrant reload --provision’

All of the above commands work without throwing errors, but they do not assign the correct IP. This means I can only run one machine at a time and must halt other machines before moving on to another project. That’s fine, but I’d prefer to have some of them running concurrently.

If anyone has any suggestions, it would be much appreciated as usual.

Now I’m working on getting a totally different project up and running. I was able to ‘vagrant up --provision’ successfully but could not get the project to load in the browser; it just keeps spinning.

So I went ahead and destroyed it and ran ‘vagrant up --provision’ again, and now this project is throwing the NFS error again.

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=3,udp 192.168.50.1:/Volumes/LaCie/_development/_trellis/_company/trellis.otherproject.build/site /vagrant-nfs-otherproject.com

Stdout from the command:



Stderr from the command:

mount.nfs: requested NFS version or transport protocol is not supported

And now the NFS error is appearing again on all the machines I was able to boot up just recently.

A little while later, I destroyed all 4 running boxes and noticed a yellow warning earlier in the output that states:

So I found this:

I performed those steps, but step 4 failed with the NFS error again. I added step 5 and the box almost completed successfully before this error:

Composer could not find a composer.json file in /srv/www/redacted.com/current
To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
failed: [default] (item=redacted.com) => {"changed": false, "failed": true, "item": "redacted.com", "stdout": "Composer could not find a composer.json file in /srv/www/redacted.com/current\nTo initialize a project, please create a composer.json file as described in the https://getcomposer.org/ \"Getting Started\" section\n", "stdout_lines": ["Composer could not find a composer.json file in /srv/www/redacted.com/current", "To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ \"Getting Started\" section"]}

I have a composer.json in my /site directory.

I found there is no /site directory here:

vagrant@trellis:/srv/www/redacted.com/current$ ls
web

I’ll be back to try and fix stuff tomorrow. This is strange and annoying.

You need to define it here, too:

If you don’t update your Vagrant VM IPs and are regularly working on multiple Trellis projects that all share the same IP, you’re going to keep running into these problems.

Thanks Ben. I actually realized this morning that I’ve been defining it in the wrong place. Regardless, I’m still struggling with NFS issues today and can’t seem to get them sorted out. Since I’ve destroyed all my boxes, I’m wondering if it would help to completely uninstall VirtualBox, Vagrant, Ansible, and everything else and start over, or if that would just be a waste of time.

If you destroy the boxes, delete the .vagrant directories, clear out any relevant entries in /etc/exports, and update the IPs to be unique in vagrant.default.yml, then everything should be fine when you provision again.
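
For reference, a quick way to confirm each project really has its own address before booting them together; this assumes the IP lives under the vagrant_ip key in vagrant.default.yml (or a vagrant.local.yml override), which is where recent Trellis versions keep it, so adjust to your setup:

grep vagrant_ip vagrant.default.yml
# run from each project's trellis/ directory; every project should print a different address,
# e.g. vagrant_ip: '192.168.50.44'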


Thanks I will give it a try!

I’ve destroyed all boxes and deleted .vagrant on one project and cleared out my exports. Running vagrant up --provision still gives me the following error:

The nfsd service does not appear to be running.
/bin/launchctl exited with status 3
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp 192.168.50.1:/Volumes/LaCie/_development/_trellis/_project/trellis.project.build/site /vagrant-nfs-project.com

Stdout from the command:



Stderr from the command:

mount.nfs: requested NFS version or transport protocol is not supported
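
Since that output says nfsd isn’t running on the Mac host, it can help to check and restart it directly before fighting Vagrant again. A rough sketch using the stock macOS nfsd tool (the exports it validates are whatever Vagrant has already written to /etc/exports):

sudo nfsd status          # report whether the NFS service is enabled and running
sudo nfsd checkexports    # validate the entries Vagrant wrote into /etc/exports
sudo nfsd enable          # enable and start the service if it is disabled
sudo nfsd restart         # or restart it if it is enabled but stuck
showmount -e localhost    # list what the host is actually exporting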

Following these steps:

And I don’t have the Full Disk Access option:



I’m running High Sierra 10.13.6

Now is probably a good time for me to update to Mojave.

After updating to Mojave, my Full Disk Access option has returned. I was able to successfully boot up 3 different boxes with different IPs. All of them were loading in my browser at the same time.

I went to boot up a 4th box, but it hung at some point, probably due to low memory, so I canceled the process.

I returned to the other 3 boxes that were running to find 500 errors being displayed. Next I SSH’d into the machines and attempted to access the /current directory but was given the following error:

cannot access 'current': Stale file handle

I returned to one of the boxes, halted it and ran vagrant up again and was presented with the same NFS error again.

Now I’ve destroyed all running boxes again, installed the latest version of Vagrant and performed a vagrant up on one of the boxes. The box fails to complete and here’s the output:

https://pastebin.com/1StXzyHn

*** UPDATE ***

Before I waste anyone’s time trying to help, I’ll add that I decided to restore macOS and get everything up and running again, and so far everything is back to where it was. So I’m not sure what caused the issue, but it must have been deep-rooted. I’m about to bring up the box that originally caused all my pain and see whether it does it again. Thanks everyone.

Today I ran into this problem as well. While running vagrant up, it failed at the ==> default: Mounting NFS shared folders... task with the following output:

...

==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=3,udp 192.168.50.1:/home/user/Sites/clientsite.com/site /vagrant-nfs-clientsite.com

Stdout from the command:



Stderr from the command:

mount.nfs: requested NFS version or transport protocol is not supported

While trying to vagrant up another project’s VM I got the same error message, so logic says it must be related to my host machine. When I ran systemctl status nfs-kernel-server I got the following:

● nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2019-11-15 14:56:22 EET; 2min 1s ago
  Process: 26879 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 26878 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 26877 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=1/FAILURE)

nov   15 14:56:22 pc systemd[1]: Starting NFS server and services...
nov   15 14:56:22 pc exportfs[26877]: exportfs: Failed to stat /home/user/Sites/sitename.com/tre
nov   15 14:56:22 pc exportfs[26877]: exportfs: Failed to stat /home/user/Sites/sitename.com/sit
nov   15 14:56:22 pc systemd[1]: nfs-server.service: Control process exited, code=exited status=1
nov   15 14:56:22 pc systemd[1]: nfs-server.service: Failed with result 'exit-code'.
nov   15 14:56:22 pc systemd[1]: Stopped NFS server and services.

So now it turns out that nfs-server.service randomly has errors. Randomly, since I don’t recall making any changes to the host machine except for rebooting.
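
Those Failed to stat lines look like the same stale /etc/exports entries described earlier in this thread. One way to see exactly which exported paths no longer exist on disk, before cleaning the file and restarting nfs-server as above, is a small loop like this (a sketch; it assumes the standard exports format with the path as the first field):

awk '!/^#/ && NF { print $1 }' /etc/exports | while read -r path; do
  [ -e "$path" ] || echo "stale export: $path"   # print every exported path that no longer exists
done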