Going loco over local dev not working – cannot get the server to respond

I'm not sure what happened or what triggered it, but all of a sudden my Trellis projects are "broken" and I can neither wrap my head around it nor find a solution…

The issue
At first I had one project running with Trellis and things were fine: I could spin up the dev server and deploys worked as expected. Then I went ahead and created another Trellis-based project and my troubles began (it may not be directly related, but that is when the error started popping up).

Now any project I run (vagrant up) ends up completely unresponsive and behaves erratically… I can load the dev page once, but it almost instantly stops loading. The backend (/wp/wp-admin) isn't reachable at all.

Things I tried

  • Updating all dependencies (Ansible, VirtualBox, Vagrant, bower, …)
  • Pinging the server (192.168.50.5), all good
  • Installing (and uninstalling) the vagrant-vbguest plugin, since some users reported that this fixed the issue – unfortunately not for me, though
  • Making sure nginx is running (see this reply) – it is running and the error.log is empty…
  • Checking /etc/exports for duplicate or conflicting entries (see docs) and deleting all of them to rule them out (the commands I used for these checks are sketched after this list)
  • Destroying and re-upping the VM multiple times
  • Uninstalling and reinstalling VirtualBox, then restarting my machine
  • Installing earlier versions of VirtualBox, hoping to sidestep a recently introduced bug
  • Doing a clean install of the example project – same issue
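
For reference, these are roughly the commands behind the checks above, assuming stock Trellis defaults (IP 192.168.50.5, nginx logging to /var/log/nginx/error.log) – adjust if your setup differs:

# on the host: is the box reachable at all?
ping 192.168.50.5

# inside the box (after vagrant ssh): is nginx up, and is its error log really empty?
sudo systemctl status nginx
sudo tail -n 50 /var/log/nginx/error.log

# back on the macOS host: any duplicate or conflicting NFS exports left behind?
cat /etc/exports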

Possible causes (??)

  • The VirtualBox log shows that the guest machine is ‘unresponsive’ (see log below), which would explain why I can load the page to some extent before it hangs almost immediately afterwards (~4 seconds)
  • I deleted my OS cache (~/Library/Caches) to free up some disk space – maybe that removed something vital? Then again, it is merely a cache folder, so its contents should simply be recreated, right?

Logs

vagrant up

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'bento/ubuntu-16.04' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
[default] GuestAdditions 5.1.22 running --- OK.
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Configuring and enabling network interfaces...
==> default: Exporting NFS shared folders...
==> default: Preparing to edit /etc/exports. Administrator privileges will be required...
Password:
==> default: Mounting NFS shared folders...
==> default: Mounting shared folders...
default: /vagrant => /xx/xx/xx/xx/htdocs/roots-example/trellis
==> default: Checking for bindfs in VM...
default: Package manager detected: apt-get
default: Fuse kernel module is installed
default: Fuse kernel module is loaded
default: Bindfs 1.12.6 is installed
==> default: Machine is ready to use bindfs!
==> default: Creating bind mounts after synced_folders...
default: /vagrant-nfs-example.com => /srv/www/example.com/current
default: /ansible-nfs => /home/vagrant/trellis
default: /home/vagrant/trellis/bin => /home/vagrant/trellis/bin
==> default: Updating /etc/hosts file on active guest machines...
==> default: Updating /etc/hosts file on host machine (password may be required)...
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
==> default: Machine 'default' has a post `vagrant up` message. This is a message
==> default: from the creator of the Vagrantfile, and not from Vagrant itself:
==> default:
==> default: Your Trellis Vagrant box is ready to use!
==> default: * Composer and WP-CLI commands need to be run on the virtual machine.
==> default: * You can SSH into the machine with `vagrant ssh`.
==> default: * Then navigate to your WordPress sites at `/srv/www`
==> default:   or to your Trellis files at `/home/vagrant/trellis`.

VirtualBox

00:00:00.579116 ******************** End of CPUID dump **********************
00:00:00.586687 VMEmt: Halt method global1 (5)
00:00:00.586732 VMEmt: HaltedGlobal1 config: cNsSpinBlockThresholdCfg=2000
00:00:00.586739 Changing the VM state from 'CREATING' to 'CREATED'
00:00:00.590085 SharedFolders host service: Adding host mapping
00:00:00.590098     Host path '/xx/xx/xx/xx/htdocs/roots-example/trellis', map name 'vagrant', writable, automount=false, create_symlinks=true, missing=false
00:00:00.590211 Changing the VM state from 'CREATED' to 'POWERING_ON'
00:00:00.590834 AIOMgr: Endpoints without assigned bandwidth groups:
00:00:00.590841 AIOMgr:     /Users/xx/VirtualBox VMs/example.dev/ubuntu-16.04-amd64-disk1.vmdk
00:00:00.591119 Changing the VM state from 'POWERING_ON' to 'RUNNING'
00:00:00.591133 Console: Machine state changed to 'Running'
00:00:00.601606 VMMDev: Guest Log: BIOS: VirtualBox 5.1.22
00:00:00.605609 PIT: mode=2 count=0x10000 (65536) - 18.20 Hz (ch=0)
00:00:00.617631 Display::handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0
00:00:00.658591 AHCI#0: Reset the HBA
00:00:00.671322 AHCI#0: Port 0 reset
00:00:00.676842 VMMDev: Guest Log: BIOS: AHCI 0-P#0: PCHS=16383/16/63 LCHS=1024/255/63 0x0000000005000000 sectors
00:00:00.692627 PIT: mode=2 count=0x48d3 (18643) - 64.00 Hz (ch=0)
00:00:00.697084 Display::handleDisplayResize: uScreenId=0 pvVRAM=000000010e4aa000 w=640 h=480 bpp=32 cbLine=0xA00 flags=0x0
00:00:01.594798 NAT: Old socket recv size: 334KB
00:00:01.594825 NAT: Old socket send size: 79KB
00:00:03.168116 Display::handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0
00:00:03.182697 PIT: mode=2 count=0x10000 (65536) - 18.20 Hz (ch=0)
00:00:03.183452 VMMDev: Guest Log: BIOS: Boot : bseqnr=1, bootseq=0032
00:00:03.185898 VMMDev: Guest Log: BIOS: Booting from Hard Disk...
00:00:03.749693 Display::handleDisplayResize: uScreenId=0 pvVRAM=000000010e4aa000 w=640 h=480 bpp=32 cbLine=0xA00 flags=0x0
00:00:09.186790 Display::handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0
00:00:09.491683 GIM: KVM: VCPU  0: Enabled system-time struct. at 0x000000003ffe7000 - u32TscScale=0xcb1ee9d0 i8TscShift=-1 uVersion=2 fFlags=0x1 uTsc=0x5391e80ad uVirtNanoTS=0x2127c38b3
00:00:09.491721 TM: Switching TSC mode from 'Dynamic' to 'RealTscOffset'
00:00:09.571290 GIM: KVM: Enabled wall-clock struct. at 0x00000000020fc668 - u32Sec=1497442729 u32Nano=162487698 uVersion=2
00:00:09.636625 PIT: mode=2 count=0x12a5 (4773) - 249.98 Hz (ch=0)
00:00:10.279247 PIT: mode=0 count=0x10000 (65536) - 18.20 Hz (ch=0)
00:00:11.545702 PIIX3 ATA: Ctl#0: RESET, DevSel=0 AIOIf=0 CmdIf0=0x00 (-1 usec ago) CmdIf1=0x00 (-1 usec ago)
00:00:11.545793 PIIX3 ATA: Ctl#0: finished processing RESET
00:00:11.549036 PIIX3 ATA: Ctl#1: RESET, DevSel=0 AIOIf=0 CmdIf0=0x00 (-1 usec ago) CmdIf1=0x00 (-1 usec ago)
00:00:11.549143 PIIX3 ATA: Ctl#1: finished processing RESET
00:00:15.435857 AHCI#0: Reset the HBA
00:00:15.468228 AHCI#0: Port 0 reset
00:00:17.395115 NAT: Link up
00:00:21.280272 AIOMgr: Host limits number of active IO requests to 16. Expect a performance impact.
00:00:27.468047 VMMDev: Guest Additions information report: Version 5.1.22 r115126 '5.1.22'
00:00:27.468098 VMMDev: Guest Additions information report: Interface = 0x00010004 osType = 0x00053100 (Linux >= 2.6, 64-bit)
00:00:27.468230 VMMDev: Guest Additions capability report: (0x0 -> 0x0) seamless: no, hostWindowMapping: no, graphics: no
00:00:27.468316 VMMDev: vmmDevReqHandler_HeartbeatConfigure: No change (fHeartbeatActive=false).
00:00:27.468344 VMMDev: Heartbeat flatline timer set to trigger after 4 000 000 000 ns
00:00:27.468414 VMMDev: Guest Log: vgdrvHeartbeatInit: Setting up heartbeat to trigger every 2000 milliseconds
00:00:27.509272 VMMDev: Guest Log: vboxguest: misc device minor 55, IRQ 20, I/O port d020, MMIO at 00000000f0400000 (size 0x400000)
00:00:28.623211 VBVA: InfoScreen: [0] @0,0 800x600, line 0xc80, BPP 32, flags 0x1
00:00:28.623252 Display::handleDisplayResize: uScreenId=0 pvVRAM=000000010e4aa000 w=800 h=600 bpp=32 cbLine=0xC80 flags=0x1
00:00:28.625414 Display::handleDisplayResize: uScreenId=0 pvVRAM=000000010e4aa000 w=800 h=600 bpp=32 cbLine=0xC80 flags=0x0
00:00:36.375900 NAT: IPv6 not supported
00:00:36.563400 NAT: DHCP offered IP address 10.0.2.15
00:00:48.248184 VMMDev: Guest Log: VBoxService 5.1.22 r115126 (verbosity: 0) linux.amd64 (Apr 28 2017 17:25:51) release log
00:00:48.248208 VMMDev: Guest Log: 00:00:00.000741 main     Log opened 2017-06-14T12:19:27.885171000Z
00:00:48.248357 VMMDev: Guest Log: 00:00:00.000952 main     OS Product: Linux
00:00:48.248394 VMMDev: Guest Log: 00:00:00.000992 main     OS Release: 4.4.0-31-generic
00:00:48.248430 VMMDev: Guest Log: 00:00:00.001028 main     OS Version: #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016
00:00:48.248467 VMMDev: Guest Log: 00:00:00.001064 main     Executable: /opt/VBoxGuestAdditions-5.1.22/sbin/VBoxService
00:00:48.248475 VMMDev: Guest Log: 00:00:00.001065 main     Process ID: 2095
00:00:48.248480 VMMDev: Guest Log: 00:00:00.001066 main     Package type: LINUX_64BITS_GENERIC
00:00:48.255871 VMMDev: Guest Log: 00:00:00.008431 main     5.1.22 r115126 started. Verbose level = 0
00:00:54.274094 VMMDev: Guest Additions capability report: (0x0 -> 0x0) seamless: no, hostWindowMapping: no, graphics: no
00:01:07.139213 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:07.766040 VMMDev: GuestHeartBeat: Guest is alive (gone 4 631 677 588 ns)
00:01:11.766195 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:13.035743 VMMDev: GuestHeartBeat: Guest is alive (gone 5 269 682 861 ns)
00:01:17.039519 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:18.812816 VMMDev: GuestHeartBeat: Guest is alive (gone 5 777 078 486 ns)
00:01:22.815334 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:23.384297 VMMDev: GuestHeartBeat: Guest is alive (gone 4 571 496 552 ns)
00:01:27.384863 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:28.128128 VMMDev: GuestHeartBeat: Guest is alive (gone 4 743 826 966 ns)
00:01:32.129809 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:33.425124 VMMDev: GuestHeartBeat: Guest is alive (gone 5 296 996 549 ns)
00:01:37.426334 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:39.640412 VMMDev: GuestHeartBeat: Guest is alive (gone 6 215 287 001 ns)
00:01:43.640511 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
00:01:43.653085 VMMDev: GuestHeartBeat: Guest is alive (gone 4 012 679 103 ns)
00:01:53.579580 VMMDev: vmmDevHeartbeatFlatlinedTimer: Guest seems to be unresponsive. Last heartbeat received 4 seconds ago
[... and so on]

I would be extremely thankful if anyone has an idea of how to tackle this – I cannot work on my projects at the moment and the deadlines are closing in, so I am virtually stuck…

Thanks for your time.


This may be totally obvious, but have you halted all your boxes and then tried bringing a single box back up (vagrant up)?

Yep, I did – only running one box at a time!
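
For completeness, this is roughly how I check that (standard Vagrant commands, nothing Trellis-specific; the id comes from the global-status output):

vagrant global-status           # list every Vagrant machine this host knows about
vagrant halt <id>               # halt any box that is still running
vagrant global-status --prune   # drop stale entries for machines that no longer exist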

Not that this really helps, but it has to be a host OS, Vagrant, or VirtualBox issue – Trellis itself can't really cause this. Not to pass the blame; it just might help narrow things down for debugging purposes.

If you don't care about the data on the VMs (which it seems like you don't, since you destroyed them?), then there are two additional places you should clear data from (shell commands sketched after the list):

  • ~/.vagrant.d/boxes
  • .vagrant in the root of a Trellis project
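
A rough sketch of clearing both from the shell – the project path below is only a placeholder for wherever your Trellis directory actually lives:

# cached base boxes; they get re-downloaded on the next vagrant up
rm -rf ~/.vagrant.d/boxes

# per-project Vagrant state inside the Trellis directory
rm -rf /path/to/your-project/trellis/.vagrant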

Thanks for your input – I cleared those, but that didn't help either… In the end I had to restore a backup from before the trouble began; so much time down the drain :dizzy_face:

Some last questions:

  • It shouldn't be a problem at all to have multiple VMs set up alongside each other, right? Aside from the VMs taking up quite some space on my hard drive?
  • Is my late-2009 MacBook Pro maybe not up to the task? Any experience regarding hardware requirements?

Thanks again.


For the record, these work fine together:

  • VirtualBox 5.1.14
  • VB Guest Additions 5.0.26
  • Vagrant 1.9.1
  • Ansible 2.2.0.0

No, it shouldn't be a problem, as long as you manually edit the IP in the Vagrantfile so they don't conflict.
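
For illustration, the line in question looks roughly like this – exactly where your Trellis version sets the IP may differ, so treat it as a sketch rather than the canonical location:

# inside the Vagrant.configure('2') do |config| ... end block of each project's Vagrantfile
config.vm.network :private_network, ip: '192.168.50.5'  # first project
config.vm.network :private_network, ip: '192.168.50.6'  # second project, a different address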

A 2009 machine is a little old, I guess, but it comes down mostly to RAM. And even then it doesn't matter if they aren't running at the same time, so if you can run one, you can have multiple inactive ones.
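
If RAM does become the bottleneck, the generic Vagrant way to cap a VirtualBox VM's memory looks like the snippet below – Trellis may expose its own setting for this, so take it as an assumption rather than the Trellis-specific answer:

# inside the Vagrant.configure('2') do |config| ... end block
config.vm.provider 'virtualbox' do |vb|
  vb.memory = 1024  # MB; lower or raise this to fit the host
  vb.cpus = 1
end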

Back to square one – my VM is unusable yet again with similar / identical symptoms… :sob:

Stuff I did in the meantime

  • Decrypted the internal hard drive the project was residing on, since there was some indication that this might cause issues (although it worked fine for a while on the encrypted drive…)
  • Opened a thread over at the VirtualBox forums – no reply as of yet

NFS

It all seems to boil down to NFS not working, at least that's what I gathered from the syslog (although I can ping 192.168.50.1 just fine – more info and debug output is in the VirtualBox forum post linked above):

Jun 27 08:35:13 wp-api-test systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Jun 27 08:35:13 wp-api-test rpc.statd[2640]: Version 1.2.8 starting
Jun 27 08:35:13 wp-api-test rpc.statd[2640]: Flags: TI-RPC
Jun 27 08:35:13 wp-api-test systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Jun 27 08:35:13 wp-api-test kernel: [   65.587553] FS-Cache: Loaded
Jun 27 08:35:13 wp-api-test kernel: [   65.695998] FS-Cache: Netfs 'nfs' registered for caching
Jun 27 08:35:48 wp-api-test mailhog[1903]: [APIv1] KEEPALIVE /api/v1/events
Jun 27 08:36:34 wp-api-test kernel: [  146.752717] nfs: server 192.168.50.1 not responding, still trying

What edits are necessary so that the Vagrantfile defaults to rsync instead of NFS? I'd like to give that a go to see whether it really is NFS-related…
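
For reference, a plain Vagrant synced folder using rsync instead of NFS would look roughly like this (the host path is only illustrative, and where Trellis declares its synced folders varies by version):

# replaces type: 'nfs' on the corresponding synced_folder line
config.vm.synced_folder '../site', '/srv/www/example.com/current',
  type: 'rsync', rsync__exclude: ['.git/']

Note that with rsync the folder only syncs on vagrant up / vagrant reload, or while vagrant rsync-auto is running in a separate terminal.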

UPDATE

I followed this advice and replaced ‘nfs’ with ‘sshfs’ in the Vagrantfile, and it seems to be working…! I will report back after a few days – fingers crossed.
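
In case it helps someone else, the change amounts to something like this (it assumes the vagrant-sshfs plugin; the host path is illustrative):

# once on the host
vagrant plugin install vagrant-sshfs

# then, in the Vagrantfile, swap the synced folder type from 'nfs' to 'sshfs':
config.vm.synced_folder '../site', '/srv/www/example.com/current', type: 'sshfs'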
