Hot reloading taking 5-10 seconds each time

I have a Lima VM running Trellis/Bedrock/Sage on a Mac. Each save takes around 5-10 seconds for the site to reload - sometimes it reloads and loses all CSS before finally refreshing. I don’t touch anything other than save; yarn dev does the rest.

This is much slower than on VirtualBox, oddly, so I’m wondering if I’ve got something wrong, or where I should look to speed this up.

(starting a new thread as requested by @ben)


Anybody else have this issue? I’ve tried to track it down but I’m struggling, so any pointers would be hugely appreciated.

Are you able to reproduce this issue on a fresh install of Sage with no modifications made?

Are you running yarn dev from the VM or from your host machine?

What version of Node are you using?
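For reference, a quick way to gather all of that in one go (the theme directory name below is just a placeholder):

composer create-project roots/sage sage-test   # fresh, unmodified Sage install to test against
node -v                                        # Node version
yarn --version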

Hi Ben

This was a fresh install, so the latest versions, running yarn from the host, not the VM. I’ll try a fresh install this weekend and report back.

OK, so a fresh install, everything ‘new’: Trellis/Bedrock/Sage, running on Lima on macOS.

localhost:3000 seems to be the issue; it generally responds very slowly on Lima (it was previously not too bad on VirtualBox), so perhaps it’s not an issue specific to Trellis, etc.

Console tells me:

http://localhost:3000/runtime.js.map is not found, with a 404 error

Which I’m not sure is particularly relevant.

When I make a simple change in the base install of Sage, it waits around 5 seconds for localhost to respond, then a further 5 to download the stylesheet. This happens every time.

So possibly an issue with my httpd.conf file, from what I can find on a search?

I’m a little confused because the dev server doesn’t serve a stylesheet — changes are injected via JS. What stylesheet is being downloaded?

Is the dev server (http://localhost:3000) slow when you try to navigate around the site? Or is it only slowing down after you’ve made a change to the assets?

It’s slow all the time, even just loading localhost:3000. Here’s a screenshot of the console:

This is just a blank install of Sage/Bedrock/Trellis on Lima.

Does the site that is being proxied by the dev server (http://example.test) load quickly, or is that also slow?
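A rough way to compare, if it helps (just a sketch; adjust the URLs to your setup):

curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' http://localhost:3000/
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' http://example.test/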

example.test is much faster:

Sorry to chip in late here, but is DNS resolution fast from the point-of-view of bud dev?

I’ve seen this sort of latency (admittedly in other scenarios), when the DNS server list contains an address that is unreachable; perhaps on another unrouteable subnet, or a local DNS proxy that’s misconfigured.

If you dig example.test from within the VM / Lima, how does that perform?

edit: Was just mowing the lawn and realised (obviously :roll_eyes:) that it should be hitting a hosts-file record, not reaching a nameserver. But perhaps that’s misconfigured.
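Something like this from the host should do it (assuming the Lima instance is named example.com, and that dig is available in the guest):

limactl shell example.com dig example.test
limactl shell example.com cat /etc/resolv.conf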

Can you hover over the localhost item in that network console and take a screenshot of the timing breakdown (or click the Timing tab)? It will provide clues about DNS resolution.

And run ping localhost to verify response times. If DNS and ping all respond quickly (as they should), then it points to an issue within yarn dev (in Bud, I assume) and likely not networking.
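Something like this should do it (dscacheutil goes through the macOS resolver, rather than reading /etc/resolv.conf directly), and both should come back more or less instantly:

ping -c 5 localhost
dscacheutil -q host -a name localhost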

I was thinking more along the lines of the resolution of Bud’s request to the proxy URL, not host machine localhost resolution.

Thanks for everyone’s help on this. Here’s the screenshot:

Pinging localhost seems fine and responsive:

Running dig example.test from within Lima gives me this:

Thanks @RobDobsonNC!

That looks like a DNS resolver issue. The fact that the query timed out twice before resolving fairly quickly (from the same nameserver) would cause the exact issue you’re experiencing: a long TTFB on the HMR server.

As for why… I am stumped! I was expecting it to be trying a number of IPs, before finally resolving…

Here’s some info on how Lima’s DNS resolution system works.

@swalkinshaw - Can you shed any light on how Lima’s resolver works? Will Trellis set useHostResolver to true or false? It seems like there are some caveats, and a compiler flag (CGO_ENABLED=1) that needs to be set in some cases.
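For reference, the generated instance config should show what was actually set (the key name has changed between Lima versions, useHostResolver vs hostResolver, hence the case-insensitive grep):

grep -iA2 hostresolver ~/.lima/example.com/lima.yaml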


:thinking: I wonder if you have another DNS server running on your Mac, or conflicting entries for .test domains. Laravel Valet also handles .test, for example. Do you know if you’re running that?

Seeing 127.0.0.53 as your DNS server was interesting. Mine is 192.0.2.42 for example.

Can you run cat /etc/resolv.conf as well?

I don’t really know much more beyond that. Except that Go finally fixed DNS resolution as of 1.20 so CGO_ENABLED isn’t needed anymore.

edit: I don’t think those time outs on port 53 are Lima’s hostagent though since that binds to a free port (in a much higher range).


Thanks both of you, again, I really appreciate the help.

So I did have Valet installed previously when looking at using Trellis, though I removed it a while ago. I have Local installed, using .local domains, but that’s not currently running at all.

Running cat /etc/resolv.conf gives the following:

# macOS Notice
#
# This file is not consulted for DNS hostname resolution, address
# resolution, or the DNS query routing mechanism used by most
# processes on this system.
#
# To view the DNS configuration used by this system, use:
#   scutil --dns
#
# SEE ALSO
#   dns-sd(1), scutil(8)
#
# This file is automatically generated.
#
search Home
nameserver 192.168.0.1
nameserver fdd8:68b5:5d04:0:26a7:dcff:fe78:3a00

Running scutil --dns gives this:



DNS configuration

resolver #1
  search domain[0] : Home
  nameserver[0] : 192.168.0.1
  nameserver[1] : fdd8:68b5:5d04:0:26a7:dcff:fe78:3a00
  if_index : 6 (en0)
  flags    : Request A records, Request AAAA records
  reach    : 0x00020002 (Reachable,Directly Reachable Address)

resolver #2
  domain   : local
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 300000

resolver #3
  domain   : 254.169.in-addr.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 300200

resolver #4
  domain   : 8.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 300400

resolver #5
  domain   : 9.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 300600

resolver #6
  domain   : a.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 300800

resolver #7
  domain   : b.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records, Request AAAA records
  reach    : 0x00000000 (Not Reachable)
  order    : 301000

resolver #8
  domain   : test
  nameserver[0] : 127.0.0.1
  flags    : Request A records, Request AAAA records
  reach    : 0x00030002 (Reachable,Local Address,Directly Reachable Address)

DNS configuration (for scoped queries)

resolver #1
  search domain[0] : Home
  nameserver[0] : 192.168.0.1
  nameserver[1] : fdd8:68b5:5d04:0:26a7:dcff:fe78:3a00
  if_index : 6 (en0)
  flags    : Scoped, Request A records, Request AAAA records
  reach    : 0x00020002 (Reachable,Directly Reachable Address)

And when I run trellis vm start, it also takes three tries to get the initial ssh check completed:

Running command => limactl start example.com
INFO[0000] Using the existing instance "example.com"    
INFO[0001] [hostagent] Starting VZ (hint: to watch the boot progress, see "/Users/robdobson/.lima/example.com/serial.log") 
INFO[0001] SSH Local Port: 51121                        
INFO[0001] [hostagent] [VZ] - vm state change: running  
INFO[0001] [hostagent] Waiting for the essential requirement 1 of 3: "ssh" 
INFO[0001] [hostagent] new connection from  to          
INFO[0004] [hostagent] 2023/04/08 23:36:20 tcpproxy: for incoming conn 127.0.0.1:51125, error dialing "192.168.5.15:22": connect tcp 192.168.5.15:22: no route to host 
INFO[0014] [hostagent] Waiting for the essential requirement 1 of 3: "ssh" 
INFO[0016] [hostagent] 2023/04/08 23:36:32 tcpproxy: for incoming conn 127.0.0.1:51128, error dialing "192.168.5.15:22": connect tcp 192.168.5.15:22: connection was refused 
INFO[0026] [hostagent] Waiting for the essential requirement 1 of 3: "ssh" 
INFO[0029] [hostagent] The essential requirement 1 of 3 is satisfied 
INFO[0029] [hostagent] Waiting for the essential requirement 2 of 3: "user session is ready for ssh" 
INFO[0042] [hostagent] Waiting for the essential requirement 2 of 3: "user session is ready for ssh" 
INFO[0042] [hostagent] The essential requirement 2 of 3 is satisfied 
INFO[0042] [hostagent] Waiting for the essential requirement 3 of 3: "the guest agent to be running" 
INFO[0042] [hostagent] The essential requirement 3 of 3 is satisfied 
INFO[0042] [hostagent] Waiting for the final requirement 1 of 1: "boot scripts must have finished" 
INFO[0042] [hostagent] Forwarding "/run/lima-guestagent.sock" (guest) to "/Users/robdobson/.lima/example.com/ga.sock" (host) 
INFO[0043] [hostagent] The final requirement 1 of 1 is satisfied 
INFO[0043] READY. Run `limactl shell example.com` to open the shell. 

Updating /etc/hosts file (sudo may be required, see `trellis vm sudoers` for more details)

This is the problem here. Some other DNS server is set up for .test and has a default timeout of 5 seconds.

You’ll have to run lsof and find what process is listening on that port. Run sudo lsof -n -i -P | grep LISTEN and look for port 53.

Also check for files in /etc/resolver/
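Something along these lines (the exact filename under /etc/resolver/ could be anything, often test):

sudo lsof -nP -i :53 | grep LISTEN
ls -la /etc/resolver/
cat /etc/resolver/*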


Ah. Possibly due to a previous install of some VM or similar tool, I guess. On port 53 I have:

mDNSRespo  223  _mdnsresponder   48u  IPv4 0x20108fb09042c09      0t0    TCP *:53 (LISTEN)
mDNSRespo  223  _mdnsresponder   52u  IPv6 0x20108f64641e3c1      0t0    TCP *:53 (LISTEN)

The file in /etc/resolver/ just holds:

nameserver 127.0.0.1

Removing the contents of the resolver file gives me this result, much quicker. Does this seem about what’s expected?

Well, it definitely removed the 5s timeout :sweat_smile: I have that mDNS process running too, but the culprit was the file in /etc/resolver. At some point some tool created that file there, so any request for a .test domain was being sent to 127.0.0.1.
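If anything still behaves oddly after deleting it, flushing the macOS DNS cache is a reasonable follow-up (standard macOS commands):

sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder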
