I'm getting a "Site can't be reached" page when I go to my local dev site

I’m trying to spin up a local Trellis environment, and I get a “site can’t be reached” page when I try to go to mydomain.dev.

Has anyone else had problems with this?

How did you go with vagrant up? Did you get the confirmation “Your Trellis Vagrant box is ready to use!”?

Yes I did, but then nothing when I go to the url.

I’ve had the same issue here.

It’s not a true fix, but I was able to keep developing by doing this:

```shell
# inside the VM: dump the database before destroying the box
vagrant ssh
cd /srv/www/$SITENAME/current/
wp db export

# back on the host: rebuild the box from scratch
vagrant destroy -f && vagrant up

# inside the fresh VM: re-import the dump
vagrant ssh
cd /srv/www/$SITENAME/current/
wp db import $DBNAME.sql
```


Hello guys,

I’ve now run into the same problem three times in a short period.
Is there a solid solution yet, or a likely reason why this keeps happening?
Thanks a lot!

Greetings Andre

Hey Andre. Given the ambiguity of the original post and the posts that follow it, the problem actually hasn’t been identified so there is no solid solution. Posting some relevant information pertaining to your setup would help us identify what the issue actually is.

One possible reason (of many) that you could be facing this issue is if you are using a .dev TLD for your local development site. If so, you should change it to .test, since browsers now force HTTPS on .dev (Google owns the TLD). Follow this guide to fix that:



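For reference, the change itself is small: swap the TLD in your Trellis site config, then re-provision and search-replace the database. A minimal sketch, assuming the default group_vars/development/wordpress_sites.yml layout and an example site name:

```yaml
# group_vars/development/wordpress_sites.yml (site and host names are examples)
wordpress_sites:
  example.com:
    site_hosts:
      - canonical: example.test   # was example.dev
        redirects:
          - www.example.test      # was www.example.dev
```

After editing, run vagrant reload --provision, and then wp search-replace example.dev example.test inside the VM so any URLs stored in the database are updated too.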
I had the same issue, and the problem was that publicPath in config.json was wrong after the Trellis setup (it was “wp-content/themes/mytheme” instead of “/app/themes/mytheme”).
After fixing the path and running vagrant reload --provision, it worked for me.
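For anyone hitting the same thing: Bedrock serves themes from /app/themes/…, not wp-content/themes/…, so the theme’s asset config has to match. A sketch of the relevant fragment of config.json, with the theme name and dev URL as placeholders (the key that matters for this fix is publicPath):

```json
{
  "publicPath": "/app/themes/mytheme",
  "devUrl": "http://example.test"
}
```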


I’ve noticed this on a site where I updated Trellis, and now on a new spin-up: every time I vagrant halt and vagrant up, I get “Site can’t be reached,” and the only way to fix it is to vagrant destroy and vagrant up again. There are no errors in the nginx logs or the site error log. It’s a pretty basic setup. I’ve only set up local so far, and my files look like this:

```yaml
wordpress_sites:
  azoptical.com:
    site_hosts:
      - canonical: azoptical.test
        redirects:
          - www.azoptical.test
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    admin_email: admin@example.test
    multisite:
      enabled: false
    ssl:
      enabled: false
      provider: self-signed
    cache:
      enabled: false
```

I think the only other thing I’ve changed so far is manifest.json, to point at the .test URL. I’ve also built some ACF templates, but since there are no errors in the log or on screen, that seems unrelated. Without fail, any time I halt and then vagrant up, I get “Site can’t be reached.” I tried running vagrant provision, but that doesn’t help. The only thing that solves it is destroying and running vagrant up again.

The only odd thing I’ve noticed is that when I went to clean up and remove some old Vagrant boxes, I got a warning:

```
/Users/djames/Sites/azoptical.com/trellis/Vagrantfile:4: warning: already initialized constant ANSIBLE_PATH
/Users/djames/Sites/dentalimplants/trellis/Vagrantfile:4: warning: previous definition of ANSIBLE_PATH was here
/Users/djames/Sites/azoptical.com/trellis/Vagrantfile:5: warning: already initialized constant ANSIBLE_PATH_ON_VM
/Users/djames/Sites/dentalimplants/trellis/Vagrantfile:5: warning: previous definition of ANSIBLE_PATH_ON_VM was here
```

I was destroying an old powered-off site (dentalimplants), and I got this warning even though azoptical had already been destroyed, so it wasn’t even showing in vagrant global-status or VirtualBox. No other boxes were ever running concurrently.

I tried the vagrant halt and vagrant up with a clean install/database and the Twenty Seventeen theme, to rule out anything specific to my code or database, and the problem persists.

I’m using Trellis at master; I don’t have the changes from the commit six days ago, but I’m up to date otherwise.

When this happens, you should check your /etc/hosts file. The entries might be out of date if the vagrant-hostmanager plugin isn’t cleaning them up or updating them properly.
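A quick way to sanity-check that is a small shell helper. This is just a sketch: the host name is a placeholder, and 192.168.50.5 is Trellis’s default dev box IP (adjust if you’ve customized it):

```shell
# check_host_entry: succeed if an /etc/hosts-style file has an entry for the given hostname
check_host_entry() {
  hosts_file="$1"
  site_host="$2"
  # match the hostname as a whole word following whitespace
  grep -qE "[[:space:]]${site_host}([[:space:]]|\$)" "$hosts_file"
}

# usage: warn if the dev site is missing from /etc/hosts
if ! check_host_entry /etc/hosts example.test; then
  echo "missing entry; add: 192.168.50.5 example.test"
fi
```

If the entry is missing, vagrant-hostmanager should normally have written it during vagrant up; adding it by hand is only a stopgap.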

I didn’t see the entry in /etc/hosts, so I added it manually, but no change. I ran vagrant provision as a sanity check, but the site still can’t be reached until I destroy and start again.

@dkjames you might be running into this if you created this Trellis install recently: https://github.com/roots/trellis/issues/979

If so, apply this change to your Trellis installation:

…and make sure to re-provision :slight_smile:


That did it. Thanks, man.

Patching that file fixed it for me as well.