I’m new to the whole toolchain – vagrant, trellis, bedrock, etc.
I’ve successfully provisioned a VM using the example project (example.test) found in the docs.
I’d like to keep that as a reference.
Secondly, I’d like to create a new project where I can provision a second VM.
I followed the Trellis/Bedrock tutorial the same way, configuring the development wordpress_sites.yml with new values, such as canonical: myuniquesite.test, and changing vagrant.default.yml's vagrant_ip to 192.168.50.6 (incremented by one from the default 192.168.50.5).
For whatever reason, after I run vagrant up on the new project, it seems to map itself to the old example VM. I discovered this by running vagrant ssh on the old VM. After myuniquesite.test finishes provisioning, I can visit the URL in a browser. It hangs for a very long time, but the request immediately registers in the nginx access.log of the old VM. Then the browser times out and returns nothing; the access.log shows a 444.
I assume that I have misconfigured something, but I cannot find other places within group_vars/development that seem relevant to configure. Can you please point me in the right direction? Thank you!
Ah yes, thank you for bringing that up. I have also changed the trellis/hosts/development file to match vagrant_ip in trellis/vagrant.default.yml. I assume this is the file you’re referring to?
Perhaps I should also mention that I am on a mac!
See here:
trellis/hosts/development
# ...
# the commands `vagrant up` and `vagrant provision` would only run the
# `dev.yml` playbook with such options if you were to edit the options
# into the Vagrantfile's `config.vm.provision` section.
[development]
192.168.50.6 ansible_connection=local
[web]
192.168.50.6 ansible_connection=local
I must admit that I did not read this file, I just edited it.
No, I’m referring to the hosts file, which your machine uses to resolve domain names to IPs. On a Mac (or really any Unix-y machine) it lives at /etc/hosts. The provisioning process will try to modify it for you, but it’s usually the first place to look when you’re debugging issues where a VM seems to be in the wrong place.
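A quick way to spot trouble there is to list every hostname the file defines and see which ones appear more than once (duplicates usually mean stale plugin-written entries). This is just a sketch using standard awk/sort/uniq; adapt as needed:

```shell
# dup_hosts: print hostnames that appear on more than one line of a
# hosts-format file. Skips comment lines; fields 2..NF are the names.
dup_hosts() {
  awk '!/^#/ && NF { for (i = 2; i <= NF; i++) print $i }' "$1" \
    | sort | uniq -d
}

# Check the real file:
dup_hosts /etc/hosts
```

On macOS you can also ask the resolver directly which IP a name currently maps to with `dscacheutil -q host -a name myuniquesite.test`.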
Fantastic. I see in my /etc/hosts file that there are multiple stale entries mapping myuniquesite.test to the old default IP, 192.168.50.5. I suppose I can delete those duplicate entries and try a fresh provision.
Thank you so much! I will report back.
My /etc/hosts file for reference:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
## vagrant-hostmanager-start id: 32506ef4-0b4b-4650-a074-6bcb1e798f2f
192.168.50.5 example.test
192.168.50.5 www.example.test
## vagrant-hostmanager-end
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
## vagrant-hostmanager-start id: a9819679-af35-472e-9f01-f0c88ecaae17
192.168.50.5 example.test
192.168.50.5 www.example.test
## vagrant-hostmanager-end
## vagrant-hostmanager-start id: a0aaab68-7645-45ae-adfb-37000399f4e7
192.168.50.5 myuniquesite.test
192.168.50.5 www.myuniquesite.test
## vagrant-hostmanager-end
## vagrant-hostmanager-start id: e96511e9-86c4-48e4-bef3-115022dee304
192.168.50.5 myuniquesite.test
192.168.50.5 www.myuniquesite.test
## vagrant-hostmanager-end
## vagrant-hostmanager-start id: 3a6de66f-9601-433d-9223-04240287bfd3
192.168.50.6 myuniquesite.test
192.168.50.6 www.myuniquesite.test
## vagrant-hostmanager-end
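For the cleanup, I’m thinking of something like the following: edit a copy first, review the diff, and only then swap it into place. The three ids in the loop are prefixes of the stale block ids from my paste above (the duplicate example.test block and the two 192.168.50.5 myuniquesite.test blocks); this is just a sketch, not a vetted script:

```shell
# Work on a copy so the live file is untouched until reviewed.
cp /etc/hosts /tmp/hosts.cleaned

# Delete each stale vagrant-hostmanager block by its id marker.
# (Prefixes of the ids shown in the /etc/hosts paste above.)
for id in a9819679 a0aaab68 e96511e9; do
  sed -i.bak "/vagrant-hostmanager-start id: $id/,/vagrant-hostmanager-end/d" \
    /tmp/hosts.cleaned
done

diff /etc/hosts /tmp/hosts.cleaned       # review what would change
# sudo cp /tmp/hosts.cleaned /etc/hosts  # then swap it in
```

Afterwards, running `vagrant hostmanager` (from the vagrant-hostmanager plugin that evidently wrote these blocks) should re-sync the entries for the running VMs.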