Newly Provisioned VM Erroneously Shares IP with Example VM

Hi there,

I’m new to the whole toolchain – Vagrant, Trellis, Bedrock, etc.

I’ve successfully provisioned a VM using the example project (example.test) found in the docs.
I’d like to keep that as a reference.

Secondly, I’d like to create a new project where I can provision a second VM.
I’ve followed the Trellis/Bedrock tutorial the same way, configuring the dev wordpress_sites.yml with new values, such as canonical: myuniquesite.test, and changing vagrant.default.yml's vagrant_ip to 192.168.50.6 (incremented by one from the default 192.168.50.5).

See here:

trellis/group_vars/development/wordpress_sites.yml

wordpress_sites:
  myuniquesite.test:
    site_hosts:
      - canonical: myuniquesite.test
        redirects:
          - www.myuniquesite.test
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    admin_email: admin@myuniquesite.test
    multisite:
      enabled: false
    ssl:
      enabled: false
      provider: self-signed
    cache:
      enabled: false

And here:
trellis/vagrant.default.yml

---

vagrant_ip: '192.168.50.6'

vagrant_cpus: 1

vagrant_memory: 1024 # in MB

vagrant_box: 'bento/ubuntu-18.04'

vagrant_box_version: '>= 201807.12.0'

vagrant_ansible_version: '2.8.0'

vagrant_skip_galaxy: false

vagrant_mount_type: 'nfs'

vagrant_install_plugins: true

vagrant_plugins:

- name: vagrant-bindfs

- name: vagrant-hostmanager

# Array of synced folders:

# - local_path: .

# destination: /path/on/vm

# create: false

# type: nfs

# bindfs: true

# mount_options: []

# bindfs_options: {}

# See https://www.vagrantup.com/docs/synced-folders/basic_usage.html#mount_options

vagrant_synced_folders: []
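
(Side note: if I have read the Trellis docs correctly, these defaults can also be overridden in a separate vagrant.local.yml rather than by editing vagrant.default.yml directly. Sketching that here, though I have not tried it myself:)

```yaml
# trellis/vagrant.local.yml (merged over vagrant.default.yml when present)
vagrant_ip: '192.168.50.6'
```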

For whatever reason, after I run vagrant up on the new project, it seems to map itself to the old example VM. I discovered this when running vagrant ssh on the old VM. After myuniquesite.test finishes provisioning, I can visit the URL in a browser. It hangs for a very long time, but the request immediately registers in the nginx access.log of the old VM. Then the browser times out and returns nothing, and that access.log records a 444.

I assume that I have misconfigured something, but I cannot find other places within group_vars/development that seem relevant to configure. Can you please point me in the right direction? Thank you!

Jess

What are the contents of your hosts file?

Hi alwaysblank,

Ah yes, thank you for bringing that up. I have also changed the trellis/hosts/development file to match the trellis/vagrant.default.yml vagrant_ip. I assume this is the file you’re referring to?

Perhaps I should also mention that I am on a mac!

See here:

trellis/hosts/development

# ...
# the commands `vagrant up` and `vagrant provision` would only run the
# `dev.yml` playbook with such options if you were to edit the options
# into the Vagrantfile's `config.vm.provision` section.

[development]
192.168.50.6 ansible_connection=local

[web]
192.168.50.6 ansible_connection=local

I must admit that I did not read this file; I just edited it.

Did I miss something?

No, I’m referring to the hosts file, which is used by your machine to resolve domains to IPs. On a Mac (or really any Unix-y machine) it should be at /etc/hosts. The provisioning process will try to modify it for you, but it’s usually the first place to look when you’re debugging issues where a VM seems to be in the wrong place.
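
To make that concrete, here's a tiny sketch (the two-line file is hypothetical, standing in for /etc/hosts): lookups generally take the first matching line, so a stale entry higher in the file shadows a newer one below it.

```shell
#!/bin/sh
# Hypothetical sample mirroring the situation in this thread: an old
# 192.168.50.5 entry sitting above the newer 192.168.50.6 one.
cat > /tmp/hosts.sample <<'EOF'
192.168.50.5 myuniquesite.test
192.168.50.6 myuniquesite.test
EOF

# Print the IP of the first matching line, the way the resolver would pick it.
awk '$2 == "myuniquesite.test" { print $1; exit }' /tmp/hosts.sample
# -> 192.168.50.5 (the stale entry wins)
```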


Fantastic. I see here in the /etc/hosts file that there are multiple duplicate entries for the default 192.168.50.5 mapped to myuniquesite.test. I suppose I can delete these duplicate entries and try a fresh provision.

Thank you so much! I will report back.

My /etc/hosts file for reference:

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1	localhost
255.255.255.255	broadcasthost
::1             localhost

## vagrant-hostmanager-start id: 32506ef4-0b4b-4650-a074-6bcb1e798f2f
192.168.50.5	example.test
192.168.50.5	www.example.test
## vagrant-hostmanager-end

# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section

## vagrant-hostmanager-start id: a9819679-af35-472e-9f01-f0c88ecaae17
192.168.50.5	example.test
192.168.50.5	www.example.test
## vagrant-hostmanager-end

## vagrant-hostmanager-start id: a0aaab68-7645-45ae-adfb-37000399f4e7
192.168.50.5	myuniquesite.test
192.168.50.5	www.myuniquesite.test
## vagrant-hostmanager-end

## vagrant-hostmanager-start id: e96511e9-86c4-48e4-bef3-115022dee304
192.168.50.5	myuniquesite.test
192.168.50.5	www.myuniquesite.test
## vagrant-hostmanager-end

## vagrant-hostmanager-start id: 3a6de66f-9601-433d-9223-04240287bfd3
192.168.50.6	myuniquesite.test
192.168.50.6	www.myuniquesite.test
## vagrant-hostmanager-end
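
In case it helps anyone later: rather than hand-editing, the stale blocks can be deleted by their id markers. A sketch against a sample file (I would back up the real /etc/hosts first, e.g. sudo cp /etc/hosts /etc/hosts.bak):

```shell
#!/bin/sh
# Sample file standing in for /etc/hosts: one stale 192.168.50.5 block
# and one current 192.168.50.6 block (ids copied from the dump above).
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1 localhost
## vagrant-hostmanager-start id: e96511e9-86c4-48e4-bef3-115022dee304
192.168.50.5 myuniquesite.test
192.168.50.5 www.myuniquesite.test
## vagrant-hostmanager-end
## vagrant-hostmanager-start id: 3a6de66f-9601-433d-9223-04240287bfd3
192.168.50.6 myuniquesite.test
192.168.50.6 www.myuniquesite.test
## vagrant-hostmanager-end
EOF

# Delete everything from the stale block's start marker to its end marker.
sed '/vagrant-hostmanager-start id: e96511e9/,/vagrant-hostmanager-end/d' \
    /tmp/hosts.demo > /tmp/hosts.clean

grep 'myuniquesite' /tmp/hosts.clean
# -> only the 192.168.50.6 lines remain
```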

That did it! I edited out those duplicate entries, re-provisioned the project, and I’m up and running. I appreciate the time you’ve saved me!

Thank you thank you!

