First of all, huge thanks to the Roots team and all contributors for developing and maintaining this great WordPress stack. It's literally a night and day difference between manual provisioning/deployment and the Ansible-automated process.
Now to my issue: I've set up a staging environment using a Vagrant box on my little home server. I have a static IP from my ISP, so I was able to set up proper DNS resolution for the staging domain names, and using port forwarding rules (on the internet router and on the Vagrant host machine) I made the staging box accessible from the Internet on the SSH and HTTP ports.
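For context, the host-machine side of the forwarding is just standard Vagrant forwarded ports; the port numbers below are only illustrative of my setup, not a recommendation:

```ruby
# Illustrative sketch only - actual ports in my setup differ
Vagrant.configure("2") do |config|
  # Forward host port 2222 to the guest's SSH port
  config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh"
  # Forward host port 8080 to the guest's HTTP port
  config.vm.network "forwarded_port", guest: 80, host: 8080
end
```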
Everything worked just fine (provisioning/deployment/HTTP access) for several staging sites, but eventually the Vagrant box becomes inaccessible, and I think it's due to multiple unsuccessful attempts to break in via SSH. After the box becomes unreachable from the outside world I can still SSH in using the LAN IP (e.g. 192.168.50.77), or, if I was already connected to the box via SSH, I can see it's up and running.
The reason, in my view, is that the Vagrant box sees all break-in attempts as coming from the host: in /var/log/auth.log I see multiple SSH error entries originating from 10.0.2.2, and only from that address (which is perfectly expected, since that's the default NAT gateway address). After several attempts 10.0.2.2 gets blocked and the VM becomes inaccessible.
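For what it's worth, this is just how I tallied the failed logins per source IP (plain shell, nothing Trellis-specific); on my box every line comes out attributed to 10.0.2.2:

```shell
# Count failed sshd login attempts per source IP in auth.log
sudo grep 'sshd' /var/log/auth.log \
  | grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | sort | uniq -c | sort -rn
```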
I was not able to figure out which component is responsible for the block, or how I can alter the staging environment to avoid it (I know that this is probably not the best idea security-wise).
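In case it helps someone point me in the right direction, this is how I tried to find the blocking component. I suspect fail2ban (I believe Trellis installs it by default), but the jail name below is a guess and may differ on your version:

```shell
# Look for a firewall rule mentioning the NAT gateway address
sudo iptables -S | grep -F '10.0.2.2' \
  || echo 'no iptables rule mentions 10.0.2.2 (or iptables unavailable)'

# If fail2ban is running, list its jails and inspect the ssh one
# (the jail may be named `ssh` or `sshd` depending on the version)
sudo fail2ban-client status || echo 'fail2ban-client not available?'
sudo fail2ban-client status sshd || true
```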
I did confirm that if I completely reset the iptables rules, the staging Vagrant box becomes accessible again via both SSH and HTTP.
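By "completely reset" I mean roughly the following, run inside the VM over the LAN/`vagrant ssh` connection (it obviously disables the firewall entirely, so I only use it for debugging):

```shell
# Set default policies back to ACCEPT, then flush everything
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F   # flush all rules in all chains
sudo iptables -X   # delete user-defined chains (e.g. fail2ban's)
```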
Thanks,
Vladimir