Staging environment on Vagrant box

First of all, huge thanks to the Roots team and all contributors for developing and maintaining this great WordPress stack. It’s literally a night-and-day difference between manual provisioning/deployment and the Ansible-automated process.

Now to my issue: I’ve set up a staging environment using a Vagrant box on my little home server. I have a static IP from my ISP, so I was able to set up proper DNS resolution for the staging domain names, and with port-forwarding rules (on the internet router and the Vagrant host machine) the staging box is accessible from the Internet via the SSH and HTTP ports.

Everything works just fine (provisioning, deployment, HTTP access) for several staging sites, but eventually the Vagrant box becomes inaccessible, and I think it’s due to multiple unsuccessful attempts to break in via SSH. After the box becomes inaccessible from the outside world I can still SSH in using the LAN IP (e.g. 192.168.50.77), or, if I was already connected to the box via SSH, I can see it’s up and running.

The reason, in my view, is that the Vagrant box sees all break-in attempts as coming from the host: in /var/log/auth.log I see multiple SSH error entries originating from 10.0.2.2, and only from that address (which is perfectly expected, since 10.0.2.2 is the VirtualBox NAT address that all host-forwarded traffic appears to come from). After several attempts 10.0.2.2 gets blocked and the VM becomes inaccessible.
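For anyone who wants to reproduce the check, this is roughly how I tallied the entries (a sketch assuming Ubuntu’s default auth.log line format; the helper name is mine):

```shell
# Tally failed-password attempts per source IP.
# Assumes lines like "... Failed password for <user> from <IP> port ... ssh2".
failed_by_ip() {
  grep 'Failed password' "$1" \
    | awk '{ for (i = 1; i < NF; i++) if ($i == "from") print $(i + 1) }' \
    | sort | uniq -c | sort -rn
}
# failed_by_ip /var/log/auth.log   # on my box, every entry is 10.0.2.2
```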

I have not been able to figure out which component is responsible for the block, or how I can alter the staging environment to avoid it (I know it’s probably not the best idea).

I confirmed that if I completely reset the iptables rules, the staging Vagrant box again becomes accessible via both SSH and HTTP.
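By “completely reset” I mean flushing everything and opening the default policies; roughly this (run as root, and obviously acceptable only on a throwaway staging box, since it disables the firewall entirely):

```shell
# WARNING: disables the firewall completely -- staging box only.
flush_firewall() {
  iptables -P INPUT ACCEPT     # open the default policies first so the
  iptables -P FORWARD ACCEPT   # flush can't lock us out mid-way
  iptables -P OUTPUT ACCEPT
  iptables -F                  # flush all rules in all chains
  iptables -X                  # delete user-defined chains
}
# run as root: flush_firewall
```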

Thanks,
Vladimir

It’s probably fail2ban.

If so, you’ll find the answer here. (That page is bookmarked in my browser, I use it so much.)

In summary:

  • Run iptables -L -n to find the rule that is blocking that IP address.
  • Run fail2ban-client status to match the rule with the fail2ban jail name.
  • Run fail2ban-client set YOURJAILNAMEHERE unbanip IPADDRESSHERE

You can prevent it from happening again by adding your IP address to the Trellis whitelist.
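If I remember the variable name right, the whitelist lives in `group_vars/all/security.yml` — double-check against your Trellis version. Something like:

```yaml
# group_vars/all/security.yml (variable name from memory -- verify it
# against your Trellis version). 10.0.2.2 is the VirtualBox NAT address
# that all host-forwarded traffic appears to come from inside the VM.
ip_whitelist:
  - 10.0.2.2
```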

fail2ban has two jails configured, ssh and sshd, and both show “Currently banned: 0”. So it seems it’s not fail2ban.

It may be the ferm config; however, there is something more: I dumped the iptables rules right after boot, when all was working fine (iptables -L), then waited for the box to become inaccessible, SSHed in via 192.168.50.77, and dumped iptables again. The two dumps are identical (confirmed by diff).
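One thing I’m going to check next: if the ferm config includes a rate-limit rule built on the iptables `recent` match (Trellis’s `dport_limit` entries work that way, as far as I can tell), the rule listing never changes when a source gets blocked — the per-IP state lives in /proc instead, which would explain the identical dumps. A sketch of the check (list names depend on the actual firewall config):

```shell
# The iptables "recent" match keeps per-source state in /proc, not in the
# rule listing, so `iptables -L` output can stay identical while IPs are
# being dropped.
if [ -d /proc/net/xt_recent ]; then
  for list in /proc/net/xt_recent/*; do
    echo "== $list =="
    cat "$list" 2>/dev/null || true   # look for src=10.0.2.2 entries here
  done
fi
```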