Trellis/Bedrock and Amazon AWS

Hello!

I’m working on a project that will soon be ready for production. I’ve been reading a lot and realized that the Amazon AWS ecosystem seems really great for hosting your application.

What I find interesting is the fact that you can scale your app when needed by firing up more EC2 instances. The database moves to a separate RDS instance with failover and the ability to scale when needed. S3/CloudFront is great for serving your static assets, ensuring reduced load times for your visitors.

This is all good stuff. But as I was about to start deploying my project, I realized I can’t wrap my head around how to get Trellis/Bedrock to play nice with this setup.

The idea is that the app is entirely stateless: like I said, the DB is on RDS and files are on S3. So every instance you fire up connects to these sources to serve your app. But this also means that whenever an instance fails and dies, it gets replaced with a new one, and this new instance needs to be provisioned. It’s here that I’m a bit stumped.

This is how I’m thinking provisioning could work:

  1. Instance fails and gets removed by AWS.
  2. New instance is fired up.
  3. A start-up script installs Ansible.
  4. The script pulls the production branch from GitHub.
  5. ?

Yeah, at step 5 I’m stumped.
Do I set up an additional environment that allows me to run the remote script locally?
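Something like this in the EC2 user-data script, maybe? This is just me sketching out the idea — the repo URL and paths are made up, and I’m assuming the clone contains the Trellis playbooks:

```shell
#!/bin/bash
# Hypothetical EC2 user-data sketch: install Ansible and git, pull the
# production branch, then run the Trellis playbook against localhost
# instead of over SSH. Repo URL and paths are placeholders.
set -euo pipefail

apt-get update
apt-get install -y ansible git

git clone --branch production \
  https://github.com/example/trellis-project.git /opt/trellis

cd /opt/trellis/trellis

# --connection=local tells Ansible to provision this machine directly,
# so no separate control server is needed for bootstrapping.
ansible-playbook server.yml \
  -e env=production \
  --connection=local \
  --inventory 'localhost,'
```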

Also, I’m a bit curious what happens during a deploy. Let’s say I have 4 instances up and running at the moment the deploy script runs; I can’t figure out how the deploy reaches all of them. I imagine I’d need to look at something heavily customized.

Has anyone attempted to set this up who could share some thoughts?

Edit: After some digging around, it seems like creating an AMI to distribute to your instances is one way to go. Is there another solution that requires less build time?


There are two parts to this:

  1. Provisioning new instances
  2. Deploying to instances

There are a lot of options for the first one:

  1. Make a custom AMI and update it (manually) whenever you change your playbooks. Then each new instance in an Auto Scaling Group (ASG) will use that AMI.
  2. Use an ASG lifecycle hook to run a playbook on a separate remote instance.
  3. Do something like this: https://www.reddit.com/r/aws/comments/3szglq/idea_on_how_to_bootstrap_new_instances_created_by/cx2y3au
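For option 1, the AMI cycle can be driven entirely from the AWS CLI. A rough sketch, with placeholder IDs and names throughout:

```shell
# Sketch of the manual AMI update cycle (all IDs/names are placeholders).
# 1. Bake an AMI from an instance you've already provisioned with Ansible.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "trellis-web-v2" \
  --no-reboot

# 2. Create a launch configuration that uses the new AMI.
aws autoscaling create-launch-configuration \
  --launch-configuration-name trellis-web-lc-v2 \
  --image-id ami-0abc1234 \
  --instance-type t2.small \
  --key-name my-keypair

# 3. Point the ASG at it; instances launched from now on use the new image.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name trellis-web-asg \
  --launch-configuration-name trellis-web-lc-v2
```

Note that existing instances keep running on the old AMI until they’re replaced, which is part of why the build/refresh cycle feels slow.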

For deploying, Ansible supports “dynamic inventories”. Basically, you point Ansible at the EC2 inventory script, which queries the AWS API to get a list of instances, and Ansible deploys to every server it returns. This would still require a centralized deployment server.
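In practice that looks something like this, run from the central deployment box. This is a sketch: the tag name and playbook variables are examples (the `-e "env=... site=..."` pattern matches how Trellis’s `deploy.yml` is normally invoked):

```shell
# Sketch: deploy to every EC2 instance the dynamic inventory returns.
# ec2.py is the EC2 dynamic inventory script shipped with Ansible; it
# reads AWS credentials from the environment and groups hosts by tags,
# regions, security groups, etc.
export AWS_ACCESS_KEY_ID="AKIA_PLACEHOLDER"       # placeholder credentials
export AWS_SECRET_ACCESS_KEY="SECRET_PLACEHOLDER"

# --limit restricts the run to instances tagged Role=web (example tag).
ansible-playbook -i ec2.py deploy.yml \
  --limit tag_Role_web \
  -e "env=production site=example.com"
```

Each instance returned by the inventory gets the same deploy run, so all 4 (or 40) servers end up on the same release.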


Cool! Thanks so much for your insights on this.

I guess the AMI route could take care of both provisioning and deployment, although it seems like a lot of work with long build times.

Do you have any reading material on deploying via dynamic inventories?

Also, are you looking into making Roots more AWS-friendly? It seems (at least to me) that there are a lot of benefits to going with AWS over DigitalOcean, except maybe pricing (which can be adjusted to your needs, of course) and complexity.

Thanks again!

The Ansible docs have more info: http://docs.ansible.com/ansible/intro_dynamic_inventory.html#example-aws-ec2-external-inventory-script

Yeah, ideally Trellis would be more AWS-friendly. Nothing is currently geared towards DigitalOcean either; it’s just done at its simplest by default, which means a single static server.
