Can deploy to production, but not staging

FWIW I followed the same steps as Josh did in his first post here:

Should running ``eval `ssh-agent -s` `` return a new agent pid (e.g. 26167)? Each time I run it I get a new pid. I’m not sure if that’s expected behavior.
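From what I can tell, each run of ``eval `ssh-agent -s` `` starts a fresh agent process, so the changing pid on its own may be expected; the check I’ve been doing instead is whether a key is actually loaded into the agent my shell is using (standard OpenSSH commands, key path is just an example):

# Show which agent this shell is talking to and what keys it has loaded
echo $SSH_AUTH_SOCK     # should point at a socket, not be empty
ssh-add -l              # "The agent has no identities." means nothing is loaded
ssh-add ~/.ssh/id_rsa   # load the deploy key if the list is empty (example path)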

Also, when I run `git log -- trellis`, I don’t see anything changed there since before deployments to staging stopped working. I realize that not all files were checked into git, but I’m trying to determine what changed and whether that change broke staging deployments.
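For reference, this is roughly how I’ve been checking for changes (standard git; `trellis` is just the subdirectory in our repo):

# Recent commits that touched the trellis directory
git log --oneline -- trellis
# Anything modified or untracked under trellis/ that the log wouldn't show
git status --short trellis
git diff HEAD -- trellis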

I’ve seen people with the same issue before and it was because they were deploying a non-existent branch. Have you checked which branch you’re deploying and that it exists on your remote repo?

Thanks for the reply. The branch is staging and hasn’t changed. I think it’s likely something with my local. I checked SSH Forwarding, SSH Agent, modified my ssh config, and more.

What else is left? ¯\_(ツ)_/¯

Can you SSH in to the server and pull down the repo manually? Not a solution, but it could help diagnose where the problem is occurring.
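Something along these lines, assuming the default Trellis `web` user - swap in your own host and repo URL:

# SSH to the staging box the same way Trellis does (with agent forwarding)
ssh -A web@staging.SITENAME.com
# From the server, confirm GitHub accepts the forwarded key...
ssh -T git@github.com
# ...and that the staging branch is reachable from there
git ls-remote --heads git@github.com:SITENAME/SITENAME.com-wordpress.git staging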

Has the staging branch been pushed up to GitHub?

edit:

  1. Sorry, just saw that you said production uses the same branch - so sanity check here: Are you deploying your staging branch to production with no problems?
  2. Have you ever been able to deploy to this staging environment? Did provisioning run smoothly?
  1. I meant to say that we have been using the staging branch for our staging server, while master is used for production. We were able to deploy just fine the other day. I think something on the server got messed up somehow, but I’m not sure what else to try.
  2. Yes and yes.

Thanks for all of your help!

Doesn’t look like that’s an option here since git isn’t deployed to the server.

Trellis deploys by (in part) running `git clone` against your repository on the remote server (see: https://roots.io/trellis/docs/remote-server-setup/#deploy). If you don’t have git on your remote server, it would be impossible for it to do that.

Thanks for the response. I should have said that git itself isn’t installed, but I do understand that Trellis uses it to deploy updates. The same is true of production, where deployments work just fine.

Is there anything that I can check that may have been foobarred on the server?

I can successfully provision with `ansible-playbook server.yml -e "site=SITENAME.com env=staging"`, but deploying with `ansible-playbook deploy.yml -e "site=SITENAME.com env=staging"` fails.

Any other ideas besides provisioning another fresh server?

Ok, have you tried deploying to staging with your master branch?

Can you show us the contents of your `trellis/group_vars/staging/wordpress_sites.yml` please?

git should be installed during provision - https://github.com/roots/trellis/blob/49cf5de36c7bfb0661a922288da898228cc526e9/roles/common/defaults/main.yml#L27
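A quick way to confirm that on the box itself (assuming you can SSH in; Trellis provisions Ubuntu, so dpkg applies):

# Check that git actually made it onto the staging server
which git && git --version
dpkg -s git | grep -E 'Status|Version'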


Change https://github.com/roots/trellis/blob/49cf5de36c7bfb0661a922288da898228cc526e9/roles/deploy/tasks/update.yml:

  - name: Clone project files
    git:
      repo: "{{ project_git_repo }}"
      dest: "{{ project_source_path }}"
      version: "{{ project_version }}"
      accept_hostkey: "{{ project.repo_accept_hostkey | default(repo_accept_hostkey | default(true)) }}"
      force: yes
-   ignore_errors: true
-   no_log: true
+   ignore_errors: false
+   no_log: false
    register: git_clone

Then re-deploy with `ansible-playbook deploy.yml -e "site=MYSITE.com env=staging" -vvv` and you will get a better error.

Here it is:

wordpress_sites:
  SITENAME.com:
    site_hosts:
      - canonical: staging.SITENAME.com
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    repo: git@github.com:SITENAME/SITENAME.com-wordpress.git # replace with your Git repo URL
    repo_subtree_path: site # relative path to your Bedrock/WP directory in your repo
    branch: staging
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: letsencrypt
      hsts_include_subdomains: false
    cache:
      enabled: false
    htpasswd:
      - name: SITENAME
        password: staging

After updating that file I get this:

TASK [deploy : Clone project files] **********************************************************************************************************************************************************************************************************************
System info:
Ansible 2.5.3; Darwin
Trellis version (per changelog): "Update xdebug tunnel configuration"
---------------------------------------------------
Local modifications exist in repository (force=no).
fatal: [staging.SITENAME.com]: FAILED! => {"before": "b0af714b56ce0f0b39ef303b653d18dbc04ad2c5", "changed": false}

Does that help determine the issue at all?
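That error means git found uncommitted changes in the checkout Trellis keeps on the server. One way to see exactly what changed, assuming the default Trellis layout of `/srv/www/<site>/shared/source` and the `web` user:

# Inspect the server-side checkout the deploy complained about
# (replace SITENAME.com with your site key)
ssh web@staging.SITENAME.com
cd /srv/www/SITENAME.com/shared/source
git status --short   # shows the "local modifications" git refused to overwrite
git diff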

Updating force, per this link, looks like it corrected my issues:

- name: Clone project files
  git:
    repo: "{{ project_git_repo }}"
    dest: "{{ project_source_path }}"
    version: "{{ project_version }}"
    accept_hostkey: "{{ project.repo_accept_hostkey | default(repo_accept_hostkey | default(true)) }}"
    force: yes
  ignore_errors: false
  no_log: false
  register: git_clone

I was able to deploy to staging. I have since removed the force option, and can still deploy!

Thank you all for the help in pointing me in the right direction!

For those coming here from Google: the reason is that `/srv/www/XXXX/shared/source` was modified manually (which you should never do!).

To make deploys work again:

Option 1: Set `force: true` to discard the local changes caught by git
Option 2: Delete `/srv/www/XXXX/shared/source` (see the sketch after this list)
Option 3 (best): Update Trellis, specifically this pull request - https://github.com/roots/trellis/pull/999
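A rough sketch of Option 2, assuming the default Trellis `web` user and paths (the next deploy will simply re-clone the repo):

# Remove the stale checkout; deploy.yml re-clones it fresh on the next run
# (replace XXXX with your site key, e.g. SITENAME.com)
ssh web@staging.XXXX 'rm -rf /srv/www/XXXX/shared/source'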

No matter which option you go for, don’t forget to revert to `no_log: true`, as logging sometimes prints out git credentials.

Rule of thumb: if you are changing source code on the remote server manually, you are probably doing it wrong.


Completely agree. I think panic mode crept in when we had an issue with the site and, at the same time, deployments to staging broke. Sometimes it’s better to take a step back. #LessonLearned.

Thanks again all!


This topic was automatically closed after 42 days. New replies are no longer allowed.