What happens to the site's .git directory on production deploy

I’m looking in trellis > roles > deploy > update.yml and build.yml

Trying to figure out what is removing, or simply not including, the .git directory on deploy to /srv/www//releases//site.

Basically, I’m trying to include my .git directory in each release that gets deployed.

Trellis clones the repo to a project_source_path that is shared across deploys. Each deploy then runs git archive (which omits the .git dir) to put the site contents in place.
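For reference, a rough sketch of the kind of task that does this (illustrative only, not the literal Trellis task; variable names like project_version and deploy_helper.new_release_path are assumptions):

- name: Copy project files from the shared source to the new release
  shell: git archive {{ project_version | default('HEAD') }} | tar -x -f - -C "{{ deploy_helper.new_release_path }}"
  args:
    chdir: "{{ project_source_path }}"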

Not knowing why you want the .git dir in release dir, I can’t comment on whether it’s a good idea (and maybe I couldn’t comment even if I were to know :thinking:).

You might play around with overriding project_shared_children to include the .git dir, something like this (reference; untested, and maybe not recommended):

# group_vars/all/main.yml
project_shared_children:
  - path: web/app/uploads
    src: uploads
  - path: .git
    src: source/.git

Trying to use it to keep track of plugin autoupdates I allow on a couple of production servers. Shame on me, I know, but it’s my way of dealing with premium plugins that will only update with their license. I wanted to trigger an automated git commit anytime there’s a file change in the current release’s directory, so I can keep track of everything in case any of their autoupdates ever break anything.
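Something like this is the effect I’m after (a polling cron job via Ansible’s cron module rather than a true on-change trigger; the schedule, user, and path here are just placeholders):

- name: Auto-commit plugin changes on production
  cron:
    name: "auto-commit plugin updates"
    minute: "*/30"
    user: web
    job: "cd /srv/www/example.com/current && git add -A && (git diff --cached --quiet || git commit -m 'auto-commit plugin update')"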

So yeah, this is probably not recommendable but this appears to work-ish

What I’ve found so far.

I realized that the release directory contents are a bit different from what is stored in the project git repo. That causes more problems if I want to include that git repo in the current release’s directory but don’t want to interfere with the Trellis way of life as far as rollbacks and vendor files are concerned. Which I don’t, because I’m not entirely sure what’s going on in that directory.

Solution: do not include the git repo in every release directory. Instead, add a build step that symlinks the contents of the /srv/www//sources/site/ directory into each release on deploy, rather than actually copying the files from site into that directory. That way the site files stay neatly in their /srv/www/*/source/ folder, where the .git directory natively resides on the server with Trellis’s deployment. From there I can trigger automated commits on file changes when the site directory updates.
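A rough sketch of the kind of replacement build step I mean (untested; the deploy_helper path and the list of children to link are assumptions):

- name: Symlink site files from the shared source into the release
  file:
    src: "{{ project_source_path }}/{{ item }}"
    dest: "{{ deploy_helper.new_release_path }}/{{ item }}"
    state: link
  with_items: "{{ site_symlink_children | default(['web', 'config']) }}"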

[screenshot: listing of /srv/www/*/current/ (this picture is inaccurate)]

This setup ^ currently works. I can let admin users update/install plugins, git commit, and git pull to a local dev environment so they’re in sync. Also noteworthy: the .env file needs to be in the /site folder, because the symlinks can’t read it from the current directory to establish a database connection. You can copy it (being wary to leave nothing owned by root; anything with root ownership in the above screenshot was deleted) or symlink it back to the source/site directory.
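For the symlink-it-back option, something along these lines (paths are assumptions based on the layout described above):

- name: Symlink the release .env back into the source/site directory
  file:
    src: "{{ deploy_helper.new_release_path }}/.env"
    dest: "{{ project_source_path }}/.env"
    state: link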

as to the implications beyond bad practices and symlink insanity… ¯\_(ツ)_/¯

Currently, whatever I’ve done here does break the deploy process, but that’s because I still need to go digging into some of the Ansible bits. If the files here need to be rolled back, they still need to be copied into the older release folders if I don’t want to break those rollback scripts. So on a new deploy build, before the /site directory is updated, those files need to be copied into that release directory over top of their symlinks. Then the new incoming release directory needs to be symlinked back to the new site folder.

I feel like this should work.

Idunno, I always feel like I barely know what I’m doing. Thoughts?

edit: this is also useful info for understanding what’s going on with the deploy/release steps: https://docs.ansible.com/ansible/deploy_helper_module.html

disclaimer: I have limited familiarity with this topic.

Managing dependencies with composer is ideal. However, it seems that challenges arise with 1) private/paid plugins, 2) non-dev clients who want to update without having to deal with composer, and 3) a large number of composer deps to be constantly updated.

I respect that you’re ambitiously trying to deal with a tough issue. After doing only a superficial reading of your process, my main concerns are 1) the complexity of the process, 2) the modifications required to Trellis (but ultimately maybe Trellis core will need modification to address the issue?), and 3) the result that you would end up with version mismatch between installed plugins and plugin versions listed in composer.json and composer.lock. On this last point, for example, imagine that a user updates a plugin but your next Trellis deploy downgrades back to the older version listed in composer.lock. But maybe your process somehow prevents that.

I wonder if it would actually be less hassle – and easier for someone else to inherit your project – if you were to just have the clients or an update notifier plugin email you about needed updates, then you just do them manually via the regular composer process.

Composer access to private plugins

This isn’t directly on topic, but for the sake of completeness, I’ll mention that there is a fair amount of discussion and there are options for dealing with private plugins in composer. Here’s a big long thread on various options with private plugins. A couple of big agencies I know of have begun using Private Packagist with great satisfaction.

Updating composer dependencies

I’m even less familiar with the issue of autoupdating composer dependencies. I suspect that few people have found a broadly applicable solution.

Some recent posts that are related in my mind:

As for being notified of needed plugin updates, I don’t have a service I’ve used personally, but there are plugins that will at least offer notification via email, e.g., WP Updates Notifier and Wordfence. I haven’t evaluated them, so these aren’t endorsements. Even if you are notified you still face the labor of manually updating all the composer deps.

I haven’t even tried looking yet, but it seems like there should be a service that…

  • monitors/detects updates for plugins you specify
  • autoupdates a project’s composer.json and composer.lock
    (and commits updates to project)
  • triggers a CI build and deploy

I’d be interested to hear about solutions people have discovered.


Thanks for the input, I agree, in an ideal situation I’d be using Composer to manage everything.

It makes the most sense. But yeah, reality isn’t so nice. Doesn’t help that I’m not the most familiar with composer whatsoever, but I have it pegged as pretty analogous to npm or any other package manager…(he says blindly without reading anything about it). After banging my head against this for the past couple of days, I’ve arrived at this.

Roughly, as I understand it, the deploy process is as follows

- include: initialize.yml

This file sets the project path as /srv/www/{{ sitename }} for the pre/post hooks and the main deploy playbook.

 - include: update.yml

This file handles most, if not all, of the git processing. The clone project step broke on me here due to parity issues between git in production vs. dev (see: Deployment failure due to local modifications on production server).

I band-aided it by changing the clone project files step to:

# Register whether the repo already exists so the clone (and the later pull) can be conditional
- name: Check whether the project has already been cloned
  stat:
    path: "{{ project_source_path }}/.git"
  register: git_project

- name: Clone project files if they don't exist
  git:
    repo: "{{ project_git_repo }}"
    dest: "{{ project_source_path }}"
    version: "{{ project_version }}"
    accept_hostkey: "{{ repo_accept_hostkey | default(true) }}"
    force: true
  ignore_errors: true
  no_log: true
  register: git_clone
  when: not git_project.stat.exists

(this was changed later)

I added the ‘when’ step so it’s supposed to only clone the git repo if it doesn’t already exist. I should probably have another step do a git pull if it does exist.
~ added

- name: Git pull
  command: git pull
  args:
    chdir: "{{ project_source_path }}"
  when: git_project.stat.exists

(need to make sure user www has the correct deploy keys)
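For example, something like this could ensure that user has a key to pull with (the user name and key type are assumptions, and the generated public key still has to be added as a deploy key on the git host):

- name: Ensure the www user has an SSH key pair for git pulls
  user:
    name: www
    generate_ssh_key: yes
    ssh_key_type: ed25519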

Because no, I don’t really want to be updating plugins on a production server. But if I’m relying on developers who aren’t composing, and I am in fact updating plugins in a live environment, then I need to separate the mayonnaise repos from the composed-up peanut butter packages. Plus I need to keep all packages mirrored between the dev environment and the production environment to maintain sanity. I’ll do that with git; but back to organizing the packages.

I thought I could separate composer packages from private 3rd-party packages by putting the packages I want to keep updated automatically (via their license, with auto git commits) in the mu folder, but after reading the WP documentation for that directory, I don’t even feel like dealing with the nuances of packaging there. Since they’re already not composing, I can almost guarantee with 9000% probability that if I allow 3rd-party packages to try and auto-update in the mu directory, I’m gonna have a bad time.

It would probably be easier to just enable plugin updates in production and then restrict access to updating plugins via user roles. (Not actually sure how granular WordPress is in that regard, but I feel like I should be able to say only these users are allowed to update plugins, which should help out in the “oblivious users messing up composer versioning” department.)

Either way, like I said, this is a bad idea. I can get away with this because the sites I’m going to test this on are pretty much completely static builds, and not very traffic heavy whatsoever. So I’ll ask for forgiveness on production plugin updates rather than permission.

So at this point, before even getting into the next couple of files, I’ve realized that I’m going to be the crazy person who completely throws Ansible’s deploy helper out the window. And I’ve really only got two, probably bad, reasons why.

  1. Isn’t this what git is for?
    Can I not snapshot the DB and the site files, commit them to my git repo, and then roll back to them if necessary?

  2. I’m cheap, and with the deploy_helper, my tiny EC2 instances fill up at 10 releases. So I either need to clean that up manually, or build something that zips those releases and exports them before nuking them, so they don’t fill up at 10. The deploy helper already has a “keep releases” switch to dump releases after an arbitrary number (see the sketch just below this list). Still, I’m already paying for private git repos, with a GitLab project on the back burner, so I might as well use that for storage/backup instead of paying AWS costs. (That’s a bit of my reasoning for flagrantly bad practices; I feel like I’m apologizing while I continue banging on you guys’ elegant glass building with a rock hammer.) Plus, it looks like the release and rollback roles are only related to site files anyway? So I really don’t understand the reasoning behind using Ansible’s deploy_helper module over just calling a git revert. I’m not too great in that department either, so that is more of a question than a statement.
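For reference, a minimal sketch of that keep_releases switch (the path variable here is an assumption):

- name: Clean up old releases, keeping only the last three
  deploy_helper:
    path: "{{ project_root }}"
    keep_releases: 3
    state: clean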

     - include: prepare.yml
     - include: build.yml
     - include: share.yml
     - include: finalize.yml
    

Aside from changing the site pointer, every change I’ve made so far has been in roles/deploy.

I pretty much just ripped out everything related to the release-and-symlink process and replaced it with a single static directory full of symlinks back to /shared/source/site, which can now be updated with git pull.

From here, I’m pretty sure I no longer need to deploy via ansible to update plugins. I can just sync/rollback plugin updates with git. I’m sure this has some type of security implications that I’m not well read enough to be aware of, but it does work.

I’ll continue to develop on my dev box, and push DB updates from there. Then my production server can also just update its premium plugins+notify me when they autoupdate. Just need to make sure my dev files either check for git updates on directory mount, or just intermittently git pull in general.




@MikePadge I’m trying to integrate automatic Git commits from my remote server as well, although I’m doing it in an attempt to get Trellis to work with VersionPress.

I’ve come to the same general conclusion as you: I’m not going to use Ansible to deploy and will rely on Git instead, so I would like to have a single static directory that can be updated with git pull.

My question is: Can you share your files from the roles/deploy directory with the changes you made? That would be a big help and save me from having to go through the trial and error you did.

Thanks!

So, I was told this was actually a bad idea and that git shouldn’t be used in this manner. I kind of fell off dev for this and onto another project, and my script setup for what I did change is so embarrassingly bad and incomplete that it would probably cause you more problems than it solves. (And honestly, it’s been so long since I looked at it that I’m not really even sure what I changed anymore.)

If you do want to make those changes, what you’re looking for is the built-in Ansible module for deployments and rollbacks (deploy_helper). I ripped all of that out and just made sure it was pooting everything out into the same directory on top of my existing prod files, but committing to git before the overwrite.