Thanks for the input. I agree: in an ideal situation I'd be using Composer to manage everything.
It makes the most sense. But yeah, reality isn't so nice. It doesn't help that I'm not familiar with Composer at all, but I have it pegged as pretty analogous to npm or any other package manager (he says blindly, without having read anything about it). After banging my head against this for the past couple of days, I've arrived at this.
Roughly, as I understand it, the deploy process is as follows:
- include: initialize.yml
This file sets the project path to /srv/www/{{ sitename }} for the pre/post hooks and the main deploy playbook.
- include: update.yml
This file handles most, if not all, of the git processing. The clone project step broke on me here due to parity issues between git in production vs. dev (see: Deployment failure due to local modifications on production server).
I band-aided it by changing the clone project files step to:
- name: Check whether the project has already been cloned
  stat:
    path: "{{ project_source_path }}/.git"
  register: git_project

- name: Clone project files if they don't exist
  git:
    repo: "{{ project_git_repo }}"
    dest: "{{ project_source_path }}"
    version: "{{ project_version }}"
    accept_hostkey: "{{ repo_accept_hostkey | default(true) }}"
    force: true
  register: git_clone
  when: not git_project.stat.exists
(this was changed later)
I added the 'when' condition so it only clones the git repo if it doesn't already exist. I should probably have another step do a git pull if it does exist.
~ added
- name: Git pull
  command: git pull
  args:
    chdir: "{{ project_source_path }}"
  when: git_project.stat.exists
(need to make sure user www has the correct deploy keys)
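That deploy-key bit could live in the same playbook. A minimal sketch, assuming a `deploy_private_key` variable holds a read-only key and the www user's home is /home/www (both assumptions on my part):

```yaml
# Sketch: install a read-only deploy key for the www user so the
# clone/pull tasks above can authenticate. Paths and variable names
# are hypothetical.
- name: Ensure www user's .ssh directory exists
  file:
    path: /home/www/.ssh
    state: directory
    owner: www
    group: www
    mode: "0700"

- name: Install read-only deploy key for www
  copy:
    content: "{{ deploy_private_key }}"
    dest: /home/www/.ssh/id_ed25519
    owner: www
    group: www
    mode: "0600"
```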
Because no, I don't really want to be updating plugins on a production server. But if I'm relying on developers who aren't composing, and I am in fact updating plugins in a live environment, then I need to separate the mayonnaise repos from the composed-up peanut butter packages. Plus I need to keep all packages mirrored between the dev environment and production to stay sane. I'll do that with git, but back to package organization.
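For reference, that split is roughly what a composer.json could make explicit: public plugins from wpackagist, premium ones from a private VCS repo. A sketch only, with every package name and URL hypothetical:

```json
{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" },
    { "type": "vcs", "url": "git@gitlab.example.com:acme/premium-plugin.git" }
  ],
  "require": {
    "composer/installers": "^2.0",
    "wpackagist-plugin/akismet": "^5.0",
    "acme/premium-plugin": "^1.0"
  },
  "extra": {
    "installer-paths": {
      "web/app/plugins/{$name}/": ["type:wordpress-plugin"]
    }
  }
}
```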
I thought I could separate Composer packages from private 3rd-party packages by putting the packages I want auto-updated via license (with auto git commits) in the mu-plugins folder, but after reading the WP documentation for that directory, I don't even feel like dealing with the nuances of packaging there. Since they're already not composing, I can almost guarantee with 9000% probability that if I let 3rd-party packages try to auto-update in the mu-plugins directory, I'm gonna have a bad time.
It would probably be easier to just enable plugin updates in production and then restrict who can update plugins via user roles. (I'm not actually sure how granular WordPress is in that regard, but I feel like I should be able to say only these users are allowed to update plugins, which should help in the "oblivious users messing up Composer versioning" department.)
Either way, like I said, this is a bad idea. I can get away with it because the sites I'm going to test this on are pretty much completely static builds and not traffic-heavy at all. So I'll ask forgiveness on production plugin updates rather than permission.
# So at this point
Before even getting into the next couple of files, I've realized that I'm going to be the crazy person who completely throws Ansible's deploy_helper out the window. And I've really only got two, probably bad, reasons why.
- Isn't this what git is for? Can I not snapshot the DB and the site files, commit them to my git repo, and then roll back to those if necessary?
- I'm cheap, and with the deploy_helper my tiny EC2 instances fill up at 10 releases. So I either need to clean that up manually, or build something that zips those releases and exports them before nuking them, so they don't pile up. The deploy_helper already has a "keep releases" switch to drop releases after an arbitrary count. Still, I'm already paying for private git repos, with a GitLab project on the back burner, so I might as well use that for storage/backup instead of paying AWS for it. (That's a bit of my reasoning for these flagrantly bad practices; I feel like I'm apologizing while I keep banging on you guys' elegant glass building with a rock hammer.) Plus, it looks like the release and rollback roles only deal with site files anyway? So I really don't understand the reasoning behind using Ansible's deploy_helper module over just calling a git revert. I'm not too great in that department either, so that's more of a question than a statement.
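The snapshot-and-revert idea above can be demoed with plain git. This is a sketch in a throwaway directory; the DB export is just a placeholder file here, but on a real server it would come from something like `wp db export` or mysqldump (paths and names are made up):

```shell
# Two "releases" committed to git, then a rollback of the latest one.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email deploy@example.com
git config user.name deploy

echo "site v1" > index.php
echo "-- db dump for v1" > db-snapshot.sql   # stand-in for a real DB export
git add -A
git commit -qm "release 1"

echo "site v2" > index.php
echo "-- db dump for v2" > db-snapshot.sql
git add -A
git commit -qm "release 2"

# Rolling back: revert the latest release commit, which restores both
# the site files and the matching DB snapshot in one step.
git revert --no-edit HEAD >/dev/null
cat index.php
```

On the real box you'd then feed the restored db-snapshot.sql back into MySQL, which is the part deploy_helper never handled anyway.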
- include: prepare.yml
- include: build.yml
- include: share.yml
- include: finalize.yml
Aside from changing the site pointer, every change I've made so far lives in roles/deploy.
I pretty much ripped out the whole release-directory and symlinking process and replaced it with a single static directory full of symlinks back to /shared/source/site, which can now be updated with git pull.
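The layout I ended up with looks roughly like this (a sketch with assumed paths; a temp dir stands in for /srv/www/{{ sitename }}):

```shell
# One static "current" dir of symlinks pointing back into the shared
# git checkout, instead of deploy_helper's numbered release dirs.
project=$(mktemp -d)    # stands in for /srv/www/{{ sitename }}
mkdir -p "$project/shared/source/site/web"
echo "hello from git checkout" > "$project/shared/source/site/web/index.php"

mkdir -p "$project/current"
ln -sfn "$project/shared/source/site/web" "$project/current/web"

# A `git pull` inside shared/source/site now updates the live docroot
# in place; no new release dir, no re-linking.
cat "$project/current/web/index.php"
```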
From here, I'm pretty sure I no longer need to deploy via Ansible to update plugins; I can just sync/roll back plugin updates with git. I'm sure this has security implications I'm not well-read enough to be aware of, but it does work.
I'll continue to develop on my dev box and push DB updates from there. Then my production server can just update its premium plugins and notify me when they auto-update. I just need to make sure my dev files either check for git updates on directory mount, or just git pull intermittently.
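The intermittent pull could be scheduled from the same playbook. A sketch using Ansible's cron module, with the interval, user, and paths all assumptions on my part:

```yaml
# Sketch: pull every 15 minutes so the checkout tracks the repo.
# --ff-only bails out instead of creating merge commits if the
# working copy has drifted.
- name: Periodically git pull the site checkout
  cron:
    name: "git pull {{ sitename }}"
    user: www
    minute: "*/15"
    job: "cd {{ project_source_path }} && git pull --ff-only -q"
```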