Gitlab private repo as composer package

Following this guide, everything works well locally, but the auth.json I was required to create containing my generated Gitlab oauth token also needs to be on my remote server, otherwise trellis deploy fails at the composer step when composer can’t authenticate.

Committing auth.json to my repo and deploying works, but… is that the best/only way? Should I be committing a file containing my Gitlab oauth token? It’s a private repo, so it’s not a huge deal, but is there no better way?

You definitely should not be doing this. Once you’ve removed the auth.json from your repo, you should also delete or regenerate that token on Gitlab.

Generally if some remote service (i.e. a build process; a server; etc) needs an API key, an auth token, or some other sensitive information, a good way to deal with that is to set it as an environment variable in that environment, and then have whatever script you’re running that needs access to that key/token/etc pull it from the environment.
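For Composer specifically, one way to apply this idea is the `COMPOSER_AUTH` environment variable, which accepts the same JSON structure as an auth.json file. A minimal sketch (the token value is a placeholder):

```shell
# Composer reads COMPOSER_AUTH as an in-memory auth.json (token is a placeholder)
export COMPOSER_AUTH='{"gitlab-token": {"gitlab.com": "YOUR_TOKEN_HERE"}}'
# Sanity-check that the value is valid JSON before running `composer install`
echo "$COMPOSER_AUTH" | python3 -m json.tool > /dev/null && echo "valid JSON"
```

With that variable set in the deploy environment, `composer install` can authenticate without any auth.json on disk.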


I know there are some environment variables handled in the various vault.yml files in Trellis. Would I be able to set it in one of those? If so, how do I then make composer aware of that variable?

My goal is to have trellis deploy handle all plugins and themes, so I’m not left rsyncing all the leftovers after a deploy. Sage themes are no problem, but other themes and plugins are public plugins available on wpackagist, premium plugins that I’ve made available to composer via Satispress, and custom plugins that we house in private Gitlab repos. The wpackagist plugins are no problem, but the other two categories require authentication.

So I also use a private GitLab repo for the site and plugin repos.
I let composer access the GitLab repo using SSH and allowed the public key of the web server to pull from the GitLab repository (generate a keypair on the web server and copy + paste the public key into GitLab as authorized).

  "repositories": [
    {
      "url": "ssh://",
      "type": "vcs"
    }
  ],
  "require": {
    "own-company/my-private-plugin": "1.2.0"
  }
Note: If you have to use a separate SSH port for GitLab (e.g. GitLab in a Docker container),
you can add a Port option for the GitLab host in the web server’s ~/.ssh/config (for the web user).
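A sketch of such an entry in the web user’s ~/.ssh/config (host name, port, and key file are placeholders):

```
# /home/web/.ssh/config — hypothetical GitLab host and SSH port
Host gitlab.example.com
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
```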

Hmm, this makes sense!

But where do you put the server’s public key in Gitlab? In your account? Over time, assuming more websites, won’t you end up with a lot of public keys in your Gitlab account? When provisioning with Trellis, I set SSH keys in the users.yml with these lines:

      # - "{{ lookup('file', '~/.ssh/') }}"

Notice I actually don’t use the first line because, among me and my collaborators, we don’t all use the same filename for our public keys, and Trellis throws an error if it cannot find the file. So we just have it pull from our Gitlab public keys. Not to mention I don’t have their public keys on my machine anyway, so I have to provision this way if I want them to be able to work with the remote. If I add a remote server’s public key to my Gitlab account, all subsequent servers I provision will have the public keys of all preceding servers. I’m not sure if that’s wise from a security standpoint.

The great thing about PKI is that each member has their own private/public key pair.
The public key can be safely shared, so it isn’t actually a bad thing that the public keys of web servers - or rather, of hosts that are allowed to pull repo data from the private GitLab repo - accumulate. You can assign names to them and easily revoke permissions, reducing the likelihood of forgotten accounts.

The keys from Trellis you mentioned are the SSH keys that are allowed to log into the web server.
They have nothing to do with generating the web server’s own private key, with which key it should use to connect to the GitLab server, or with which keys GitLab should actually trust.

Right, but it will load all those public keys on the remote servers when I provision them. Then in the unlikely event that one of the servers becomes compromised, it would provide SSH access to all servers provisioned after it. Of course, that would require the attacker to know that those other servers exist. Is that not a problem?

I see how it would be fine for Gitlab though, since I could revoke access.

You can assign only certain permissions to each key (certain repository, read only).

You can even use SSH Agent Forwarding from your deploying system to allow the web servers to pull from GitLab using the workstation SSH authentication in a secure way.
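As a sketch (the host alias, address, and user are placeholders), agent forwarding can be enabled per host in your workstation’s ~/.ssh/config, or ad hoc with `ssh -A`:

```
# ~/.ssh/config on the deploying workstation — hypothetical host alias
Host my-web-server
    HostName 203.0.113.10
    User web
    ForwardAgent yes
```

With ForwardAgent enabled, `composer install` running on the server can authenticate to GitLab using your local keys, so no private key needs to live on the server.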

I’m more concerned with the remote servers’ possible connectivity with each other than I am their permissions to Gitlab. But the SSH forwarding sounds great! I’ll look into that and see if I can figure it out. I assume I will still need to list the repository in the composer file the way you demonstrated in your example? "url": "ssh://git@gitlab...",

Yes, you list it as an SSH connection to a git repository so composer is able to use it.

As a more complex alternative you can use a private composer repository.
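Satis is a common tool for building such a private Composer repository. A minimal satis.json sketch (the package name and URLs are placeholders):

```json
{
    "name": "own-company/satis-repo",
    "homepage": "https://packages.example.com",
    "repositories": [
        { "type": "vcs", "url": "git@gitlab.com:own-company/my-private-plugin.git" }
    ],
    "require-all": true
}
```

You would then point composer.json at the generated repository with a single `composer`-type entry instead of listing each VCS repo.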

Have you tried using a global auth.json that’s outside of your repo but still available to Composer? I do this myself, and have no issues deploying everything with Trellis (in my case to Kinsta).

My setup:

  • I create a personal access token (not an oauth token) in GitLab. I only give it the read_api scope.
  • Add it to composer with composer config --global gitlab-token.gitlab.com <token> (see the Composer and GitLab documentation)
  • composer.json contains vcs-type repositories with normal https:// URLs:

    {
      "type": "vcs",
      "url": ""
    }
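For reference, the `composer config --global` step above should end up writing a `gitlab-token` entry into your global auth.json, roughly like this (the token value is a placeholder):

```json
{
    "gitlab-token": {
        "gitlab.com": "YOUR_TOKEN_HERE"
    }
}
```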

Composer and GitLab also support deploy tokens if you’ve got an automated deploy job that only needs access to a single project or group. I haven’t experimented with those yet, but it’s the same principle.

I imagine the SSH forwarding way would work, too, but personally I like my method because it minimizes the permissions available at every step without having to manage a bunch of SSH keypairs. SSH agent forwarding brings along its own set of security concerns, since anything with root access on the forwarding server has access to your local ssh agent and can authenticate as you. If you’ve got all your server identities saved in the agent, that could be a significant concern.


Do you mean on the remote server /home/web/.composer/auth.json? Yes that is what I’m doing now after @alwaysblank suggested I get the auth.json out of my source control. That works, I was just hoping I could trellis provision and trellis deploy on a fresh server without having any additional steps in between. Based on what’s been covered in this thread, it seems like I have 3 options:

  • Pass the auth info to composer via an environment variable, which I’m not yet sure how to do. Assuming this could be done by modifying some trellis group_vars or deploy hooks, this might be my end goal since it wouldn’t require any additional steps on my part between provision and deploy.
  • After provisioning, generate key pairs on the remotes and add them to my Gitlab account before deploying
  • After provisioning, rsync my auth.json to the web user’s home folder before deploying. This is where I currently am, as it seems the simplest.
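For the third option, the copy step is a one-liner. A sketch with a placeholder host and user (the `echo` just prints the command so you can review it before running it for real):

```shell
# Hypothetical server host/user and paths; drop the `echo` to actually copy
SRC="$HOME/.composer/auth.json"
DEST="web@example.com:/home/web/.composer/auth.json"
echo rsync -av "$SRC" "$DEST"
```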

The solution I used for auth.json on deployed servers was to have Trellis generate it when deploying. Mostly this was to allow using Delicious Brains’ composer repo, but it could be easily extrapolated to basically anything.

# trellis/deploy-hooks/build-before.yml
- name: Create composer auth.json
  template:
    src: "{{ playbook_dir }}/deploy-hooks/auth.json.j2"
    dest: "{{ deploy_helper.new_release_path }}/auth.json"
    mode: "0600"

# trellis/group_vars/all/vault.yml
vault_deliciousbrains_user: 'a username'
vault_deliciousbrains_pass: 'a password'

# trellis/deploy-hooks/auth.json.j2
{
  "http-basic": {
    "composer.deliciousbrains.com": {
      "username": "{{ vault_deliciousbrains_user }}",
      "password": "{{ vault_deliciousbrains_pass }}"
    }
  }
}
Interestingly, no. I’m only using my computer’s ~/.composer/auth.json file — same as I do for the local development environment. Somehow Trellis picks that up and uses the same credentials to composer install on the server, I guess? I don’t understand Ansible enough to figure out how it manages that, but it works.

Obviously if you do want to run composer independently on the server, you’d need auth.json there. @alwaysblank’s solution looks really clever for that.

Interesting! My deploy failed on the composer step until I added auth.json to the server. But that could be because I’m not using normal https:// URLs. I will try to change that instead of using git URLs.

Wow… I think this was the kicker all along. I just removed the auth.json from the server but was still able to successfully deploy. Unless composer was using some sort of cache, it seems to work a lot better without git URLs.

After I deploy a few different sites, I’ll know for sure. Thanks a ton! I can’t mark both yours and @alwaysblank’s responses as solutions. Yours more directly (and simply) does what I was looking for, but I’ll definitely be putting @alwaysblank’s information to use for other reasons. That really helped my understanding of how the vault files work, since I don’t really know anything about ansible beyond the trellis documentation.


Awesome! Glad I could help :smile:

(Composer does use a global cache, btw. On my Mac it’s at ~/.composer/cache, with different subdirectories by type. I’ve never had any problems with it, but it’s good to know it exists!)

Wellll. Spoke too soon. Just tried deploying a different site, and composer failed to authenticate. Guess I’ll be trying alwaysblank’s method!

Trellis now supports setting up composer HTTP basic authentication for multiple repositories.

Rather than generating your own auth.json, you can set the basic auth credentials as:

# group_vars/<env>/vault.yml
composer_authentications:
  - { hostname: composer.deliciousbrains.com, username: your-deliciousbrains-username, password: your-deliciousbrains-password }

However, Gitlab/GitHub/BitBucket oauth tokens are not supported. PRs are welcome.


This topic was automatically closed after 42 days. New replies are no longer allowed.