App.css not used and hmr in production

I’ve been using Bedrock/Sage/Trellis for a while now, and lately I’m seeing some odd behavior with a Sage 10 site (10.2.0).

When running a build and/or deploying, it looks like the app.css file isn’t actually getting pulled in as expected. The styles are applied, but they appear to be injected via bud/HMR instead.

I’m looking for some confirmation about whether this is the expected behavior after a production build/deploy, and where exactly the enqueuing of the app.css file is handled.

I’m also seeing these 404s repeatedly in my console after my production deploy, so I assume something must just be wildly misconfigured on my end.


What do the site’s (Trellis) deploy hooks look like? Is the normal build script run, and are the artifacts rsynced to the server? Current Trellis ships example deploy hooks for Sage 10 (albeit commented out).
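For reference, the hook should look something like the sketch below, adapted from memory of the commented-out example in trellis/deploy-hooks/build-before.yml. Treat the theme path (web/app/themes/sage) and variable names as assumptions to verify against your Trellis version:

```yaml
# Sketch of a Sage 10 build-before.yml deploy hook.
# Paths and variable names are assumptions; compare against the
# commented-out example shipped with your Trellis version.
- name: Install npm dependencies
  command: yarn
  delegate_to: localhost
  args:
    chdir: "{{ project_local_path }}/web/app/themes/sage"

- name: Compile assets for production
  command: yarn build
  delegate_to: localhost
  args:
    chdir: "{{ project_local_path }}/web/app/themes/sage"

- name: Copy compiled assets to the new release
  synchronize:
    src: "{{ project_local_path }}/web/app/themes/sage/public"
    dest: "{{ deploy_helper.new_release_path }}/web/app/themes/sage"
```

The point is that the production build runs locally (or on CI) and only the compiled `public` directory is synced into the new release.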

I assume this is on production. What environment type (development, staging, or production) does the Bedrock site actually see?

Does the (Trellis) deployment succeed? Is the working directory from which the build happens (this should be on a workstation or CI server) up to date, and is the VCS state correct?

Sorry for the delay in responding here - holidays and all.

It’s pretty much all standard Trellis and Sage stuff. The build succeeds, WP_ENV is set to “production”, and the public directory has all the appropriate files with the cache-busting strings appended to the asset names. It just appears that they’re all inserted via bud/HMR, and no app.css file is actually getting loaded.

So bud checks during the build whether HMR should be included or not.

Is bud.isProduction true when you build the site for deployment on production?

Where would this be getting set? Prior to Sage 10/bud, we would just specify something like `yarn build:production` in the build-before.yml file itself, so I’m curious where I should expect to see this set.

Yes, `yarn build` should be enough for a production build.
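For context, the build mode comes from the theme’s package.json scripts rather than from anything in Trellis. A sketch of the relevant part in Sage 10 (verify against your own package.json, since the exact scripts vary by version):

```json
{
  "scripts": {
    "dev": "bud dev",
    "build": "bud build"
  }
}
```

`bud dev` starts the dev server with HMR, while `bud build` compiles in production mode by default, which is why plain `yarn build` suffices.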

When you inspect the built code, do you find any hmr occurrences?

Yep, tons. App.js is mostly the webpack/bud stuff with lots of React checks, etc.

I’ll also see the following in my console:

```
Failed to load resource: the server responded with a status of 404 () from /__bud/hmr:1
```

```
TypeError: Failed to construct 'URL': Invalid URL
    at Object.<anonymous> (proxy-click-interceptor.cjs:14:1)
    at (<anonymous>)
    at fulfilled (proxy-click-interceptor.cjs:5:1)
(anonymous) @ proxy-click-interceptor.cjs:17
```

And then, repeatedly:

```
GET 404
```

When you run `yarn build` on the workstation, in the folder from which the site would be deployed, is this hmr-related code generated inside the build files (the ones that would be rsynced by Trellis)?

My issue seems to be resolved now. I started recreating a handful of files from what is in current Sage and Trellis, and things have sorted themselves out.

I’m going to undo a few things one by one later to try to identify where this was going wrong so I can document it here, but I appear to be in the clear at the moment.

Thanks for the help thus far @strarsis

So this wound up being related to a minor syntax error in one of my SCSS files, but it led me to something larger that I hadn’t realized.

bud wanted an `!optional` added to an `@extend` I had implemented, which was no big deal. I didn’t see it locally because `yarn dev` was able to work through it with HMR: the single error was buried above a bunch of green successes that came afterwards (other compiled assets), and locally all my styles still surfaced.
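For anyone hitting the same error, the fix looks like this (`.message` is just a placeholder selector for illustration):

```scss
// Without !optional, @extend is a hard compile error if the
// target selector doesn't exist in the compiled output:
.alert {
  @extend .message;
}

// With !optional, a missing target silently becomes a no-op:
.alert {
  @extend .message !optional;
}
```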

The builds run during the deployment hit the error, but it doesn’t fail the build task. Instead, the error just stops the build in its tracks, so the build sort of goes through while a bunch of the assets aren’t created. The existing contents of /public (largely left over from my last `yarn dev`) were still picked up and synced to the production server, and since those were at least partially the products of a dev build, that’s where the HMR stuff came from.

My initial expectations admittedly come largely from how previous Sage versions worked, where running a build would wipe the public (or dist) directory, but would also outright fail if there were blocking errors.

I sort of feel it would be good for a build to wipe the existing public directory as a first step, so it can’t end up being a mash of two different builds, but there could be some justification for it working as it does.
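In the meantime, one low-tech way to get that behavior (assuming `public/` is your output directory) is to remove it in the build script itself:

```json
{
  "scripts": {
    "build": "rm -rf public && bud build"
  }
}
```

That way a stale dev build can’t leave old assets behind for the deploy to sync.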

I’m going to mark this closed because I at least understand what happened now and got it fixed, but I’ve opened a separate thread asking whether this is all anticipated behavior, and whether anyone has good modifications to the existing build and deploy tasks to prevent scenarios like this from resulting in deployed code on production environments.