Roots Discourse

FastCGI PHP Fatal Error Memory Exhausted

We are experiencing some memory issues and can’t identify what is going on. It doesn’t happen every time a page loads, but it happens very frequently (roughly every other request). The error we are seeing in /srv/www/www.woodgrain.com/logs/error.log is:

FastCGI sent in stderr: “PHP message: PHP Fatal error: Allowed memory size of 100663296 bytes exhausted (tried to allocate 20480 bytes) in /srv/www/www.woodgrain.com/releases/20190423055327/web/wp/wp-includes/meta.php on line 846” while reading response header from upstream, client: 69.92.17.106, server: www.woodgrain.com, request: “GET /autodiscover/autodiscover.xml HTTP/1.1”, upstream: “fastcgi://unix:/var/run/php-fpm-wordpress.sock:”, host: “www.woodgrain.com”
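For scale: 100663296 bytes is exactly 96 MB, and the allocation that failed is only 20 KB. That pattern usually means memory was consumed gradually over the request until the limit was hit, not that one giant allocation blew past it:

```python
limit = 100_663_296    # "Allowed memory size" from the error, in bytes
failed_alloc = 20_480  # the allocation that tipped it over, in bytes

print(limit // 1024 ** 2)    # memory_limit in MB -> 96, i.e. memory_limit = 96M
print(failed_alloc // 1024)  # failed allocation in KB -> 20, a tiny request
```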

We have tried adjusting the FastCGI buffers to 8 32k, and that hasn’t made a difference.

Thanks!

Have you changed or updated anything recently? What’s at line 846 of wp/wp-includes/meta.php? (I tried to look it up, but I don’t know which version of WP you’re running.)

I hadn’t, but your mentioning the WP version reminded me there was an update available… unfortunately we are getting the same error after updating. The code around that line in wp/wp-includes/meta.php is as follows:

if ( ! empty( $meta_list ) ) {
	foreach ( $meta_list as $metarow ) {
		$mpid = intval( $metarow[ $column ] );
		$mkey = $metarow['meta_key'];
		$mval = $metarow['meta_value'];

		// Force subkeys to be array type:
		if ( ! isset( $cache[ $mpid ] ) || ! is_array( $cache[ $mpid ] ) ) {
			$cache[ $mpid ] = array();
		}
		if ( ! isset( $cache[ $mpid ][ $mkey ] ) || ! is_array( $cache[ $mpid ][ $mkey ] ) ) {
			$cache[ $mpid ][ $mkey ] = array();
		}

		// Add a value to the current pid/key:
		$cache[ $mpid ][ $mkey ][] = $mval;
	}
}

Based on that, my uneducated guess is that you have a huge number of rows of whatever type of content this is iterating over, and the loop is accumulating all of it in memory. If you can duplicate your production site locally, I’d start sticking some var_dump($var); die(); calls in meta.php to see what kind of data it’s processing and at what scale. Judging solely by the variable names in the snippet, it’s iterating over meta fields, so maybe you’ve got something that’s generating a huge number of meta fields on some object?
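For context, that snippet is WordPress filling its in-memory meta cache: every row fetched from the meta table stays resident in a nested array, so memory use grows linearly with the number of meta rows. A rough Python sketch of the same shape (hypothetical rows, just to illustrate the structure being built):

```python
from collections import defaultdict

# Hypothetical meta rows, shaped like the $meta_list the PHP loop walks:
# an object id column, a meta_key, and a meta_value per row.
meta_list = [
    {"post_id": 1, "meta_key": "color", "meta_value": "red"},
    {"post_id": 1, "meta_key": "color", "meta_value": "blue"},
    {"post_id": 2, "meta_key": "size", "meta_value": "xl"},
]

# Same structure the PHP builds: cache[pid][key] -> list of values.
cache = defaultdict(lambda: defaultdict(list))
for row in meta_list:
    cache[row["post_id"]][row["meta_key"]].append(row["meta_value"])

print(dict(cache[1]))  # {'color': ['red', 'blue']}
```

Since nothing is discarded inside the loop, tens of thousands of meta rows on a single object translate directly into tens of thousands of resident array entries, which is consistent with gradual memory exhaustion.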

Just the buffers, or the buffer size too? I needed this on a recent build:

nginx_fastcgi_buffers: 8 16k
nginx_fastcgi_buffer_size: 16k

@benword I put the following inside group_vars/main.yml and re-provisioned with nginx tags. Should I have put them somewhere else?

nginx_fastcgi_buffers: 8 32k
nginx_fastcgi_buffer_size: 32k 

Still getting the same errors.

What about updating some PHP settings?

php_max_execution_time: 300
php_max_input_time: 300
php_memory_limit: 256M
php_max_input_vars: 2000
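For reference, these PHP overrides go in the same group_vars file as the NGINX ones. A sketch, assuming a standard Trellis layout (your environment directory may differ):

```yaml
# group_vars/production/main.yml (or group_vars/all/main.yml for all environments)
php_memory_limit: 256M
php_max_execution_time: 300
php_max_input_time: 300
php_max_input_vars: 2000
```

After editing, re-provision so the new values reach php.ini — the same way you re-ran the nginx tag, but targeting the PHP role’s tag instead.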

That’s correct :+1:


@benword I think updating the PHP settings did the trick, although I suspect it was a combination of the PHP settings and the increase to the FastCGI buffers. I’ll keep an eye on it to make sure it doesn’t come back.

Did you restart NGINX as well after making changes to the conf file? I believe the PHP changes take effect without a restart, but NGINX needs to be reloaded.

Yes I did… Everything is working now.

This topic was automatically closed after 42 days. New replies are no longer allowed.