[wp-trac] [WordPress Trac] #62057: Addressing Autosave Memory Issues
WordPress Trac
noreply at wordpress.org
Sat Apr 26 14:29:29 UTC 2025
#62057: Addressing Autosave Memory Issues
-------------------------------------------+-----------------------------
Reporter: whyisjake | Owner: (none)
Type: defect (bug) | Status: assigned
Priority: normal | Milestone: Future Release
Component: Editor | Version:
Severity: normal | Resolution:
Keywords: has-patch dev-reviewed commit | Focuses:
-------------------------------------------+-----------------------------
Comment (by tallulahhh):
Back again with a big discovery on this:
- Gutenberg only ever calls the autosaves route for one autosave at a
time, whether GET or POST. There's a singular getAutosave and a plural
getAutosaves in Gutenberg – and only the singular one is ever used.
This means the autosave route and all Gutenberg’s interactions with it
work fine when it only returns one result – and the memory consumption
issue stemming from it returning all autosaves per-post by default might
just be an accident after all, since Gutenberg never needs any more than
one autosave.
As such, **the autosave route only ever needs to return one autosave**.
I figured out a way to do this with filters: limiting the autosave route
payload to a single autosave can un-break a site whose editor would
otherwise hit a fatal OOM on a post with a huge autosaves response.
---
== Replication
To make testing easy, I made a mu-plugin that autosaves frequently with a
different user ID each time. This makes it trivial to create a huge
autosave payload quickly:
{{{#!php
<?php
// Assign a different pseudo-author to each autosave so every save
// creates a new autosave revision instead of updating the existing one.
function custom_autosave_dynamic_author( $data, $postarr ) {
    // Only apply to autosaves.
    if ( defined( 'DOING_AUTOSAVE' ) && DOING_AUTOSAVE ) {
        $timestamp            = time();
        $pseudo_user_id       = 100000 + ( $timestamp % 1000 );
        $data['post_author']  = $pseudo_user_id;
    }
    return $data;
}
add_filter( 'wp_insert_post_data', 'custom_autosave_dynamic_author', 10, 2 );

// Autosave every second instead of the default 60 seconds.
define( 'AUTOSAVE_INTERVAL', 1 );
}}}
With this active, you can start with a fresh test env, edit **hello-
world**, then type (or paste stuff) into it to create lots of autosaves
from different users at a rate of one per second. After a while of doing
this, the autosaves route's response will be gigantic, and on editor load,
fetching the autosaves route as part of the preloads will kill the editor
with a fatal OOM.
At this point, taking the autosaves route out of the preload_paths can
resolve the OOM as previously discussed (as well as making the autosave
requests easy to inspect payload-wise when using the editor).
But the autosaves route payload is still wasteful for GET requests, and
with enough autosaves it will OOM all on its own - which has recently been
observed in the wild for particularly massive posts with many editor
touches.
---
== Singular autosave filter
Unfortunately `per_page` doesn't work for the autosaves route, so rather
than changing the request in Gutenberg, I had to change the response
instead.
Since I've been working on this from an "unbreak the site, don't hack
core" perspective for live sites, I made the route return a single
autosave for GET requests like this:
{{{#!php
<?php
// Singular autosave route for wp-json.
add_action( 'rest_api_init', function() {
    register_rest_route( 'wp/v2', '/posts/(?P<id>\d+)/autosave', array(
        'methods'             => WP_REST_Server::READABLE,
        'callback'            => 'get_latest_autosave',
        'permission_callback' => 'autosave_permission_check',
        'args'                => array(
            'id' => array(
                'description' => __( 'Unique identifier for the post.' ),
                'type'        => 'integer',
            ),
        ),
    ) );
} );

function autosave_permission_check( WP_REST_Request $request ) {
    $post_id = (int) $request['id'];
    if ( ! current_user_can( 'edit_post', $post_id ) ) {
        return new WP_Error(
            'rest_cannot_view',
            __( 'Sorry, you are not allowed to view autosaves for this post.' ),
            array( 'status' => rest_authorization_required_code() )
        );
    }
    return true;
}

function get_latest_autosave( WP_REST_Request $request ) {
    $post_id = (int) $request['id'];
    $post    = get_post( $post_id );
    if ( ! $post ) {
        return new WP_Error( 'rest_post_invalid', __( 'Invalid post ID.' ), array( 'status' => 404 ) );
    }

    // wp_get_post_autosave() returns the latest autosave WP_Post object if one exists.
    $autosave = wp_get_post_autosave( $post_id );
    if ( ! $autosave ) {
        // No autosave found; return an empty array.
        return rest_ensure_response( array() );
    }

    // Use the existing autosaves controller to prepare the autosave data.
    $controller = new WP_REST_Autosaves_Controller( 'post' );
    $response   = $controller->prepare_item_for_response( $autosave, $request );
    return rest_ensure_response( $response );
}
}}}
Applying this to the previous replication step un-breaks the editor for
the affected post and allows normal operation.
This has been enough to un-break sites with big autosave payloads, and
though it would obviously take a different shape in a core patch, I wanted
to share how we've been doing it filter-style for affected envs.
Removing the route from preload_paths can also help avoid OOM for sites
whose singular autosave payload is huge all on its own.
Of course a Core patch would look very different, but this filter and
replication method should be enough to show that Core's autosaves response
to GET requests needs to be singular-autosave only. Perhaps a second route
or an `?all` arg could serve Gutenberg's getAutosaves (plural) method.
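As a rough sketch of what that `?all` shape could look like (everything
here is hypothetical – the helper name and its hook point are not part of
any actual Core patch):
{{{#!php
<?php
// Hypothetical sketch only: default the autosaves collection to the
// newest autosave, returning the full list only when ?all=1 is passed.
function maybe_limit_autosaves( $autosaves, WP_REST_Request $request ) {
    if ( $request->get_param( 'all' ) ) {
        return $autosaves; // Plural behaviour for getAutosaves().
    }
    // Singular default: keep only the most recent autosave.
    return array_slice( $autosaves, 0, 1 );
}
}}}
That keeps the existing route shape intact while making the expensive
plural response opt-in.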
I'll be away from this a while, but hopefully that helps move things along
:)
--
Ticket URL: <https://core.trac.wordpress.org/ticket/62057#comment:16>
WordPress Trac <https://core.trac.wordpress.org/>
WordPress publishing platform