[wp-trac] [WordPress Trac] #64865: AI Client: Add agentic loop support for auto-resolving abilities

WordPress Trac noreply at wordpress.org
Tue Mar 17 07:48:05 UTC 2026


#64865: AI Client: Add agentic loop support for auto-resolving abilities
-------------------------+-----------------------------
 Reporter:  gziolo       |       Owner:  (none)
     Type:  enhancement  |      Status:  new
 Priority:  normal       |   Milestone:  Future Release
Component:  AI           |     Version:  trunk
 Severity:  normal       |  Resolution:
 Keywords:               |     Focuses:
-------------------------+-----------------------------

Comment (by satollo):

 An alternative is to let the developer execute the abilities themselves,
 and then push the Message resulting from the execution back into the
 conversation. I was not able to find a suitable method for this (other
 than changing the visibility of the {{{$messages}}} property in
 PromptBuilder to public).

 What partially worked in my tests:

 * Create the prompt builder with {{{$prompt =
 wp_ai_client_prompt($starting_user_text_prompt)}}}
 * Add selected abilities {{{$prompt->using_abilities(...$abilities)}}}
 * Call {{{$result = $prompt->generate_text_result()}}}
 * Create an instance of {{{$fr = new
 WP_AI_Client_Ability_Function_Resolver(...$abilities)}}} providing the
 same abilities added to the prompt builder
 * Check whether the generated {{{$result}}} contains a function call:
 {{{$fr->has_ability_calls($result->toMessage())}}}
 * If true, execute them {{{$fc_result =
 $fr->execute_abilities($result->toMessage()) }}}
 * Now, with the modified PromptBuilder, add the two messages to the
 {{{$messages}}} array: {{{$prompt->builder->messages[] =
 $result->toMessage()}}} (this is the one with the function-call request)
 and {{{$prompt->builder->messages[] = $fc_result->toMessage()}}}

 The next "generate" call will return the answer from the LLM using the
 results of the function call.
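 Put together, the steps above look roughly like this. This is only a
 sketch based on the method names mentioned in this comment, and it
 assumes the modified PromptBuilder with a public {{{$messages}}} property,
 so it will not run against the current code as-is:

 {{{
 #!php
 <?php
 // Build the prompt and attach the abilities the LLM may call.
 $prompt = wp_ai_client_prompt( $starting_user_text_prompt );
 $prompt->using_abilities( ...$abilities );

 // First round trip: the model may answer directly or request a function call.
 $result = $prompt->generate_text_result();

 // Resolver configured with the same abilities given to the prompt builder.
 $fr = new WP_AI_Client_Ability_Function_Resolver( ...$abilities );

 if ( $fr->has_ability_calls( $result->toMessage() ) ) {
     // Execute the requested abilities locally.
     $fc_result = $fr->execute_abilities( $result->toMessage() );

     // Push both messages back into the history (this is the part that
     // requires the modified PromptBuilder with public $messages).
     $prompt->builder->messages[] = $result->toMessage();    // function-call request
     $prompt->builder->messages[] = $fc_result->toMessage(); // function results

     // Second round trip: the model answers using the function results.
     $result = $prompt->generate_text_result();
 }
 }}}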

 For example, asking for "site info", we get a table of all the values
 returned by the ability "Get Site Information".

 Here are a couple of observations; if useful, they could be part of
 another ticket.

 Having access to {{{$messages}}}, I can serialize them to a file.
 When the user, after getting the answer to the first request, asks, for
 example, "what is the site name", the steps are repeated, but when
 building the {{{$prompt}}} I can unserialize the messages and use
 {{{with_history(...)}}}. The LLM then does not call the function again and
 extracts the information from the previous messages.
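 As a sketch (again assuming the method names used above; the history file
 path and PHP's native {{{serialize()}}}/{{{unserialize()}}} are just the
 simplest persistence I could think of, not a recommendation):

 {{{
 #!php
 <?php
 // After the first exchange, persist the conversation.
 file_put_contents( $history_file, serialize( $prompt->builder->messages ) );

 // On the follow-up request ("what is the site name"), restore it and
 // feed it back to a fresh prompt builder.
 $messages = unserialize( file_get_contents( $history_file ) );

 $prompt = wp_ai_client_prompt( $followup_user_text_prompt );
 $prompt->with_history( ...$messages );
 $prompt->using_abilities( ...$abilities );

 $result = $prompt->generate_text_result();
 }}}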

 I don't know if that is the formally correct sequence; it is the one I
 used with Neuron AI.

 A final note: after the first run, when I get the site information that
 the LLM built from the function result, if I push that message back into
 the message history (it is actually part of the conversation), I get an
 error: Bad Request (400) - Invalid value: 'output_text'.

 The output_text value is set by the OpenAI provider plugin when the
 message has a Model role. Changing it to input_text lets the conversation
 with the LLM continue, but I stopped there.

-- 
Ticket URL: <https://core.trac.wordpress.org/ticket/64865#comment:2>
WordPress Trac <https://core.trac.wordpress.org/>
WordPress publishing platform


More information about the wp-trac mailing list