
SendAgentInstanceTTS

POST

https://aigc-aiagent-api.zegotech.cn/

Call this interface to actively invoke the TTS service and send voice messages in the identity of the AI agent. For details and examples, see AI proactively speaking: proactive invocation of LLM and TTS.

Request

Query Parameters

    Action stringrequired

    Possible values: [SendAgentInstanceTTS]

    API Prototype Parameter

    https://aigc-aiagent-api.zegotech.cn?Action=SendAgentInstanceTTS

    AppId uint32required

    💡Public parameter. Application ID, assigned by ZEGOCLOUD. Get it from the ZEGOCLOUD Admin Console.

    SignatureNonce stringrequired

    💡Public parameter. A 16-character hexadecimal random string (the hex encoding of an 8-byte random number). Refer to the Signature sample code for how to generate it.

    Timestamp int64required

    💡Public parameter. Current Unix timestamp, in seconds, with a maximum allowed error of 10 minutes. Refer to the Signature sample code for how to generate it.

    Signature stringrequired

    💡Public parameter. Signature, used to verify the legitimacy of the request. Refer to Signing the requests for how to generate an API request signature.

    SignatureVersion stringrequired

    Possible values: [2.0]

    Default value: 2.0

    💡Public parameter. Signature version number.
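
The public parameters above can be assembled programmatically. Below is a minimal Python sketch, assuming ZEGOCLOUD's SignatureVersion 2.0 scheme (an MD5 hash over AppId + SignatureNonce + ServerSecret + Timestamp); treat the Signing the requests guide as the authoritative reference.

```python
import hashlib
import os
import time

def make_signature(app_id: int, server_secret: str) -> dict:
    """Build the public query parameters for the ZEGOCLOUD server API.

    Sketch based on the SignatureVersion 2.0 scheme:
    Signature = md5(AppId + SignatureNonce + ServerSecret + Timestamp).
    Verify against the "Signing the requests" guide before use.
    """
    nonce = os.urandom(8).hex()        # 16-character hexadecimal random string
    timestamp = int(time.time())       # current Unix timestamp, in seconds
    raw = f"{app_id}{nonce}{server_secret}{timestamp}"
    signature = hashlib.md5(raw.encode("utf-8")).hexdigest()
    return {
        "AppId": app_id,
        "SignatureNonce": nonce,
        "Timestamp": timestamp,
        "Signature": signature,
        "SignatureVersion": "2.0",
    }
```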

Body required

    AgentInstanceId stringrequired

    The unique identifier of the AI agent instance, obtained through the response parameters of the Create AI Agent Instance interface.

    Text stringrequired

    Possible values: <= 300 characters

    The text content used for TTS, with a maximum of 300 characters.

    AddHistory boolean

    Default value: true

    Whether to record the text message in the conversation message history as input to the LLM.

    InterruptMode integerdeprecated

    Default value: 0

    (Deprecated) Use the Priority and SamePriorityOption parameters instead. The interruption mode while the AI agent is speaking:

    • 0: Interrupt immediately (default). If the user speaks while the AI is speaking, the AI is interrupted at once and stops speaking. Use Priority=Medium and SamePriorityOption=ClearAndInterrupt instead.
    • 1: Do not interrupt. If the user speaks while the AI is speaking, the AI is not affected until it finishes speaking. Use Priority=High and SamePriorityOption=ClearAndInterrupt instead.

    Priority string

    Possible values: [Low, Medium, High]

    Default value: Medium

    Task priority. The default value is Medium.

    SamePriorityOption string

    Possible values: [ClearAndInterrupt, Enqueue]

    Default value: ClearAndInterrupt

    The interruption strategy when a task of the same priority arrives. The default value is ClearAndInterrupt. Optional values:

    1. ClearAndInterrupt: clear the queue and interrupt the current speech
    2. Enqueue: queue and wait; at most 5 tasks can be queued
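
For illustration, a hedged Python sketch of a complete SendAgentInstanceTTS call is shown below. It reuses the make_signature helper from the signature sketch above; the AppId, ServerSecret, and AgentInstanceId values are placeholders to replace with your own.

```python
import requests

BASE_URL = "https://aigc-aiagent-api.zegotech.cn/"

# Public query parameters (see the signature sketch above); values are placeholders.
params = make_signature(app_id=1234567890, server_secret="<your_server_secret>")
params["Action"] = "SendAgentInstanceTTS"

body = {
    "AgentInstanceId": "<agent_instance_id>",  # returned by the Create AI Agent Instance interface
    "Text": "Hello, this message is spoken by the AI agent.",  # at most 300 characters
    "AddHistory": True,                        # also record the text as LLM input
    "Priority": "Medium",
    "SamePriorityOption": "ClearAndInterrupt",
}

resp = requests.post(BASE_URL, params=params, json=body)
result = resp.json()
print(result["Code"], result["Message"], result["RequestId"])
```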

Responses

Success
Schema
    Code integer

    Return code. 0 indicates success, other values indicate failure. For more information on error codes and response handling recommendations, please refer to Return Codes.

    Message string

    Explanation of the request result

    RequestId string

    Request ID
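
An illustrative success response body (field values are examples only):

```json
{
    "Code": 0,
    "Message": "success",
    "RequestId": "<request_id>"
}
```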
