UpdateAgent
https://aigc-aiagent-api.zegotech.cn/
By calling this API, you can update an existing AI agent.
📌 Note: Only the parameters included in the request take effect; omitted parameters are left unchanged.
Request
Query Parameters
Action: Possible values: [UpdateAgent]
API prototype: https://aigc-aiagent-api.zegotech.cn?Action=UpdateAgent
AppId: The unique Application ID assigned to your project by ZEGOCLOUD. Get it from the ZEGOCLOUD Admin Console.
SignatureNonce: A random string.
Timestamp: Unix timestamp, in seconds. The maximum allowed clock skew is 10 minutes.
Signature: Used to verify the legitimacy of the request. Refer to Signing the requests for how to generate an API request signature.
SignatureVersion: Possible values: [2.0]. Signature version number; the default value is 2.0.
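The common query parameters above can be assembled as shown below. This is a minimal sketch: the signature construction (md5 of AppId + SignatureNonce + ServerSecret + Timestamp) is an assumption and must be verified against the Signing the requests guide before use.

```python
import hashlib
import secrets
import time
from urllib.parse import urlencode

BASE_URL = "https://aigc-aiagent-api.zegotech.cn/"

def build_request_url(app_id: int, server_secret: str, action: str = "UpdateAgent") -> str:
    """Assemble the common query parameters for a server API call.

    The signature scheme below is an assumption; confirm it against the
    "Signing the requests" guide.
    """
    nonce = secrets.token_hex(8)   # SignatureNonce: random string
    timestamp = int(time.time())   # Timestamp: Unix time in seconds
    raw = f"{app_id}{nonce}{server_secret}{timestamp}"
    signature = hashlib.md5(raw.encode("utf-8")).hexdigest()
    query = urlencode({
        "Action": action,
        "AppId": app_id,
        "SignatureNonce": nonce,
        "Timestamp": timestamp,
        "Signature": signature,
        "SignatureVersion": "2.0",
    })
    return f"{BASE_URL}?{query}"

# Placeholder credentials for illustration only.
url = build_request_url(1234567890, "fake_server_secret")
```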
- application/json
Body (required)
- MiniMax:https://api.minimax.chat/v1/text/chatcompletion_v2
- Volcano Engine (Doubao): https://ark.cn-beijing.volces.com/api/v3/chat/completions
- Aliyun Bailei (Tongyi Qianwen): https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
- Stepfun: https://api.stepfun.com/v1/chat/completions
If ApiKey is set to "zego_test", the Model must be one of:
- MiniMax: MiniMax-Text-01
- Volcano Engine (Doubao): doubao-1-5-pro-32k-250115 or doubao-1-5-lite-32k-250115
- Aliyun Bailian (Tongyi Qianwen): qwen-plus
- Stepfun: step-2-16k
The agent_info object contains:
- room_id: the RTC room ID
- user_id: the user ID
- agent_instance_id: the AI agent instance ID
The unique identifier of the registered AI agent.
Possible values: <= 64 characters
The name of the AI agent.
LLM object
The endpoint that receives the request (either your own service or any LLM service provider's service); it must be compatible with the OpenAI Chat Completions API.
For example: https://api.openai.com/v1/chat/completions
📌 Important Note
If ApiKey is set to "zego_test", you must use one of the Url addresses listed at the beginning of this Body section.
The parameter used for authentication by the LLM service provider. It is empty by default, but must be provided in production environments.
📌 Important Note
During the test period (within 2 weeks after the AI Agent service is enabled), you can set this parameter value to "zego_test" to use this service.
The LLM model. Different LLM service providers support different models, please refer to their official documentation to select the appropriate model.
📌 Important Note
If ApiKey is set to "zego_test", you must use one of the models listed at the beginning of this Body section.
The system prompt of the AI agent: predefined information prepended to every LLM call to steer its output. It can include role settings, instructions, and example answers.
Possible values: >= 0 and <= 2
Default value: 0.7
Higher values make the output more random; lower values make it more focused and deterministic.
Possible values: >= 0 and <= 1
Default value: 0.9
The sampling method. Smaller values make the output more deterministic; larger values make it more random.
Other parameters supported by the LLM service provider, such as the maximum token limit. Different LLM providers support different parameters; refer to their official documentation and fill these in as needed.
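Putting the fields above together, an LLM object might look like the sketch below. The field casing and the extra key inside Params are assumptions based on the descriptions in this section; confirm them against the full request schema.

```python
# Sketch of the LLM object for an UpdateAgent request body.
# Field names/casing are assumptions inferred from the parameter
# descriptions above; verify against the actual API schema.
llm_config = {
    "Url": "https://api.minimax.chat/v1/text/chatcompletion_v2",
    "ApiKey": "zego_test",          # test-period key; use a real key in production
    "Model": "MiniMax-Text-01",     # with "zego_test", Model must match the Url's provider
    "SystemPrompt": "You are a friendly in-room assistant. Answer briefly.",
    "Temperature": 0.7,             # 0..2, higher = more random
    "TopP": 0.9,                    # 0..1, smaller = more deterministic
    "Params": {"max_tokens": 512},  # provider-specific extras (hypothetical key)
}
```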
Default value: false
If this value is true, the AI Agent server includes the AI agent information (agent_info) in the request parameters when calling the LLM service. You can use this to run additional business logic in your custom LLM service.
The agent_info structure is described above (room_id, user_id, agent_instance_id).
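A custom LLM service could read the agent info like the hypothetical handler below. The exact key name and placement of agent_info inside the request payload are assumptions here.

```python
def handle_llm_request(payload: dict) -> dict:
    """Hypothetical handler for a custom LLM endpoint.

    Assumes the AI Agent server places the agent info under an
    "agent_info" key in the JSON body; verify against real traffic.
    """
    agent_info = payload.get("agent_info", {})
    # Business logic keyed on the RTC room / user / agent instance goes here.
    return {
        "room_id": agent_info.get("room_id"),
        "user_id": agent_info.get("user_id"),
        "agent_instance_id": agent_info.get("agent_instance_id"),
    }

info = handle_llm_request({
    "messages": [{"role": "user", "content": "hi"}],
    "agent_info": {
        "room_id": "room_1",
        "user_id": "user_1",
        "agent_instance_id": "inst_1",
    },
})
```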
TTS object
Possible values: [Aliyun, ByteDance, ByteDanceFlowing, MiniMax, CosyVoice]
The TTS service provider. Please refer to Configuring TTS > TTS Parameters for details.
Params object (required)
Used for TTS service authentication. The structure of the app parameter differs across Vendor values; refer to Configuring TTS > Params Parameters for details.
📌 Important Note
other_params is not a real parameter; it only illustrates how vendor parameters are passed. Except for the app parameter, all other parameters are passed directly through to the vendor. Please refer to Configuring TTS > Params Parameters for details.
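The pass-through rule above can be illustrated with a small sketch. The keys inside "app" and the pass-through keys shown are hypothetical vendor parameters, not documented values.

```python
# Sketch of a TTS Params object. The "app" object carries vendor
# authentication (its fields differ per Vendor; the ones here are
# placeholders). Every other key is passed through to the vendor unchanged.
tts_params = {
    "app": {
        "app_id": "your_vendor_app_id",   # placeholder credential
        "token": "your_vendor_token",     # placeholder credential
    },
    "speed_ratio": 1.0,    # hypothetical pass-through vendor parameter
    "volume_ratio": 1.0,   # hypothetical pass-through vendor parameter
}

# Everything except "app" goes to the vendor as-is.
vendor_passthrough = {k: v for k, v in tts_params.items() if k != "app"}
```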
FilterText object[]
The start punctuation mark of the filtered text. For example, if you want to filter the content in (), set it to (.
The end punctuation mark of the filtered text. For example, if you want to filter the content in (), set it to ).
Possible values: <= 4 characters
Sets the termination text for TTS. If the input TTS text contains the TerminatorText string, the content from that string onward (the string itself included) is not synthesized in this round of TTS.
📌 Important Note
Only one character can be set for bidirectional streaming.
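The FilterText and TerminatorText behaviors described above can be sketched as follows. This is only an illustration of the documented semantics (the real processing happens server-side); `before` and `after` correspond to the start and end punctuation marks of the filtered text.

```python
def preprocess_tts_text(text: str, before: str, after: str, terminator: str = "") -> str:
    """Illustrative sketch of the FilterText / TerminatorText behavior
    described above; not the actual server implementation.
    """
    # Drop everything between each before/after punctuation pair (inclusive).
    out = []
    depth = 0
    for ch in text:
        if ch == before:
            depth += 1
        elif ch == after and depth > 0:
            depth -= 1
        elif depth == 0:
            out.append(ch)
    result = "".join(out)
    # Stop synthesis at TerminatorText: the terminator and everything
    # after it are not synthesized.
    if terminator and terminator in result:
        result = result.split(terminator, 1)[0]
    return result
```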
ASR object
Possible values: [Tencent, AliyunParaformer, AliyunGummy, Microsoft]
Default value: Tencent
The ASR provider. Please refer to Configuring ASR > ASR Parameters for details.
Vendor parameters. Please refer to Configuring ASR > Params Parameters for details.
Possible values: >= 200 and <= 2000
Default value: 500
Sets how long a silence (in ms) must last before two utterances are no longer treated as one sentence. Range [200, 2000]; default 500. Please refer to Speech Segmentation Control for details.
Possible values: >= 200 and <= 2000
Sets the interval (in ms) within which two utterances are treated as one sentence, i.e., ASR multi-sentence concatenation. Range [200, 2000]. Multi-sentence concatenation takes effect only when this value is greater than VADSilenceSegmentation. Please refer to Speech Segmentation Control for details.
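The relationship between the two timing parameters can be captured in a small validation helper. This is a sketch of the documented rule (concatenation only when PauseInterval exceeds VADSilenceSegmentation), not an API call.

```python
def multi_sentence_concat_enabled(vad_silence_segmentation_ms: int,
                                  pause_interval_ms: int) -> bool:
    """Per the rule above, ASR multi-sentence concatenation takes effect
    only when PauseInterval is greater than VADSilenceSegmentation.
    Both values must lie in the documented range [200, 2000] ms.
    """
    for value in (vad_silence_segmentation_ms, pause_interval_ms):
        if not 200 <= value <= 2000:
            raise ValueError(f"{value} is outside the allowed range [200, 2000] ms")
    return pause_interval_ms > vad_silence_segmentation_ms
```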
This parameter is deprecated. Set it through the Params vendor parameters instead.
Responses
- 200
- application/json
Return code. 0 indicates success, other values indicate failure. For more information on error codes and response handling recommendations, please refer to Return Codes.
Description of the request result.
Request ID
{
"Code": 0,
"Message": "Success",
"RequestId": "8825223157230377926"
}
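An end-to-end call can be sketched with the standard library as below. The helper names are hypothetical, and `url` must already carry the signed common query parameters described in the Query Parameters section; a Code of 0 indicates success as documented above.

```python
import json
from urllib import request as urlrequest

def update_agent(url: str, body: dict) -> dict:
    """POST an UpdateAgent body and decode the JSON response.

    `url` must already include the signed common query parameters
    (Action, AppId, SignatureNonce, Timestamp, Signature, SignatureVersion).
    """
    req = urlrequest.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlrequest.urlopen(req) as resp:
        return json.loads(resp.read())

def check_response(resp: dict) -> str:
    """Return the RequestId on success; raise on any non-zero Code."""
    if resp.get("Code") != 0:
        raise RuntimeError(
            f"UpdateAgent failed: {resp.get('Code')}: {resp.get('Message')}"
        )
    return resp.get("RequestId", "")
```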