RegisterAgent
https://aigc-aiagent-api.zegotech.cn/
By calling this API, you can register an AI agent (Agent), which can then be used to create AI agent instances.
Request
Query Parameters
Possible values: [RegisterAgent]
💡Public parameter. API prototype parameter specifying the operation to perform: https://aigc-aiagent-api.zegotech.cn?Action=RegisterAgent
💡Public parameter. Application ID, assigned by ZEGOCLOUD. Get it from the ZEGOCLOUD Admin Console.
💡Public parameter. A 16-character hexadecimal random string (the hex encoding of an 8-byte random number). Refer to Signature sample code for how to generate.
💡Public parameter. The current Unix timestamp, in seconds, with a maximum allowed deviation of 10 minutes. Refer to Signature sample code for how to generate.
💡Public parameter. Signature, used to verify the legitimacy of the request. Refer to Signing the requests for how to generate an API request signature.
Possible values: [2.0]
Default value: 2.0
💡Public parameter. Signature version number.
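The public parameters above are passed on the query string of every request. The following Python sketch shows one way to assemble and sign them; the parameter names (AppId, SignatureNonce, Timestamp, Signature, SignatureVersion) and the MD5-based signature calculation are assumptions taken from the Signing the requests guide, so confirm the details there.

```python
import hashlib
import os
import time
from urllib.parse import urlencode

BASE_URL = "https://aigc-aiagent-api.zegotech.cn/"

def build_query(app_id: int, server_secret: str, action: str = "RegisterAgent") -> str:
    """Build a signed request URL. Assumes the MD5(AppId + SignatureNonce +
    ServerSecret + Timestamp) scheme described in "Signing the requests"."""
    nonce = os.urandom(8).hex()      # 16-character hex string (8 random bytes)
    timestamp = int(time.time())     # current Unix timestamp, in seconds
    raw = f"{app_id}{nonce}{server_secret}{timestamp}"
    signature = hashlib.md5(raw.encode("utf-8")).hexdigest()
    return BASE_URL + "?" + urlencode({
        "Action": action,
        "AppId": app_id,
        "SignatureNonce": nonce,
        "Timestamp": timestamp,
        "Signature": signature,
        "SignatureVersion": "2.0",
    })
```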
- application/json
Body
required
Possible values: <= 128 characters
AI agent ID. Only numbers, English letters, and the following special characters are supported: !#$%&()+-:;<=.>?@[]^_{}|~,.
Possible values: <= 256 characters
AI Agent name, with a maximum length of 256 bytes.
LLM object (required)
Possible values: [OpenAIChat, OpenAIResponses]
Default value: OpenAIChat
The vendor type of the LLM interface.
- OpenAIChat: OpenAI's Chat Completions interface type.
- OpenAIResponses: OpenAI's Responses API interface type.
The endpoint that receives the request (can be your own service or any LLM service provider's service).
If Vendor is OpenAIChat, it must be compatible with OpenAI Chat Completions API.
For example: https://api.openai.com/v1/chat/completions
If Vendor is OpenAIResponses, it must be compatible with OpenAI Responses API.
For example: https://ark.cn-beijing.volces.com/api/v3/responses
📌 Important Note
If ApiKey is set to "zego_test", you must use one of the following Url addresses:
- MiniMax: https://api.minimax.chat/v1/text/chatcompletion_v2
- Volcano Engine (Doubao): https://ark.cn-beijing.volces.com/api/v3/chat/completions
- Aliyun Bailian (Tongyi Qianwen): https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
- Stepfun: https://api.stepfun.com/v1/chat/completions
The parameter used for authentication by the LLM service provider. It is empty by default, but must be provided in production environments.
📌 Important Note
During the test period (within 2 weeks after the AI Agent service is enabled), you can set this parameter value to "zego_test" to use this service.
The LLM model. Different LLM service providers support different models, please refer to their official documentation to select the appropriate model.
📌 Important Note
If ApiKey is set to "zego_test", you must use one of the following models:
- MiniMax: MiniMax-Text-01
- Volcano Engine (Doubao): doubao-1-5-pro-32k-250115, doubao-1-5-lite-32k-250115
- Aliyun Bailian (Tongyi Qianwen): qwen-plus
- Stepfun: step-2-16k
The system prompt of the AI agent. It is the predefined information that is added at the beginning when calling the LLM, used to control the output of the LLM. It can be role settings, prompts, and answer examples.
Possible values: >= 0 and <= 2
Default value: 0.7
The higher the value, the more random the output; the lower the value, the more focused and deterministic the output.
Possible values: >= 0 and <= 1
Default value: 0.9
The sampling parameter. The smaller the value, the more deterministic the output; the larger the value, the more random the output.
Other parameters supported by the LLM service provider, such as the maximum token limit. Different LLM providers support different parameters, please refer to their official documentation and fill in as needed.
Default value: false
If this value is true, the AI Agent server includes the AI agent information (agent_info) in the request parameters when calling the LLM service; see Using Custom LLM for an example. You can use this parameter to execute additional business logic in your custom LLM service.
The structure of agent_info is as follows:
- room_id: RTC room ID
- user_id: User ID
- agent_instance_id: AI agent instance ID
📌 Important Note: Only effective when Vendor is "OpenAIChat".
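For illustration only, when the add-agent-info switch above is enabled, a custom LLM endpoint might receive an agent_info object shaped like the sketch below (the values are made up; see Using Custom LLM for the authoritative example):

```python
# Hypothetical agent_info payload received by a custom LLM service when the
# add-agent-info option above is enabled; the values are illustrative only.
agent_info = {
    "room_id": "room_1001",              # RTC room ID
    "user_id": "user_alice",             # User ID
    "agent_instance_id": "instance_01",  # AI agent instance ID
}
```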
AgentExtraInfo object
Extra information key.
Extra information value, can be of any type.
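As a rough reference, an LLM configuration could look like the Python dict below. The Vendor, Url, ApiKey, and SystemPrompt names appear in the descriptions above; Model, Temperature, TopP, Params, and AddAgentInfo are assumed spellings, and the example model is only a placeholder, so check the request schema and your provider's documentation before use.

```python
# Minimal sketch of an LLM block; field spellings not shown verbatim on this
# page (Model, Temperature, TopP, Params, AddAgentInfo) are assumptions.
llm_config = {
    "Vendor": "OpenAIChat",              # or "OpenAIResponses"
    "Url": "https://api.openai.com/v1/chat/completions",
    "ApiKey": "<your-llm-api-key>",      # "zego_test" only during the test period
    "Model": "gpt-4o-mini",              # placeholder; use a model your provider supports
    "SystemPrompt": "You are a concise, friendly voice assistant.",
    "Temperature": 0.7,                  # 0-2; higher means more random output
    "TopP": 0.9,                         # 0-1; smaller means more deterministic output
    "Params": {"max_tokens": 1024},      # provider-specific extras, e.g. a token limit
    "AddAgentInfo": True,                # attach agent_info when calling a custom LLM
}
```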
TTS object (required)
Possible values: [Aliyun, ByteDance, ByteDanceV3, ByteDanceFlowing, MiniMax, CosyVoice]
The TTS service provider. Please refer to Configuring TTS > TTS Parameters for details.
Params object (required)
Used for TTS service authentication. The structure of the app parameter differs depending on the Vendor value; please refer to Configuring TTS > Params Parameters for details.
📌 Important Note
other_params is not a real parameter; it only illustrates how vendor parameters are passed. Except for the app parameter, all other parameters are passed through to the vendor as-is. Please refer to Configuring TTS > Params Parameters for details.
FilterText object[]
Filters the text enclosed by the specified punctuation marks out of speech synthesis.
- For bidirectional streaming, only one character can be set.
- Typically, the LLM is guided via prompts in LLM > SystemPrompt to specify which parts of the content should be enclosed with special punctuation.
- This parameter cannot be updated when updating an agent instance.
The start punctuation mark of the filtered text. For example, if you want to filter the content in (), set it to (.
The end punctuation mark of the filtered text. For example, if you want to filter the content in (), set it to ).
Possible values: <= 4 characters
Can be used to set the termination text for TTS. If the input to TTS (usually the content returned by LLM or the Text parameter of the SendAgentInstanceTTS API) contains the TerminatorText string, then the content from the TerminatorText string (inclusive) onward will no longer be synthesized in this TTS round.
📌 Important Note
The specified strings in the content input to TTS (usually the content returned by LLM or the Text parameter of the SendAgentInstanceTTS API) will not be involved in speech synthesis. Each string in the array indicates a string to be filtered out, and each string can have up to 2 characters.
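Pulling the TTS fields together, a configuration might look like the sketch below. Vendor, Params, FilterText, and TerminatorText come from the descriptions above, while the app credential layout and the FilterText field names (BeginCharacters, EndCharacters) are assumptions; refer to Configuring TTS for your vendor's exact schema.

```python
# Sketch of a TTS block; the app credential layout and the FilterText field
# names (BeginCharacters/EndCharacters) are assumptions to verify.
tts_config = {
    "Vendor": "ByteDance",
    "Params": {
        "app": {                         # vendor authentication; structure varies by Vendor
            "appid": "<your-tts-appid>",
            "token": "<your-tts-token>",
            "cluster": "<your-cluster>",
        },
        "voice_type": "<voice-id>",      # passed through to the vendor unchanged
    },
    "FilterText": [
        # Do not synthesize anything the LLM wraps in parentheses.
        {"BeginCharacters": "(", "EndCharacters": ")"},
    ],
    "TerminatorText": "##",              # stop this TTS round once "##" appears (<= 4 chars)
}
```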
ASR object
Possible values: [Tencent, AliyunParaformer, AliyunGummy, Microsoft]
Default value: Tencent
ASR provider. Please refer to Configuring ASR > ASR Parameters for details.
Vendor parameters, please refer to Configuring ASR > Params Parameters for details.
Possible values: >= 200 and <= 2000
Default value: 500
Sets the silence duration after which two segments of speech are no longer treated as one sentence. Unit: ms, range [200, 2000], default 500. Please refer to Speech Segmentation Control for details.
📌 Important Note
When Vendor is "Tencent" (default), the maximum value is 1500.
Possible values: >= 200 and <= 2000
Sets the pause duration within which two sentences are treated as one, i.e., ASR multi-sentence concatenation. Unit: ms, range [200, 2000]. Multi-sentence concatenation is enabled only when this value is greater than VADSilenceSegmentation. Please refer to Speech Segmentation Control for details.
Possible values: [0, 1, 2, 3]
Default value: 0
VAD sensitivity. 0: medium sensitivity (default); 1: low sensitivity; 2: high sensitivity; 3: custom mode, which must be used together with VADMinSpeechDur and VADEnergyThreshold. Please refer to Voice Interruption Sensitivity Adjustment for details.
Possible values: >= 0 and <= 1000
VAD minimum speech duration, in ms. The larger the value, the less sensitive the detection, and short utterances may be missed. Value range: [0, 1000].
Possible values: >= 0 and <= 1
VAD energy threshold. The smaller the value, the higher the sensitivity. Value range: [0, 1].
This parameter has been deprecated. Please set it through the Params vendor parameters.
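Putting everything together, the sketch below registers an agent by POSTing the JSON body to the signed URL built earlier. It reuses build_query(), llm_config, and tts_config from the previous sketches; the top-level AgentId and Name fields and the ASR Vendor field name are assumed spellings, so verify them against the full request schema.

```python
import requests  # pip install requests; reuses build_query, llm_config, tts_config from above

asr_config = {
    "Vendor": "Tencent",                 # field name assumed; see Configuring ASR
    "Params": {},                        # vendor parameters; see Configuring ASR
    "VADSilenceSegmentation": 500,       # ms; note the 1500 ms cap when using Tencent
}

body = {
    "AgentId": "agent_demo_001",         # assumed field name; <= 128 characters
    "Name": "Demo voice assistant",      # assumed field name; <= 256 bytes
    "LLM": llm_config,
    "TTS": tts_config,
    "ASR": asr_config,
}

url = build_query(app_id=1234567890, server_secret="<your-server-secret>")
resp = requests.post(url, json=body, timeout=10)
print(resp.json())
```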
Responses
- 200
- application/json
Schema
Code: Return code. 0 indicates success, other values indicate failure. For more information on error codes and response handling recommendations, please refer to Return Codes.
Message: Explanation of the request result.
RequestId: Request ID.
Example (from schema)
{
  "Code": 0,
  "Message": "Success",
  "RequestId": "8825223157230377926"
}
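Continuing the previous sketch, a minimal way to handle the result is to treat Code 0 as success and map anything else to the Return Codes reference:

```python
result = resp.json()
if result.get("Code") == 0:
    print("Agent registered, RequestId:", result.get("RequestId"))
else:
    # Look the code up in Return Codes before retrying.
    raise RuntimeError(f"RegisterAgent failed: {result.get('Code')} {result.get('Message')}")
```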