
Quick Start Voice Call

This document explains how to quickly integrate the client SDK (ZEGO Express SDK) and achieve voice interaction with an AI Agent.

Prerequisites

Sample Codes

The following is the sample code for a business backend that integrates the real-time interactive AI Agent APIs. You can refer to it when implementing your own business logic.

Below is the client sample code, which you can use as a reference in the same way.

The following video demonstrates how to run the server and client (Web) sample code and interact with an AI agent by voice.

Overall Business Process

  1. Server side: Follow the Server Quick Start guide to run the server sample code and deploy your server.
    • Integrate the ZEGOCLOUD AI Agent APIs to manage AI agents.
  2. Client side: Run the client sample code.
    • Create and manage AI agents through your server.
    • Integrate the ZEGO Express SDK for real-time communication.

After completing these two steps, you can add an AI agent to a room for real-time interaction with real users.

Core Capability Implementation

Integrate ZEGO Express SDK

Refer to Integrate SDK > Method 2 to integrate SDK v3.9.123 or later via npm. After integrating the SDK, initialize ZegoExpressEngine as follows:

  1. Instantiate ZegoExpressEngine
  2. Check system requirements (WebRTC support and microphone permissions)
import { ZegoExpressEngine } from "zego-express-engine-webrtc";

const appID = 1234567; // Obtain from ZEGOCLOUD Console
const server = 'xxx'; // Obtain from ZEGOCLOUD Console

// Instantiate ZegoExpressEngine with the appID and server configurations
const zg = new ZegoExpressEngine(appID, server);

// Check system requirements
const checkSystemRequirements = async () => {
    // Detect WebRTC support
    const rtc_sup = await zg.checkSystemRequirements("webRTC");
    if (!rtc_sup.result) {
        // Browser does not support WebRTC
    }
    // Detect microphone permission status
    const mic_sup = await zg.checkSystemRequirements("microphone");
    if (!mic_sup.result) {
        // Microphone permission is not enabled
    }
}
checkSystemRequirements()

Notify Your Server to Start Call

You can notify your server to start the call as soon as the real user enters the room on the client side; making this request asynchronously helps reduce call setup time. After receiving the start-call notification, your server creates an AI agent instance using the same roomID, and the associated userID and streamID, as the client, so that the AI agent and the real user can interact in the same room by publishing and playing each other's streams.

Note
In the following examples, roomID, userID, streamID, and other parameters are not passed when notifying your server to start the call, because fixed values have been agreed between the client and your server in this example. In actual use, pass the real parameters according to your business requirements (a hedged variant is sketched after the code block below).
// Notify your server to start call
async function startCall() {
  try {
    const response = await fetch(`${YOUR_SERVER_URL}/api/start`, { // YOUR_SERVER_URL is the address of your server
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      }
    });

    const data = await response.json();
    console.log('Start call result:', data);
    return data;
  } catch (error) {
    console.error('Failed to start call:', error);
    throw error;
  }
}
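
If your server expects the room and user identifiers in the request, as the note above suggests for production use, a variant of startCall might look like the following sketch. The JSON field names (room_id, user_id, user_stream_id) and the endpoint's parameter handling are assumptions; align them with what your server's /api/start endpoint actually expects.

// Sketch: pass real parameters in the request body instead of relying on
// values fixed in advance. Field names are assumptions; match your server.
async function startCallWithParams(roomId, userId, userStreamId) {
  const response = await fetch(`${YOUR_SERVER_URL}/api/start`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ room_id: roomId, user_id: userId, user_stream_id: userStreamId }),
  });
  return response.json();
}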

User logs in to an RTC room and starts publishing a stream

After a real user logs into the room, they start publishing streams.

The token used for login needs to be obtained from your server; please refer to the complete sample code (a sketch of a token helper is shown after the code block below).

Note

Please ensure that the roomID, userID, and streamID are unique under one ZEGOCLOUD AppID (a minimal ID-generation sketch follows this list).

  • roomID: Generated by the user according to their own rules; it is used to log in to the Express SDK room. Only numbers, English characters, and '~', '!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '_', '+', '=', '-', '`', ';', ''', ',', '.', '<', '>', '/', '\' are supported. If interoperability with the Web SDK is required, do not use '%'.
  • userID: Length should not exceed 32 bytes. Only numbers, English characters, and '~', '!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '_', '+', '=', '-', '`', ';', ''', ',', '.', '<', '>', '/', '\' are supported. If interoperability with the Web SDK is required, do not use '%'.
  • streamID: Length should not exceed 256 bytes. Only numbers, English characters, and '-', '_' are supported.
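
If your app does not already have its own ID scheme, the following minimal sketch generates values that satisfy the rules above; the prefixes and format are illustrative assumptions, not an SDK requirement.

// Sketch: generate roomID / userID / streamID values that comply with the
// rules above (letters, digits, and '_' only; userID stays well under 32
// bytes and streamID well under 256 bytes). Prefixes are arbitrary.
function generateIds() {
  const suffix = `${Date.now()}_${Math.floor(Math.random() * 1000)}`;
  const userId = `user_${suffix}`; // e.g. user_1712345678901_42
  return {
    roomId: `room_${suffix}`,
    userId,
    userStreamId: `${userId}_stream`,
  };
}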
Client logs in to the room and publishes a stream
const userId = "" // User ID for logging into the Express SDK room
const roomId = "" // RTC Room ID
const userStreamId = "" // User stream push ID
async function enterRoom() {
  try {
    // Generate an RTC token. [Reference Documentation](https://www.zegocloud.com/docs/video-call/token?platform=web&language=javascript)
    const token = await Api.getToken();
    // Login to room
    await zg.loginRoom(roomId, token, {
      userID: userId,
      userName: "",
    });

    // Create local audio stream
    const localStream = await zg.createZegoStream({
      camera: {
        video: false,
        audio: true,
      },
    });
    if (localStream) {
      // Push local stream
      await zg.startPublishingStream(userStreamId, localStream);
    }
  } catch (error) {
    console.error("Failed to enter room:", error);
    throw error;
  }
}
enterRoom()
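
The enterRoom example above calls Api.getToken(), which is defined in the complete sample code. A minimal sketch of such a helper is shown below; the endpoint path (/api/token) and the response shape ({ token }) are assumptions, so adjust them to match your server.

// Sketch of a token helper: fetch an RTC token for the current user from
// your server. Endpoint path and response shape are assumptions.
const Api = {
  async getToken() {
    const response = await fetch(`${YOUR_SERVER_URL}/api/token?userId=${userId}`);
    const data = await response.json();
    return data.token;
  },
};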

Play the AI Agent Stream

By default, there is only one real user and one AI agent in the same room, so any new stream added is assumed to be the AI agent stream.

Client request to play the AI agent stream
// Listen to remote stream update events
function setupEvent() {
  zg.on("roomStreamUpdate",
    async (roomID, updateType, streamList) => {
      if (updateType === "ADD" && streamList.length > 0) {
        try {
          for (const stream of streamList) {
            // Play the AI agent stream
            const mediaStream = await zg.startPlayingStream(stream.streamID);
            if (!mediaStream) return;
            const remoteView = await zg.createRemoteStreamView(mediaStream);
            if (remoteView) {
              // A container with the id 'remoteStreamView' is required on the page to render the AI agent stream [Reference Documentation](https://www.zegocloud.com/article/api?doc=Express_Video_SDK_API~javascript_web~class~ZegoStreamView)
              remoteView.play("remoteStreamView", {
                enableAutoplayDialog: false,
              });
            }
          }
        } catch (error) {
          console.error("Failed to pull stream:", error);
        }
      }
    }
  );
}

Congratulations🎉! After completing this step, you can ask the AI agent any question by voice, and the AI agent will answer your questions by voice!

Delete the agent instance and exit the room

The client calls the logout interface to exit the room, stops publishing and playing streams, and at the same time notifies your server to end the call. After receiving the end-call notification, your server deletes the AI agent instance; the instance then automatically exits the room and stops publishing and playing streams. This completes a full interaction.

// Exit room
async function stopCall() {
  try {
    const response = await fetch(`${YOUR_SERVER_URL}/api/stop`, { // YOUR_SERVER_URL is the address of your server
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      }
    });

    const data = await response.json();
    console.log('End call result:', data);
    return data;
  } catch (error) {
    console.error('Failed to end call:', error);
    throw error;
  }
}
stopCall();
// localStream is the local stream created in enterRoom
zg.destroyLocalStream(localStream);
zg.logoutRoom();

This is the complete core process for you to achieve real-time voice interaction with an AI agent.

Best Practices for ZEGO Express SDK Configuration

To achieve the best audio call experience, it is recommended to configure the ZEGO Express SDK according to the following best practices. These configurations can significantly improve the quality of AI agent voice interactions.

  • Enable traditional audio 3A processing (Acoustic Echo Cancellation (AEC), Automatic Gain Control (AGC), and Automatic Noise Suppression (ANS))
  • Set the room usage scenario to High Quality Chatroom; the SDK adopts different optimization strategies for different scenarios
  • When publishing streams, configure the publish parameters to automatically switch to an available videoCodec
// Import necessary modules
import { ZegoExpressEngine } from "zego-express-engine-webrtc";
import { VoiceChanger } from "zego-express-engine-webrtc/voice-changer";

// Load audio processing module, must be called before new ZegoExpressEngine
ZegoExpressEngine.use(VoiceChanger);

// Instantiate ZegoExpressEngine, set room usage scenario to High Quality Chatroom
const zg = new ZegoExpressEngine(appID, server, { scenario: 7 });

// Traditional audio 3A processing is enabled by default in SDK

// Create local media stream
const localStream = await zg.createZegoStream();

// Push local media stream, need to set automatic switching to available videoCodec
await zg.startPublishingStream(userStreamId, localStream, {
  enableAutoSwitchVideoCodec: true,
});

// Check system requirements
async function checkSystemRequirements() {
  // Check WebRTC support
  const rtcSupport = await zg.checkSystemRequirements("webRTC");
  if (!rtcSupport.result) {
    console.error("Browser does not support WebRTC");
    return false;
  }

  // Check microphone permission
  const micSupport = await zg.checkSystemRequirements("microphone");
  if (!micSupport.result) {
    console.error("Microphone permission not granted");
    return false;
  }

  return true;
}

Additional Optimization Recommendations

  • Browser Compatibility: Use the latest versions of modern browsers such as Chrome, Firefox, and Safari
  • Network Environment: Ensure a stable network connection; a wired network or Wi-Fi with a good signal is recommended
  • Audio Equipment: Use high-quality microphones and speakers
  • Page Optimization: Avoid running too many JavaScript tasks on the same page, as they may affect audio processing performance
  • HTTPS Environment: Use the HTTPS protocol in production to ensure access to microphone permissions

Listen for Exception Callback

Note
Because LLM and TTS involve a large number of parameters, configuration errors can easily cause problems such as the AI agent not answering or not speaking during testing. We strongly recommend that you listen for exception callbacks while testing and troubleshoot quickly based on the callback information.
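
For example, if your server is built with Node.js and Express (an assumption, as are the route path and port below), a minimal sketch that logs whatever exception callback payload is delivered to your configured callback address could look like this:

// Sketch: log incoming AI Agent callback payloads for troubleshooting.
// Express, the /api/zego-callback path, and the port are assumptions; the
// payload fields depend on the callbacks configured for your project.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/zego-callback', (req, res) => {
  console.log('AI Agent callback received:', JSON.stringify(req.body, null, 2));
  res.sendStatus(200);
});

app.listen(3000);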
