How to Make a Voice Call App

Voice calling is now a vital part of many social media apps. WhatsApp, Facebook, and Twitter all offer voice features for communicating through calls and voice notes. Adding voice calling to an app can be very troublesome, especially if you build everything from scratch. ZEGOCLOUD makes this problem easy to solve.

Why ZEGOCLOUD’s Voice Call?

There are many reasons to use ZEGOCLOUD for voice calls and audio functionality in your app. Here are some of them:

1. Boost user engagement with fascinating audio effects

You can easily add voice effects to create a more engaging experience for users. With the voice call SDK, you can, for example, apply voice beautification, voice changing, and other voice transformations.

2. Stay connected anywhere in the world

It supports over 200 countries and regions, so you can make voice calls or enable voice calling features in your app and stay connected anywhere in the world. With ultra-low global latency of 200–300 ms, long-distance and international calls feel like speaking face-to-face.

3. Deliver interactive, real-time voice with outstanding audio quality

You can create an immersive audio experience by mixing background music, accompaniments, sound effects, and third-party audio sources. You can easily deliver high-fidelity real-time voice using 48 kHz full-band audio sampling.

4. Voice recording feature

Aside from high-quality audio encoding and decoding, you can easily implement voice call recorder functionality to record high-quality audio for later reference and distribution.

Preparation Before Using the Voice Call SDK

  • A ZEGOCLOUD developer account (sign up)
  • A Windows or macOS device connected to the internet, with audio and video support
  • A compatible browser (check browser compatibility)
  • Basic understanding of web development

Free Voice Call SDK Integration Guide

Integrating ZEGOCLOUD’s voice call SDK is very easy. Just follow the steps below:

Step 1: Create a new project

  1. Create a project with an index.html, an index.css, and an index.js file. The folder structure should look like this:
├── index.html
├── js
│   └── index.js
└── css
   └── index.css
  2. Copy the following code to the index.html file:
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Zego RTC Video Call</title>
    <link rel="stylesheet" href="./css/index.css">
</head>
<body>
    <h1>
        Zego RTC Video Call
    </h1>
    <h4>Local video</h4>
    <video id="local-video" autoplay muted playsinline controls></video>
    <h4>Remote video</h4>
    <video id="remote-video" autoplay playsinline controls></video>
    <script src="./js/index.js"></script>
</body>
</html>
  3. Copy the following code to the index.css file:
*{
    font-family: sans-serif;
}
h1,h4{
    text-align: center;
}
video {
    width: 300px;
    height: auto;
}
#local-video{
    position: relative;
    margin: 0 auto;
    display: block;
}

#remote-video{
    display: flex;
    height: auto;
    margin: auto;
    position: relative !important;
}
  4. Run and test your project on a local web server.

You can do this using the live-server package. Run the command below to install live-server globally if you don’t have it installed:

npm i live-server -g

Step 2: Import the SDK

We’re done creating our project. The next step is importing the SDK into our project. You can import the audio call SDK by following the steps below:

  1. Execute the npm i zego-express-engine-webrtc command to install the dependencies.
  2. Import the SDK in the index.js file.
var ZegoExpressEngine = require('zego-express-engine-webrtc').ZegoExpressEngine

How to Implement a Basic Voice Call Online

We have imported the SDK into our online voice call project. We can now proceed with the implementation of the voice call functionality. The diagram below illustrates the working principle of User A playing a stream published by User B:

[Figure: working principle of an online voice call]

Follow the steps below to implement a basic voice calling feature:

Step 1: Make a new ZegoExpressEngine instance

Before clients A and B can publish and play streams, the ZegoExpressEngine SDK needs to be initialized. To do so, create a ZegoExpressEngine instance, passing your AppID as the appID parameter and your server URL as the server parameter. You can obtain both values from the ZEGOCLOUD Admin Console.

// Initialize the ZegoExpressEngine instance
const zg = new ZegoExpressEngine(appID, server);
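The constructor doesn’t validate its inputs, so a small sanity check before creating the instance can save debugging time later. The helper below is a hypothetical sketch (not part of the SDK); it assumes the AppID is a positive integer and the server address is a wss:// URL, which is how the Admin Console typically issues them.

```javascript
// Hypothetical helper: sanity-check ZEGOCLOUD credentials before passing
// them to the ZegoExpressEngine constructor.
function validateEngineConfig(appID, server) {
  const errors = [];
  // The AppID from the Admin Console is a positive integer, not a string.
  if (typeof appID !== 'number' || !Number.isInteger(appID) || appID <= 0) {
    errors.push('appID must be a positive integer');
  }
  // The server address is assumed here to be a WebSocket (wss://) URL.
  if (typeof server !== 'string' || !/^wss:\/\//.test(server)) {
    errors.push('server must be a wss:// URL');
  }
  return { ok: errors.length === 0, errors };
}
```

You could call this once at startup and surface `errors` in the console before constructing the engine.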

Step 2: Check your browser’s WebRTC support

We’ll be testing our app in a web browser. However, ZEGOCLOUD’s Express Audio SDK doesn’t support every browser. The good news is that you can run a WebRTC compatibility check to find out whether the current browser is supported.

To do so, run the code below:

const result = await zg.checkSystemRequirements();
// The [result] indicates whether it is compatible. It indicates WebRTC is supported when the [webRTC] is [true]. For more results, see the API documents.
console.log(result);
// {
//   webRTC: true,
//   customCapture: true,
//   camera: true,
//   microphone: true,
//   videoCodec: { H264: true, H265: false, VP8: true, VP9: true },
//   screenSharing: true,
//   errInfo: {}
// }

For more information about the browser versions supported by the SDK, see Browser compatibility.
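Because the result object exposes plain boolean fields (as in the sample above), a small helper can translate it into user-facing warnings before a call is attempted. This is a hypothetical sketch, not an SDK function, and it only assumes the fields shown in the sample result.

```javascript
// Hypothetical helper: turn the checkSystemRequirements() result into a
// list of human-readable problems for a voice call.
function unsupportedFeatures(result) {
  const problems = [];
  if (!result.webRTC) problems.push('WebRTC is not supported');
  if (!result.microphone) problems.push('no usable microphone');
  // For a voice-only call, the camera and video codecs are optional,
  // so they are deliberately not treated as blocking here.
  return problems;
}
```

An empty array means the browser is good to go for audio; otherwise, show the list to the user.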

Step 3: Log in to a room

We have finished creating a ZegoExpressEngine instance. It’s now time to log in to a room. To do so, call the loginRoom method with the following parameters:

  • A unique room ID as the roomID parameter.
  • Your login token as the token parameter.
  • The user ID and user name as the userID and userName parameters.
// Log in to a room. It returns `true` if the login is successful.
// The roomUserUpdate callback is disabled by default. To receive this callback, you must set the `userUpdate` property to `true` when logging in to a room.
const result = await zg.loginRoom(roomID, token, {userID, userName}, {userUpdate: true});

Note: If the roomID you entered does not exist, a new room will be created, and you will be logged in automatically when you call the loginRoom method.
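To keep the login call in one place, you can wrap its four arguments in a small helper. This is an illustrative sketch; buildLoginArgs is not an SDK function, and its parameter order simply mirrors the loginRoom call above, with userUpdate enabled so roomUserUpdate fires.

```javascript
// Hypothetical helper: assemble the arguments for loginRoom in one place.
// roomID, token, userID, and userName are all app-defined values.
function buildLoginArgs(roomID, token, userID, userName) {
  return [roomID, token, { userID, userName }, { userUpdate: true }];
}

// Usage sketch:
// const result = await zg.loginRoom(...buildLoginArgs(roomID, token, userID, userName));
```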

You can view the room’s status with callbacks. Use the code below to implement the following callbacks:

roomStateUpdate: Callback for updates on the current user’s room connection status.

// Callback for updates on the current user's room connection status.
zg.on('roomStateUpdate', (roomID, state, errorCode, extendedData) => {
    if (state == 'DISCONNECTED') {
        // Disconnected from the room
    }

    if (state == 'CONNECTING') {
        // Connecting to the room
    }

    if (state == 'CONNECTED') {
        // Connected to the room
    }
})

roomUserUpdate: A callback function that receives updates on the status of other users in the room.

 // Callback for updates on the status of the users in the room.
zg.on('roomUserUpdate', (roomID, updateType, userList) => {
    console.warn(
        `roomUserUpdate: room ${roomID}, user ${updateType === 'ADD' ? 'added' : 'left'}`,
        JSON.stringify(userList),
    );
});

roomStreamUpdate: A callback that receives status updates for the streams in the room.

// Callback for updates on the status of the streams in the room.
zg.on('roomStreamUpdate', async (roomID, updateType, streamList, extendedData) => {
    if (updateType == 'ADD') {
        // New stream added, start playing the stream.
    } else if (updateType == 'DELETE') {
        // Stream deleted, stop playing the stream.
    }
});
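A common use of roomUserUpdate is keeping a local roster of who is in the room. The reducer below is a hypothetical sketch of that bookkeeping; it assumes each user object carries a userID field, as in the SDK’s user type, and applies the same (updateType, userList) payload the callback delivers.

```javascript
// Sketch: apply a roomUserUpdate payload to a local roster array and
// return the updated roster (the input array is not mutated).
function applyRoomUserUpdate(roster, updateType, userList) {
  if (updateType === 'ADD') {
    const next = roster.slice();
    for (const user of userList) {
      // Guard against duplicate ADDs for the same user.
      if (!next.some(u => u.userID === user.userID)) next.push(user);
    }
    return next;
  }
  if (updateType === 'DELETE') {
    const gone = new Set(userList.map(u => u.userID));
    return roster.filter(u => !gone.has(u.userID));
  }
  return roster;
}
```

Inside the callback, you would simply do `roster = applyRoomUserUpdate(roster, updateType, userList)` and re-render the participant list.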

Step 4: Publish streams

To create a local audio and video stream, call the createStream method. By default, the engine captures audio and video data from the microphone.

// After calling the createStream method, you need to wait for the ZEGOCLOUD server to return the local stream object before performing any further operation.
const localStream = await zg.createStream({camera: {audio: true, video: false}});
// Get the local media element (the <video id="local-video"> defined in index.html).
const localVideo = document.getElementById('local-video');
// The local stream is a MediaStream object. You can render the audio by assigning the local stream to the srcObject property of a video or audio element.
localVideo.srcObject = localStream;

To start publishing a local audio and video stream to remote users, call the startPublishingStream method with the following parameters:

  • A stream ID as the streamID parameter.
  • The media stream object obtained in the previous step as the localStream parameter.
// localStream is the MediaStream object created by calling createStream in the previous step.
zg.startPublishingStream(streamID, localStream)

To check the status and information of a published stream, use the callbacks listed below:

publisherStateUpdate: A callback that receives updates on the status of stream publishing.

zg.on('publisherStateUpdate', result => {
    // Callback for updates on stream publishing status.
})

publishQualityUpdate: A callback for reporting the quality of stream publishing.

zg.on('publishQualityUpdate', (streamID, stats) => {
    // Callback for reporting stream publishing quality.
})
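Both the createStream options and the stream ID are app-defined values, so it can help to centralize them. The helpers below are hypothetical sketches; in particular, deriving the streamID from the userID is just one common convention, not an SDK requirement (stream IDs only need to be unique within the room).

```javascript
// Hypothetical helper: build the createStream options for the call type.
// The { camera: { audio, video } } shape matches the options object used
// above; for a pure voice call, video stays off.
function streamOptions(withVideo) {
  return { camera: { audio: true, video: Boolean(withVideo) } };
}

// Hypothetical helper: derive a stream ID from the user ID. Any scheme
// works as long as the ID is unique within the room.
function makeStreamID(userID) {
  return `${userID}-audio`;
}
```

Usage sketch: `zg.startPublishingStream(makeStreamID(userID), await zg.createStream(streamOptions(false)))`.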

Step 5: Play streams

We published an audio stream in the previous section. We can play the stream by calling the startPlayingStream method with the corresponding stream ID passed to the streamID parameter.

const remoteStream = await zg.startPlayingStream(streamID);

// Render the remote stream in the <video id="remote-video"> element defined in index.html.
const remoteVideo = document.getElementById('remote-video');
remoteVideo.srcObject = remoteStream;

The following are some common event callbacks related to streaming:

playerStateUpdate: A callback that receives updates on the stream’s playing status.

zg.on('playerStateUpdate', result => {
    // Callback for updates on stream playing status.
})

playQualityUpdate: A callback for reporting the quality of the stream’s playback. After stream playback starts, the SDK sends the streaming quality data (resolution, frame rate, bit rate, etc.) through this callback.

zg.on('playQualityUpdate', (streamID, stats) => {
    // Callback for reporting stream playing quality.
})
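To decide which streams to start or stop playing, you can track the set of active stream IDs from the roomStreamUpdate payloads shown in Step 3. The reducer below is a hypothetical sketch; it assumes each stream object carries a streamID field, as in the earlier snippets.

```javascript
// Sketch: maintain the set of stream IDs that should currently be
// playing, driven by roomStreamUpdate's (updateType, streamList) payload.
// Returns a new Set; the input Set is not mutated.
function applyRoomStreamUpdate(playing, updateType, streamList) {
  const next = new Set(playing);
  for (const s of streamList) {
    if (updateType === 'ADD') next.add(s.streamID);
    else if (updateType === 'DELETE') next.delete(s.streamID);
  }
  return next;
}
```

In the roomStreamUpdate callback, call startPlayingStream for IDs that were added and stopPlayingStream for IDs that were removed.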

Step 6: Stop publishing streams

To stop publishing a local audio and video stream to remote users, call the stopPublishingStream method with the stream ID to be stopped as the streamID parameter.

zg.stopPublishingStream(streamID)

Step 7: Destroy Stream

To destroy a local media stream, call the destroyStream method.

// localStream is the MediaStream object created when calling the createStream method.
zg.destroyStream(localStream)

Step 8: Stop playing stream

To stop playing a remote audio and video stream, call the stopPlayingStream method with the corresponding stream ID passed to the streamID parameter.

zg.stopPlayingStream(streamID)

Step 9: Log out of a room

To log out of a room, call the logoutRoom method with the corresponding room ID passed to the roomID parameter.

zg.logoutRoom(roomID)
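Steps 6 through 9 always run in the same order when a call ends, so they can be grouped into one teardown function. The sketch below is illustrative: it works against any object exposing the four SDK methods, which also makes the sequence easy to exercise without the real SDK.

```javascript
// Sketch: run the teardown steps (Steps 6-9) in order. `engine` is
// duck-typed: any object with the four SDK method names will do.
function endCall(engine, { streamID, remoteStreamID, localStream, roomID }) {
  engine.stopPublishingStream(streamID);   // Step 6: stop publishing
  engine.destroyStream(localStream);       // Step 7: destroy local stream
  engine.stopPlayingStream(remoteStreamID); // Step 8: stop playing
  engine.logoutRoom(roomID);               // Step 9: leave the room
}
```

In the app itself, you would pass the real ZegoExpressEngine instance as `engine`, e.g. from a hang-up button handler.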

The diagram below shows the API call sequence for this voice call app.

[Figure: voice call API sequence]

Run a demo

To test our real-time audio call features, visit the ZEGO Express Web Demo and enter the same AppID, Server, and RoomID to join the same room.

The voice call SDK follows a straightforward pattern, so you can implement voice call and voice call recording features with ease. You don’t have to spend a lot of time studying low-level APIs; instead, you can focus on implementing your business logic.
