Integration with AI Effects
Usage Guide
Overview
Video Call is a real-time audio and video interaction service product from ZEGOCLOUD. Developers can build audio and video applications through its flexible and easy-to-use APIs. At the same time, another product from ZEGOCLOUD - AI Effects, based on leading AI algorithms, provides features such as face beautification, body reshaping, makeup, stickers, and more. By combining the two, you can easily achieve the integration of audio/video interaction and beauty effects, creating real-time beauty applications.
This combination is widely applicable to real-time scenarios such as entertainment live streaming, game live streaming, and video conferencing.
Concept Explanation
- ZEGO Express SDK: ZEGOCLOUD real-time audio/video SDK that provides basic real-time audio/video features, including live streaming publishing and playing, live co-hosting, etc. Hereafter referred to as ZEGO Express SDK.
- ZEGO Effects SDK: ZEGOCLOUD AI Effects SDK that provides multiple intelligent image rendering and algorithm capabilities, including intelligent beauty effects, AR effects, image segmentation, etc. Hereafter referred to as ZEGO Effects SDK.
Sample Source Code
To help developers implement the integration of the two, ZEGO provides sample code. Please refer to AI Effects - Run Sample Source Code.
Prerequisites
- A project has been created in the ZEGOCLOUD Console, and a valid AppID and AppSign have been obtained. For details, please refer to "Project Information" in Console - Project Management. In addition, contact ZEGOCLOUD Technical Support to enable the ZEGO Effects package service permissions.
- ZEGO Express SDK has been integrated into the project, and basic audio/video stream publishing and playing functionality has been implemented. For details, please refer to Quick Start - Integration and Quick Start - Implementing Video Call.
- ZEGO Effects SDK has been integrated into the project. For details, please refer to "AI Effects" Quick Start - Integration.
- The unique authentication file for ZEGO Effects SDK has been obtained. For details, please refer to "AI Effects" Quick Start - Online Authentication.
Usage Steps
The principle of using ZEGO Effects SDK and ZEGO Express SDK together to perform real-time AI beauty processing on video data is shown in the following figure:

Based on the above process, the specific implementation steps are as follows:
- Initialize ZEGO Effects SDK and ZEGO Express SDK. There is no timing restriction for initialization.
- Obtain raw video data, through either the Custom Video Capture or the Custom Video Pre-processing feature of the ZEGO Express SDK.
- Pass the captured video raw data to ZEGO Effects SDK for AI beauty processing.
- Pass the processed data to ZEGO Express SDK for publishing. If you need to adjust AI beauty effects during the publishing and playing process, you can use the related features of ZEGO Effects SDK to make real-time changes.
- Remote users play the processed data by pulling it through ZEGO Express SDK.
Initialize ZEGO Effects/Express SDK
For the initialization of both SDKs, there is no timing restriction. The following steps take "initializing ZEGO Effects SDK first, then initializing ZEGO Express SDK" as an example.
Initialize ZEGO Effects SDK
- Import Effects models and resources.

  When using the AI-related features of the ZEGO Effects SDK, you must first import the AI models and resources.

  ```cpp
  // Pass in the absolute paths of the models and resources:
  // - Face detection model: required by face detection, eye enlarging, and face slimming
  // - Portrait segmentation model: required by AI portrait segmentation
  // - Resource bundles: required by the corresponding beauty features
  char* resource_path_list[] = {"D:\\YOUR_APP\\FaceDetectionModel.model",
                                "D:\\YOUR_APP\\SegmentationModel.model",
                                "D:\\YOUR_APP\\FaceWhiteningResources.bundle",
                                "D:\\YOUR_APP\\PendantResources.bundle",
                                "D:\\YOUR_APP\\RosyResources.bundle",
                                "D:\\YOUR_APP\\TeethWhiteningResources.bundle",
                                "D:\\YOUR_APP\\CommonResources.bundle"};
  // Pass in the path list of resources and models; must be called before create
  zego_effects_set_resources(resource_path_list, 7);
  ```

  For all resources and models supported by the ZEGO Effects SDK, please refer to "AI Effects" Quick Start - Import Resources and Models.
- Create the Effects object.

  Pass in the authentication file obtained in Prerequisites to create the Effects object.

  ```cpp
  // Replace "ABCDEFG" with the authentication content you actually obtained
  zego_effects_create(&m_handle, "ABCDEFG");
  ```

- Initialize the Effects object.

  Call the zego_effects_init_env interface to initialize the Effects object, passing in the width and height of the video image data to be processed.

  Taking 1280 × 720 video images as an example:

  ```cpp
  // Initialize the Effects object, passing in the width and height of the original image.
  // Manage the lifecycle yourself: when image capture stops, call the
  // zego_effects_uninit_env interface to uninitialize; otherwise memory leaks will occur.
  zego_effects_init_env(m_handle, 1280, 720);
  ```
Initialize ZEGO Express SDK
Call the createEngine interface to initialize ZEGO Express SDK.
```cpp
ZegoEngineProfile profile;
// AppID and AppSign are assigned by ZEGO to each App. For security reasons, it is
// recommended to store the AppSign in your app's business backend and fetch it when needed.
profile.appID = appID;
profile.appSign = appSign;
profile.scenario = ZegoScenario::ZEGO_SCENARIO_DEFAULT;
// Create the engine instance
auto engine = ZegoExpressSDK::createEngine(profile, nullptr);
```

Obtain Video Raw Data
ZEGO Express SDK can obtain video raw data through two methods: Custom Video Pre-processing and Custom Video Capture.
The differences between the two methods are as follows; developers can choose according to their actual needs.
| Data Acquisition Method | Video Data Capture Method | Advantages |
|---|---|---|
| Custom Video Pre-processing | Video data is captured internally by ZEGO Express SDK, and raw video data is obtained through callbacks. | Extremely simple integration of ZEGO Express SDK and ZEGO Effects SDK. Developers do not need to manage device input sources, only need to operate on the raw data thrown by ZEGO Express SDK, then pass it back to ZEGO Express SDK. |
| Custom Video Capture | Developers capture video data themselves and provide it to ZEGO Express SDK. | When integrating with multiple vendors, business implementation is more flexible, with more room for performance optimization. |
- Method 1: Custom Video Pre-processing

  Taking raw video data of type ZEGO_VIDEO_BUFFER_TYPE_RAW_DATA as an example:

  Call the enableCustomVideoProcessing interface to enable custom video pre-processing. Once enabled, the ZEGO Express SDK captures video data internally, and the captured raw video data is delivered through the onCapturedUnprocessedRawData callback interface.

  ```cpp
  ZegoCustomVideoProcessConfig config;
  config.bufferType = ZEGO_VIDEO_BUFFER_TYPE_RAW_DATA;
  // Enable custom video pre-processing
  engine->enableCustomVideoProcessing(true, &config);
  ```

  For the underlying principle, please refer to "Video Call" Custom Video Pre-processing.

- Method 2: Custom Video Capture

  For custom video capture, developers capture video data themselves and provide it to the SDK. For specific methods, please refer to "Video Call" Custom Video Capture.
Perform AI Beauty Processing
After obtaining video raw data, pass the data to ZEGO Effects SDK to start AI beauty processing (such as: face beautification, makeup, background segmentation, etc.) on the video.
- Method 1: Custom Video Pre-processing

  In the onCapturedUnprocessedRawData callback, after obtaining the raw video data, call the relevant ZEGO Effects SDK interfaces to perform AI beauty processing (please refer to Face Beautification, Shape Retouch, Background Segmentation, Face Detection, Stickers, and Filters), then return the processed data to the ZEGO Express SDK.

  ```cpp
  // Taking custom pre-processing as an example: the callback delivers the raw data.
  class MyHandler : public IZegoCustomVideoProcessHandler {
      // ......
  protected:
      void onCapturedUnprocessedRawData(const unsigned char** data, unsigned int* dataLength,
                                        ZegoVideoFrameParam param,
                                        unsigned long long referenceTimeMillisecond,
                                        ZegoPublishChannel channel) override;
  };

  void MyHandler::onCapturedUnprocessedRawData(const unsigned char** data, unsigned int* dataLength,
                                               ZegoVideoFrameParam param,
                                               unsigned long long referenceTimeMillisecond,
                                               ZegoPublishChannel channel) {
      // The callback delivers the raw data
      int width = param.width;
      int height = param.height;
      int stride = param.strides[0];
      // Format_RGBA8888 is the RGBA32 format; only RGBA32 input is supported
      QImage image(const_cast<unsigned char*>(data[0]), width, height, stride, QImage::Format_RGBA8888);
      zego_effects_video_frame_param frameParam;
      // RGBA32 format
      frameParam.format = zego_effects_video_frame_format_rgba32;
      frameParam.width = image.width();
      frameParam.height = image.height();
      // Custom pre-processing: process the frame with the ZEGO Effects SDK here
      zego_effects_process_image_buffer_rgb(m_handle, image.bits(),
                                            image.bytesPerLine() * image.height(), frameParam);
      // Send the processed buffer back to the ZEGO Express SDK
      engine->sendCustomVideoProcessedRawData((const unsigned char**)data, dataLength, param,
                                              referenceTimeMillisecond);
  }

  auto myHandler = std::make_shared<MyHandler>();
  engine->setCustomVideoProcessHandler(myHandler);
  ```

- Method 2: Custom Video Capture
After receiving the custom capture onStart callback, developers obtain video data through custom capture, then call the related interfaces of ZEGO Effects SDK to perform AI beauty processing (please refer to Face Beautification, Shape Retouch, Background Segmentation, Face Detection, Stickers, Filters), and return the processed data to ZEGO Express SDK (can refer to Custom Video Capture in "3 Send video frame data to SDK").
Publish Processed Data
After the ZEGO Effects SDK finishes processing, return the processed data to the ZEGO Express SDK.
Call the startPublishingStream interface of the ZEGO Express SDK with the streamID of the processed stream to start publishing and send it to the cloud server.
```cpp
// Call loginRoom before calling this interface to publish a stream
// Under the same AppID, "streamID" must be globally unique. If different users
// each publish a stream with the same "streamID", the later publisher will fail.
engine->startPublishingStream("streamID");
```

Play Processed Data
After the ZEGO Express SDK starts publishing, remote users can call the startPlayingStream interface with the streamID of the processed stream to pull and play the video data.
```cpp
// Start playing the stream and set the remote render view. The view mode uses the
// SDK default: scale by aspect ratio to fill the entire view.
// playView below is the UI window handle
std::string streamID = "streamID";
ZegoCanvas canvas((void*)playView);
engine->startPlayingStream(streamID, &canvas);
```

At this point, developers can publish and play audio/video streams while adjusting the AI beauty effects in real time.
