Integration with AI-Effects
Usage Guide
Introduction
Video Call is ZEGO's real-time audio/video interaction service. Developers can build audio/video applications through its flexible, easy-to-use API. AI-Effects, another ZEGO product, uses leading AI algorithms to provide features such as beauty effects, body reshaping, makeup, and stickers. Combining the two makes it easy to add beauty effects to audio/video interaction and build real-time beauty applications.
This combination can be widely applied in live streaming scenarios such as entertainment live streaming, game live streaming, and video conferencing.
Concept Explanation
- ZEGO Express SDK: ZEGO's real-time audio/video SDK. It provides basic real-time audio/video functionality, including stream publishing and playing, live co-hosting, and more.
- ZEGO Effects SDK: ZEGO's AI beauty effects SDK. It provides multiple intelligent image rendering and algorithm capabilities, including intelligent beauty effects, AR effects, and image segmentation.
Example Source Code
To help developers combine the two products, ZEGO provides sample code; please refer to AI-Effects - Run Sample Code.
Prerequisites
- You have created a project in the ZEGOCLOUD Console and obtained a valid AppID and AppSign (for details, see "Project Information" in Console - Project Management), and you have contacted ZEGO technical support to enable the ZEGO Effects package service permissions.
- You have integrated ZEGO Express SDK into your project and implemented basic audio/video publishing and playing functionality. For details, see Quick Start - Integration and Quick Start - Implementing Video Call.
- You have integrated ZEGO Effects SDK into your project. For details, see "AI-Effects" Quick Start - Integration.
- You have obtained the unique authentication file for ZEGO Effects SDK. For details, see "AI-Effects" Quick Start - Online Authentication.
Usage Principles and Steps
The principle of using ZEGO Effects SDK and ZEGO Express SDK together to perform real-time AI beauty effects processing on video data is as follows:

Based on the process above, the specific implementation steps are as follows:
- Initialize ZEGO Effects SDK and ZEGO Express SDK. They can be initialized in either order.
- Obtain the original video data, through either Custom Video Capture or Custom Video Preprocessing of ZEGO Express SDK.
- Pass the captured original video data to ZEGO Effects SDK for AI beauty effects processing.
- Pass the processed data back to ZEGO Express SDK for publishing. To adjust the AI beauty effects while publishing and playing, use the relevant ZEGO Effects SDK functions to make changes in real time.
- Remote users pull and play the processed stream through ZEGO Express SDK.
1 Initialize ZEGO Effects/Express SDK
The two SDKs can be initialized in either order. The following steps initialize ZEGO Effects SDK first and then ZEGO Express SDK as an example.
Initialize ZEGO Effects SDK
- Import Effects models and resources.

When using the AI features of ZEGO Effects SDK, you must first import the AI models and resources.

```objc
// Pass in the absolute path of the face recognition model.
// The face detection, big eyes, and face slimming features all require this model.
NSString *faceDetectionModelPath = [[NSBundle mainBundle] pathForResource:@"FaceDetectionModel" ofType:@"model"];
// Pass in the absolute path of the portrait segmentation model.
// The AI portrait segmentation feature requires this model.
NSString *segmentationModelPath = [[NSBundle mainBundle] pathForResource:@"SegmentationModel" ofType:@"model"];
// Pass in the absolute path of the common resources for beauty effects and body reshaping.
NSString *commonBundlePath = [[NSBundle mainBundle] pathForResource:@"CommonResources" ofType:@"bundle"];
// Pass in the absolute path of the pendant resources.
NSString *pendantBundlePath = [[NSBundle mainBundle] pathForResource:@"PendantResources" ofType:@"bundle"];
// Pass in the absolute path of the whitening resources.
NSString *whitenBundlePath = [[NSBundle mainBundle] pathForResource:@"FaceWhiteningResources" ofType:@"bundle"];
// Pass in the list of resource and model paths. This must be called before create.
[ZegoEffects setResources:@[faceDetectionModelPath, segmentationModelPath, commonBundlePath, pendantBundlePath, whitenBundlePath]];
```

For all resources and models supported by ZEGO Effects SDK, refer to "AI-Effects" Quick Start - Import Resources and Models.
- Create the Effects object.
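The creation call itself is not shown in this guide. The following is a minimal sketch, assuming the `create` interface of ZegoEffects and using the authentication content obtained in the prerequisites; the license value here is a placeholder, not a real credential:

```objc
// Create the Effects object with the authentication content obtained through
// online authentication (see Prerequisites). The value below is a placeholder;
// substitute the content of your own authentication file.
NSString *license = @"<your-authentication-content>";
self.effects = [ZegoEffects create:license];
```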
- Initialize the Effects object.

Call the initEnv interface to initialize the Effects object, passing in the width and height of the video image data to be processed.

Taking processing 1280 × 720 video images as an example:

```objc
// Initialize the Effects object, passing in the width and height of the original
// image to be processed. Manage the lifecycle yourself: when stopping image capture,
// call the [self.effects uninitEnv] interface to uninitialize, otherwise memory will leak.
[self.effects initEnv:CGSizeMake(1280, 720)];
```
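As the comment above notes, the Effects object must be uninitialized when image capture stops. A minimal teardown sketch, using only the `uninitEnv` interface named in this guide (the `stopCapture` method name is illustrative):

```objc
// Called when image capture stops, e.g. when the user leaves the room.
// Uninitialize the Effects object here; skipping this call leaks memory.
- (void)stopCapture {
    [self.effects uninitEnv];
}
```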
Initialize ZEGO Express SDK
Call the createEngineWithProfile interface to initialize ZEGO Express SDK.
```objc
ZegoEngineProfile *profile = [[ZegoEngineProfile alloc] init];
profile.appID = [KeyCenter appID];
profile.appSign = [KeyCenter appSign];
profile.scenario = ZegoScenarioDefault;
[ZegoExpressEngine createEngineWithProfile:profile eventHandler:self];
```

2 Obtain Original Video Data
ZEGO Express SDK can obtain original video data through Custom Video Preprocessing and Custom Video Capture.
The differences between the two obtaining methods are as follows. Developers can choose as needed based on actual situations.
| Data Acquisition Method | Video Data Capture Method | Advantage |
|---|---|---|
| Custom video preprocessing | Video data is captured internally by ZEGO Express SDK, and the original video data is obtained through callbacks. | Combining ZEGO Express SDK and ZEGO Effects SDK is extremely simple. Developers do not need to manage device input sources; they only operate on the original data delivered by ZEGO Express SDK and then pass it back. |
| Custom video capture | Video data is captured by developers themselves and provided to ZEGO Express SDK. | When integrating with multiple vendors, business implementation is more flexible, and there is more room for performance optimization. |
- Method 1: Custom Video Preprocessing

Taking obtaining original video data of the CVPixelBufferRef type as an example:

Call the enableCustomVideoProcessing interface to enable custom video preprocessing. After it is enabled, ZEGO Express SDK captures video data internally; once capture is complete, the captured original video data can be obtained through the onCapturedUnprocessedCVPixelBuffer callback interface.

```objc
ZegoCustomVideoProcessConfig *processConfig = [[ZegoCustomVideoProcessConfig alloc] init];
// Select CVPixelBuffer type video frame data
processConfig.bufferType = ZegoVideoBufferTypeCVPixelBuffer;
// Enable custom video preprocessing
[[ZegoExpressEngine sharedEngine] enableCustomVideoProcessing:YES config:processConfig channel:ZegoPublishChannelMain];
// Set self as the custom video preprocessing callback object
[[ZegoExpressEngine sharedEngine] setCustomVideoProcessHandler:self];
```

For the underlying principle, refer to Custom Video Preprocessing in "Video Call".
- Method 2: Custom Video Capture

With custom video capture, developers capture the video data themselves. For specific methods, refer to Custom Video Capture in "Video Call".
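To illustrate how this method is wired up, here is a sketch of enabling custom capture in ZEGO Express SDK, assuming the `enableCustomVideoCapture` and `setCustomVideoCaptureHandler` interfaces described in the Custom Video Capture documentation:

```objc
// Configure custom video capture to deliver CVPixelBuffer frames
ZegoCustomVideoCaptureConfig *captureConfig = [[ZegoCustomVideoCaptureConfig alloc] init];
captureConfig.bufferType = ZegoVideoBufferTypeCVPixelBuffer;
// Enable custom video capture on the main publish channel
[[ZegoExpressEngine sharedEngine] enableCustomVideoCapture:YES config:captureConfig channel:ZegoPublishChannelMain];
// Set self as the custom capture handler; the SDK notifies it via onStart/onStop
[[ZegoExpressEngine sharedEngine] setCustomVideoCaptureHandler:self];
```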
3 Perform AI Beauty Effects Processing
After obtaining the original video data, pass the data to ZEGO Effects SDK to start AI beauty effects processing on the video (e.g., beauty effects, makeup, background segmentation, etc.).
- Method 1: Custom Video Preprocessing

In the onCapturedUnprocessedCVPixelBuffer callback, after obtaining the original video data, call the relevant ZEGO Effects SDK interfaces to perform AI beauty effects processing (refer to Beauty Effects, Body Reshaping, Background Segmentation, Face Detection, Stickers, and Filters), then return the processed data to ZEGO Express SDK.

```objc
// Taking custom video preprocessing as an example:
// the callback through which the original video data is obtained
- (void)onCapturedUnprocessedCVPixelBuffer:(CVPixelBufferRef)buffer timestamp:(CMTime)timestamp channel:(ZegoPublishChannel)channel {
    ...
    // Custom preprocessing: use ZEGO Effects SDK here; the buffer is processed in place
    [self.effects processImageBuffer:buffer];
    // Send the processed buffer back to ZEGO Express SDK
    [[ZegoExpressEngine sharedEngine] sendCustomVideoProcessedCVPixelBuffer:buffer timestamp:timestamp channel:channel];
    ...
}
```

- Method 2: Custom Video Capture
After receiving the onStart callback for custom capture, developers obtain video data through custom capture, then call the related interfaces of ZEGO Effects SDK to perform AI beauty effects processing (please refer to Beauty Effects, Body Reshaping, Background Segmentation, Face Detection, Stickers, Filters), and return the processed data to ZEGO Express SDK (please refer to "3 Send Video Frame Data to SDK" in Custom Video Capture).
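For this method, the flow inside the capture handler can be sketched as follows, assuming the `onStart` callback and the `sendCustomVideoCapturePixelBuffer` interface described in the Custom Video Capture documentation (the camera-capture helpers here are illustrative placeholders):

```objc
// Called by ZEGO Express SDK when the custom capture should start
- (void)onStart:(ZegoPublishChannel)channel {
    // Start your own camera capture here (illustrative placeholder)
    [self startCameraCapture];
}

// Your own capture pipeline delivers frames here (illustrative method name)
- (void)onCameraFrame:(CVPixelBufferRef)buffer timestamp:(CMTime)timestamp {
    // Process the frame in place with ZEGO Effects SDK
    [self.effects processImageBuffer:buffer];
    // Send the processed frame to ZEGO Express SDK for publishing
    [[ZegoExpressEngine sharedEngine] sendCustomVideoCapturePixelBuffer:buffer timestamp:timestamp channel:ZegoPublishChannelMain];
}
```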
4 Publish Processed Data
After ZEGO Effects SDK finishes processing, return the processed data to ZEGO Express SDK.
Then call the startPublishingStream interface, passing in a streamID, to start publishing the processed stream to the ZEGO cloud server.
```objc
// Start publishing the stream
[[ZegoExpressEngine sharedEngine] startPublishingStream:@"streamID"];
```

5 Pull Processed Data for Playback
After publishing starts, remote users can call the startPlayingStream interface with the streamID of the processed stream to pull and play the video data.

```objc
// Start playing the real-time stream
[[ZegoExpressEngine sharedEngine] startPlayingStream:@"streamID" canvas:[ZegoCanvas canvasWithView:self.view]];
```

At this point, developers can publish and play audio/video streams while adjusting the AI beauty effects in real time.
