Integration with AI Beauty Effects
Usage Guide
Introduction
Real-Time Audio and Video is ZEGO's real-time audio and video interaction service. Developers can build audio and video applications through its flexible and easy-to-use APIs. Another ZEGO product, AI Beauty Effects, is built on leading AI algorithms and provides features such as beauty effects, body retouching, makeup, and stickers. Combining the two makes it easy to add beauty effects to audio and video interaction and build real-time beauty applications.
The combination of the two can be widely used in scenarios such as entertainment live streaming, game live streaming, and video conferencing.
Concept Explanation
- ZEGO Express SDK: ZEGO real-time audio and video SDK, providing basic real-time audio and video functions, including live streaming publish and play, live co-hosting, etc. Hereinafter referred to as ZEGO Express SDK.
- ZEGO Effects SDK: ZEGO AI Beauty SDK, providing multiple intelligent image rendering and algorithm capabilities, including intelligent beauty effects, AR effects, image segmentation, etc. Hereinafter referred to as ZEGO Effects SDK.
Example Source Code
To help developers implement this integration, ZEGO provides example code. Please refer to AI Beauty Effects - Run Example Source Code.
Prerequisites
-
You have created a project in the ZEGOCLOUD Console and obtained a valid AppID and AppSign. For details, please refer to "Project Information" in Console - Project Management. You have also contacted ZEGO Technical Support to enable the ZEGO Effects package service permissions.
-
You have integrated ZEGO Express SDK in your project and implemented basic audio and video streaming functionality. For details, please refer to Quick Start - Integration and Quick Start - Implementing Video Call.
-
You have integrated ZEGO Effects SDK in your project. For details, please refer to "AI Beauty Effects" Quick Start - Integration.
-
You have obtained the unique authentication file for ZEGO Effects SDK. For details, please refer to "AI Beauty Effects" Quick Start - Online Authentication.
Usage Steps
The principle of using ZEGO Effects SDK and ZEGO Express SDK together to perform real-time AI beauty processing on video data is shown in the following figure:

Based on the above process, the specific implementation steps are as follows:
- Initialize ZEGO Effects SDK and ZEGO Express SDK. There is no timing restriction on initialization.
- Obtain original video data, which can be obtained through Custom Video Capture or Custom Video Preprocessing of ZEGO Express SDK.
- Pass the captured original video data to ZEGO Effects SDK for AI beauty processing.
- Pass the processed data to ZEGO Express SDK for publishing. If you need to adjust AI beauty effects during the publish and play process, you can use the relevant functions of ZEGO Effects SDK to make real-time changes.
- Remote users use ZEGO Express SDK to pull and play the processed data.
Initialize ZEGO Effects/Express SDK
There is no timing restriction on the initialization of the two SDKs. The following steps take "initializing ZEGO Effects SDK first, then initializing ZEGO Express SDK" as an example.
Initialize ZEGO Effects SDK
-
Import Effects models and resources.
When using AI-related functions of ZEGO Effects SDK, you must first import AI models and resources.
// Pass in the absolute path of the face recognition model. Face detection, big eyes, and face slimming functions all need to import it
ArrayList<String> aiResources = new ArrayList<>();
aiResources.add("sdcard/xxx/xxxxx/FaceDetectionModel.model");
aiResources.add("sdcard/xxx/xxxxx/SegmentationModel.model");
// Pass in the absolute paths of the resources
aiResources.add("sdcard/xxx/xxxxx/CommonResources.bundle");
aiResources.add("sdcard/xxx/xxxxx/PendantResources.bundle");
aiResources.add("sdcard/xxx/xxxxx/FaceWhiteningResources.bundle");
...
// Pass in the path list of resources and models; must be called before create
ZegoEffects.setResources(aiResources);
For all resources and models supported by ZEGO Effects SDK, please refer to "AI Beauty Effects" Quick Start - Import Resources and Models.
-
Create Effects object.
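A minimal sketch of this step, assuming the Android Effects SDK's `ZegoEffects.create(license, application)` signature; `license` is a placeholder for the authentication string obtained during online authentication (see the prerequisites):

```java
// "license" is a placeholder for the authentication string returned by ZEGO's authentication server
// Create the Effects object; call this after setResources
ZegoEffects effects = ZegoEffects.create(license, getApplication());
```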
-
Initialize Effects object.
Call the initEnv interface to initialize the Effects object. You need to pass in the width and height of the video image data to be processed.
Taking processing 1280 × 720 video images as an example:
// Initialize the Effects object, passing in the width and height of the original image to be processed.
// Initialization needs to be done in the onStart callback of custom video preprocessing; express is the Express engine object created later
express.setCustomVideoProcessHandler(new IZegoCustomVideoProcessHandler() {
    @Override
    public void onStart(ZegoPublishChannel channel) {
        // After SDK 1.4.7, this interface can be omitted. If you want to call it, please open the preview first, then turn on the camera
        effects.initEnv(1280, 720);
    }
});
Initialize ZEGO Express SDK
Call the createEngine interface to initialize ZEGO Express SDK.
// Define SDK engine object
ZegoExpressEngine express;
ZegoEngineProfile profile = new ZegoEngineProfile();
// Please obtain through official website registration, format is 123456789L
profile.appID = appID;
// Please obtain through official website registration, format is: "0123456789012345678901234567890123456789012345678901234567890123" (64 characters in total)
profile.appSign = appSign;
// General scenario access
profile.scenario = ZegoScenario.DEFAULT;
// Set app's application object
profile.application = getApplication();
// Create engine
express = ZegoExpressEngine.createEngine(profile, null);
Obtain Original Video Data
ZEGO Express SDK can obtain original video data through two methods: Custom Video Preprocessing and Custom Video Capture.
The difference between the two acquisition methods is as follows. Developers can choose as needed:
| Data acquisition method | Video data acquisition method | Advantage |
|---|---|---|
| Custom video preprocessing | ZEGO Express SDK internally captures video data, and original video data is obtained through callbacks. | Extremely simple integration of ZEGO Express SDK and ZEGO Effects SDK. Developers do not need to manage device input sources, only need to operate on the original data thrown by ZEGO Express SDK and then pass it back to ZEGO Express SDK. |
| Custom video capture | Developers capture video data themselves and provide it to ZEGO Express SDK. | When integrating multiple manufacturers, business implementation is more flexible, and there is more room for performance optimization. |
-
Method 1: Custom Video Preprocessing
Taking obtaining GL_TEXTURE_2D type original video data as an example.
Developers call the enableCustomVideoProcessing interface to enable custom video preprocessing; after enabling, ZEGO Express SDK will internally capture video data; after capture is complete, the captured original video data can be obtained through the onCapturedUnprocessedTextureData callback interface.
ZegoCustomVideoProcessConfig config = new ZegoCustomVideoProcessConfig();
// Select GL_TEXTURE_2D type video frame data
config.bufferType = ZegoVideoBufferType.GL_TEXTURE_2D;
// Enable custom video preprocessing
express.enableCustomVideoProcessing(true, config, ZegoPublishChannel.MAIN);
For the specific principle, please refer to "Real-time Audio and Video" Custom Video Preprocessing.
-
Method 2: Custom Video Capture
For custom video capture, developers mainly need to capture video data themselves. For specific methods, please refer to "Real-time Audio and Video" Custom Video Capture.
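For reference, the following is a minimal sketch of enabling custom video capture, assuming GL_TEXTURE_2D frames and the `enableCustomVideoCapture` / `setCustomVideoCaptureHandler` interfaces of ZEGO Express SDK; the camera helper calls are hypothetical placeholders:

```java
ZegoCustomVideoCaptureConfig captureConfig = new ZegoCustomVideoCaptureConfig();
// Select GL_TEXTURE_2D type video frame data
captureConfig.bufferType = ZegoVideoBufferType.GL_TEXTURE_2D;
// Enable custom video capture for the main publish channel
express.enableCustomVideoCapture(true, captureConfig, ZegoPublishChannel.MAIN);
// The SDK notifies you when to start and stop your own capture pipeline
express.setCustomVideoCaptureHandler(new IZegoCustomVideoCaptureHandler() {
    @Override
    public void onStart(ZegoPublishChannel channel) {
        // Start your own camera capture here (hypothetical helper)
        // startCameraCapture();
    }

    @Override
    public void onStop(ZegoPublishChannel channel) {
        // Stop your own camera capture here (hypothetical helper)
        // stopCameraCapture();
    }
});
```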
Perform AI Beauty Processing
After obtaining the original video data, pass the data to ZEGO Effects SDK to start AI beauty processing (such as: beauty effects, makeup, background segmentation, etc.) on the video.
-
Method 1: Custom Video Preprocessing
In the onCapturedUnprocessedTextureData callback, after obtaining the original video data, call the relevant interfaces of ZEGO Effects SDK to perform AI beauty processing (please refer to Beauty Effects, Shape Retouch, Background Segmentation, Face Detection, Stickers, Filters), and return the processed data to ZEGO Express SDK.
// Taking custom video preprocessing as an example
// Initialize and de-initialize Effects in the start/stop callbacks of Express video preprocessing
express.setCustomVideoProcessHandler(new IZegoCustomVideoProcessHandler() {
    @Override
    public void onStart(ZegoPublishChannel channel) {
        effects.initEnv(720, 1280);
    }

    // Must de-initialize, otherwise it will cause a memory leak
    @Override
    public void onStop(ZegoPublishChannel channel) {
        effects.uninitEnv();
    }

    // Callback method to obtain the original texture data
    @Override
    public void onCapturedUnprocessedTextureData(int textureID, int width, int height, long referenceTimeMillisecond, ZegoPublishChannel channel) {
        ZegoEffectsVideoFrameParam param = new ZegoEffectsVideoFrameParam();
        param.format = ZegoEffectsVideoFrameFormat.RGBA32;
        param.width = width;
        param.height = height;
        // Custom preprocessing: use ZEGO Effects SDK here
        int processedTextureID = effects.processTexture(textureID, param);
        // Pass the processed texture back to ZEGO Express SDK
        express.sendCustomVideoProcessedTextureData(processedTextureID, width, height, referenceTimeMillisecond);
    }
});
-
Method 2: Custom Video Capture
After receiving the custom capture onStart callback, developers obtain video data through custom capture, then call the relevant interfaces of ZEGO Effects SDK to perform AI beauty processing (please refer to Beauty Effects, Shape Retouch, Background Segmentation, Face Detection, Stickers, Filters), and return the processed data to ZEGO Express SDK (can refer to Custom Video Capture in "Send video frame data to SDK").
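A minimal sketch of this step, assuming your own capture pipeline supplies `textureID`, `width`, `height`, and `referenceTimeMillisecond`, and that the frame is sent through the `sendCustomVideoCaptureTextureData` interface of ZEGO Express SDK:

```java
// Describe the captured frame for ZEGO Effects SDK
ZegoEffectsVideoFrameParam param = new ZegoEffectsVideoFrameParam();
param.format = ZegoEffectsVideoFrameFormat.RGBA32;
param.width = width;
param.height = height;
// Perform AI beauty processing on the captured texture
int processedTextureID = effects.processTexture(textureID, param);
// Send the processed texture to ZEGO Express SDK for publishing
express.sendCustomVideoCaptureTextureData(processedTextureID, width, height, referenceTimeMillisecond);
```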
Publish Processed Data
After processing by ZEGO Effects SDK is completed, return the processed data to ZEGO Express SDK.
Call the startPublishingStream interface of ZEGO Express SDK, pass in the streamID of the processed stream, start publishing, and send the stream to the ZEGO cloud server.
// Start publishing stream
express.startPublishingStream("streamID");
Pull and Play Processed Data
After publishing starts, remote users can call the startPlayingStream interface, pass in the corresponding streamID, pull the processed video data, and play it.
/**
* Start playing stream, set remote playing stream rendering view, view mode uses SDK default mode, proportional scaling fills the entire View
* The following play_view is a SurfaceView/TextureView/SurfaceTexture object on the UI interface
*/
express.startPlayingStream("streamID", new ZegoCanvas(play_view));
At this point, developers can adjust AI beauty effects in real time while publishing and playing audio and video streams.
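For example, beauty parameters can be changed at any time during the publish and play process. The following is a minimal sketch, assuming the Effects SDK's skin-smoothing interfaces (`enableSmooth`, `setSmoothParam`, `ZegoEffectsSmoothParam`) are available in your SDK version; check the API reference for the exact names and value ranges:

```java
// Hypothetical sketch: change the skin-smoothing strength while the stream is live
ZegoEffectsSmoothParam smoothParam = new ZegoEffectsSmoothParam();
smoothParam.intensity = 80; // assumed range 0-100; larger means stronger smoothing
effects.enableSmooth(true);
effects.setSmoothParam(smoothParam);
```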
