Object Segmentation
Feature Overview
Object Segmentation is a value-added capability provided by the Express SDK. It uses AI algorithms to identify content in video frames and set transparency information for each pixel. Pixels in the main subject area are set to "opaque", while pixels outside the main subject area are set to "transparent". Developers can use this transparency information to apply different processing to the main subject and background areas, enabling various features.
- The official SDK does not include "Object Segmentation" related features. If needed, please contact ZEGOCLOUD Technical Support for a special build and provide your AppID to enable the relevant permissions.
- "Object Segmentation" is a paid feature. Please contact ZEGOCLOUD sales for trial access or official pricing information.
Object Segmentation Types
For users in different environments, ZEGO provides two segmentation capabilities: "Green Screen Background Segmentation" and "Arbitrary Background Segmentation".
| Segmentation Type | Green Screen Background Segmentation | Arbitrary Background Segmentation |
|---|---|---|
| Capability Description | When users have set up a green screen, the main subject in non-green screen areas can be retained. Suitable for e-commerce live streaming, online exams, and other scenarios. | Most users don't have the conditions to set up a green screen. Through ZEGO's arbitrary background segmentation capability, the main subject in the frame can be identified without a green screen. Suitable for online education, video conferences, and other scenarios. |
| Illustration | ![]() | ![]() |
Feature Scenarios
Based on the Object Segmentation capability, developers can implement scenarios such as background blur, virtual background, presenter mode, and multi-user real-time interaction, creating more diverse interactive experiences.
| Feature | Background Blur | Virtual Background | Transparent Background | Object Segmentation and Transmission |
|---|---|---|---|---|
| Feature Description | Blur the area outside the main subject. | Replace the area outside the main subject with custom images or colors. | Render the main subject over other video content locally. For example, implement presenter mode over screen sharing or playing video content. | Combine with the Alpha channel data transmission capability provided by Express SDK to transmit the segmented main subject to the playing stream side, where subject rendering is performed, achieving the visual effect of multiple users in different locations appearing in the same scene in real-time. |
| Illustration | ![]() | ![]() | ![]() | ![]() |
Hardware Compatibility
| Platform | Hardware Requirements |
|---|---|
| Android | |
| iOS | A-series chips: Apple A9 or later, e.g., iPhone 6s and newer |

When using the React Native SDK on the above platforms, the same hardware compatibility requirements apply.
Prerequisites
Before using the Object Segmentation feature, ensure that:
- You have contacted ZEGOCLOUD Technical Support to obtain a special SDK build.
- You have created a project in the ZEGOCLOUD Console and obtained a valid AppID and AppSign. For details, refer to Console - Project Information.
- You have integrated the ZEGO Express SDK into your project and implemented basic audio/video stream publishing and playing. For details, refer to Quick Start - Integration and Quick Start - Implementation.
Implementation Flow
- Enabling Object Segmentation will consume additional system resources. To ensure user experience, currently only one channel's published stream can have Object Segmentation enabled.
- If using custom pre-processing third-party filters, ensure that the third-party filters support Alpha channel pass-through functionality.
Developers can decide whether to implement the steps marked (Optional) in the diagram above based on their business scenario. For implementation details, refer to the specific instructions below.
Initialization and Room Login
For the specific flow of initialization and room login, refer to "Create Engine" and "Login Room" in the Implementing Video Call documentation.
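As a minimal sketch of that flow (assuming the standard zego-express-engine-reactnative API; the AppID, AppSign, room ID, and user ID values here are placeholders, not real credentials):

```javascript
// Minimal engine-creation and room-login sketch. The profile values are
// placeholders; replace them with the AppID/AppSign from your ZEGOCLOUD Console.
const profile = {
  appID: 123456789,      // placeholder AppID
  appSign: '<app_sign>', // placeholder AppSign
  scenario: 0,           // ZegoScenario.General
};

async function initAndLogin(roomID, userID) {
  // createEngineWithProfile returns the singleton engine instance
  const engine = await ZegoExpressEngine.createEngineWithProfile(profile);
  // Log in to the room before previewing or publishing
  engine.loginRoom(roomID, { userID: userID, userName: userID });
  return engine;
}
```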
Listen for Object Segmentation State Callback
Register the videoObjectSegmentationStateChanged event to listen for Object Segmentation state callbacks.

The Object Segmentation state callback depends on preview or stream publishing having started. That is, to receive the videoObjectSegmentationStateChanged callback, you must first call startPreview or startPublishingStream.
```javascript
ZegoExpressEngine.instance().on('videoObjectSegmentationStateChanged', (state, channel, errorCode) => {
    console.log("JS videoObjectSegmentationStateChanged: " + state + " channel: " + channel + " errorCode: " + errorCode);
});
```

Use Object Segmentation to Implement Different Business Features
If developers need to update the Object Segmentation type or background processing type, they need to modify the ZegoObjectSegmentationConfig configuration and call the enableVideoObjectSegmentation interface again to enable Object Segmentation. The Object Segmentation effect will be updated, and the result will be notified to developers through the videoObjectSegmentationStateChanged callback.
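For example, switching at runtime from background blur to a virtual background only requires assembling a new config and calling enableVideoObjectSegmentation again. A hedged sketch follows; the buildSegmentationConfig helper is our own illustration, not an SDK API:

```javascript
// Hypothetical helper (not part of the SDK): assemble a config object shaped
// like ZegoObjectSegmentationConfig from a segmentation type, a background
// process type, and any extra background fields (imageURL, blurLevel, ...).
function buildSegmentationConfig(segmentationType, processType, extra = {}) {
  return {
    objectSegmentationType: segmentationType,
    backgroundConfig: { processType: processType, ...extra },
  };
}

// Re-applying a new config updates the segmentation effect; the result is
// reported through the videoObjectSegmentationStateChanged callback.
// (SDK call shown for context only.)
// const config = buildSegmentationConfig(
//   ZegoObjectSegmentationType.AnyBackground,
//   ZegoBackgroundProcessType.Image,
//   { imageURL: "<image_path>" }
// );
// ZegoExpressEngine.instance().enableVideoObjectSegmentation(true, config, ZegoPublishChannel.Main);
```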
Background Blur
Call the enableVideoObjectSegmentation interface to enable Object Segmentation and set the background processing type to "Blur".
```javascript
let config = new ZegoObjectSegmentationConfig();
config.objectSegmentationType = ZegoObjectSegmentationType.AnyBackground; // Choose the segmentation type for your scenario
config.backgroundConfig.processType = ZegoBackgroundProcessType.Blur;     // Set the background processing mode to blur
config.backgroundConfig.blurLevel = ZegoBackgroundBlurLevel.Medium;       // Set the background blur level to medium
ZegoExpressEngine.instance().enableVideoObjectSegmentation(true, config, ZegoPublishChannel.Main); // Enable Object Segmentation
```

Virtual Background
- When using this feature, pay attention to the aspect ratio of custom images; otherwise, parts of the image that exceed the view will be cropped.
- Currently supports "PNG" and "JPEG" image formats, i.e., image files with ".png", ".jpg", and ".jpeg" extensions.
Call the enableVideoObjectSegmentation interface to enable Object Segmentation and set the background processing type to "Image".
```javascript
let config = new ZegoObjectSegmentationConfig();
config.objectSegmentationType = ZegoObjectSegmentationType.AnyBackground; // Choose the segmentation type for your scenario
config.backgroundConfig.processType = ZegoBackgroundProcessType.Image;    // Set the background processing mode to image
config.backgroundConfig.imageURL = "<image_path>";                        // Set the background image path
ZegoExpressEngine.instance().enableVideoObjectSegmentation(true, config, ZegoPublishChannel.Main); // Enable Object Segmentation
```

Transparent Background
If developers need to implement business functions similar to "Presenter Mode", they need to mix the "main subject video" with "video content to be mixed" into a single video stream on the business side.
Call the enableVideoObjectSegmentation interface to enable Object Segmentation and set the background processing type to "Transparent".
```javascript
let config = new ZegoObjectSegmentationConfig();
config.objectSegmentationType = ZegoObjectSegmentationType.AnyBackground;   // Choose the segmentation type for your scenario
config.backgroundConfig.processType = ZegoBackgroundProcessType.Transparent; // Set the background processing mode to transparent
ZegoExpressEngine.instance().enableVideoObjectSegmentation(true, config, ZegoPublishChannel.Main); // Enable Object Segmentation
```

(Optional) Use Alpha Channel to Transmit Segmented Subject
If the publishing side needs to transmit the segmented subject video to the playing side through the Alpha channel for subject rendering there, first call the enableAlphaChannelVideoEncoder interface so that the encoder supports the transparent channel, then call the startPublishingStream interface to publish the stream.
Currently only supports transparent channel data arranged below RGB or YUV data.
- Enable Alpha channel data transmission:

```javascript
let layoutType = ZegoAlphaLayoutType.Bottom; // Transparent channel data arranged below the RGB or YUV data
ZegoExpressEngine.instance().enableAlphaChannelVideoEncoder(true, layoutType, ZegoPublishChannel.Main); // Enable encoder support for the transparent channel
```

- Disable Alpha channel data transmission:

```javascript
let layoutType = ZegoAlphaLayoutType.Bottom; // Transparent channel data arranged below the RGB or YUV data
ZegoExpressEngine.instance().enableAlphaChannelVideoEncoder(false, layoutType, ZegoPublishChannel.Main); // Disable encoder support for the transparent channel
```
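To make the Bottom layout concrete: the alpha data is appended below the color data, so the combined frame is taller than the color frame alone. The sketch below illustrates this layout assumption for the simplest case of one byte per pixel per plane (the splitFrame helper is our own illustration, not an SDK API; real YUV plane sizes differ):

```javascript
// Illustrative only: with the Bottom layout, the alpha plane sits below the
// color data, so the combined frame is twice the original height. Assumes a
// single 8-bit plane per pixel; splitFrame is a hypothetical helper.
function splitFrame(buffer, width, combinedHeight) {
  const colorHeight = combinedHeight / 2;    // top half: color plane
  const colorBytes = width * colorHeight;
  return {
    color: buffer.subarray(0, colorBytes),   // color plane
    alpha: buffer.subarray(colorBytes),      // bottom half: alpha plane
  };
}

// Example: a 4x2 color frame transmitted as a 4x4 combined frame
const combined = new Uint8Array(4 * 4).fill(128); // color bytes
combined.fill(255, 4 * 2);                        // fully opaque alpha in bottom half
const { color, alpha } = splitFrame(combined, 4, 4);
```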
Start Preview and Publish Stream
After enabling the Object Segmentation feature through the enableVideoObjectSegmentation interface, you can start preview.
Developers can also start preview first, then enable Object Segmentation. This article introduces enabling Object Segmentation first, then starting preview.
```javascript
let view = {"reactTag": findNodeHandle(this.refs.zego_preview_view), "viewMode": 0, "backgroundColor": 0};
view.alphaBlend = true; // Enable internal Alpha blending so the segmented subject blends with the background layer
ZegoExpressEngine.instance().startPreview(view);
ZegoExpressEngine.instance().startPublishingStream(streamID);
```

(Optional) Set Alpha Channel Rendering on Playing Stream Side and Start Playing Stream
Alpha channel rendering on the playing stream side is only required when the publishing stream side has enabled Alpha channel transmission.
```javascript
let view = {"reactTag": findNodeHandle(this.refs.zego_play_view), "viewMode": 0, "backgroundColor": 0};
view.alphaBlend = true; // Enable internal Alpha blending so the segmented subject blends with the background layer
ZegoExpressEngine.instance().startPlayingStream(streamID, view);
```

Disable Object Segmentation
Call the enableVideoObjectSegmentation interface to disable Object Segmentation.
```javascript
let objectType = ZegoObjectSegmentationType.AnyBackground; // Choose the segmentation type to disable for your scenario
ZegoExpressEngine.instance().enableVideoObjectSegmentation(false, objectType, ZegoPublishChannel.Main); // Disable Object Segmentation
```

Destroy Engine
Call the destroyEngine interface to destroy the engine.
```javascript
ZegoExpressEngine.destroyEngine();
```