
Custom Video Capture

2024-12-09

Feature Overview

Custom video capture refers to the functionality where developers capture video themselves and provide the video data to the ZEGO Express SDK, which then encodes and publishes the stream. When custom video capture is enabled, the ZEGO Express SDK still renders the local preview on the publishing side by default, so users do not need to implement rendering themselves.

It is recommended to use the SDK's custom video capture feature when the following situations occur in your business:

  • The developer's App uses a third-party beauty effects vendor's beauty SDK, which can directly interface with the ZEGO Express SDK's custom video capture functionality. That is, the third-party beauty effects vendor's beauty SDK is responsible for video data capture and pre-processing, while the ZEGO Express SDK is responsible for video data encoding and publishing the audio/video stream to the ZEGO audio/video cloud.
  • During live streaming, the developer needs to use the camera for additional functions that conflict with the ZEGO Express SDK's default video capture logic, causing the camera to not work properly. For example, recording a short video in the middle of a live stream.
  • The live stream content is not captured by a camera. For example, local video file playback, screen sharing, game streaming, etc.

Example Source Code Download

Please refer to Download Example Source Code to get the source code.

For related source code, please view files in the "/ZegoExpressExample/Examples/AdvancedVideoProcessing/CustomVideoCapture" directory.

Prerequisites

Before implementing custom video capture, ensure that:

Usage Steps

The API call sequence is as follows:

  1. Create the ZegoExpressEngine engine.
  2. Call the enableCustomVideoCapture interface to enable custom video capture functionality.
  3. Call the setCustomVideoCaptureHandler interface to set the custom video capture callback object; and implement the corresponding onStart and onStop callback methods to receive notifications for custom video capture start and end, respectively.
  4. After logging in to the room and starting preview/publishing, the onStart custom video capture start callback notification will be triggered.
  5. Call startCapture to start capturing video; or implement your own video capture related business logic, such as enabling the camera, etc.
  6. Call the sendCustomVideoCapturePixelBuffer, sendCustomVideoCaptureTextureData, and sendCustomVideoCaptureEncodedData interfaces to send video frame data to the SDK.
  7. Finally, ending preview/publishing will trigger the onStop custom video capture end callback notification.
  8. Call stopCapture to stop capturing video.
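The steps above can be condensed into the following sketch (the stream ID is a placeholder, and `self` is assumed to conform to `ZegoCustomVideoCaptureHandler`; it is an outline of the sequence, not a complete implementation):

```objc
// 1-2. After creating the engine, enable custom video capture
ZegoCustomVideoCaptureConfig *config = [[ZegoCustomVideoCaptureConfig alloc] init];
config.bufferType = ZegoVideoBufferTypeCVPixelBuffer;
[[ZegoExpressEngine sharedEngine] enableCustomVideoCapture:YES
                                                    config:config
                                                   channel:ZegoPublishChannelMain];

// 3. Set the capture callback object (self implements onStart/onStop)
[[ZegoExpressEngine sharedEngine] setCustomVideoCaptureHandler:self];

// 4. Logging in to the room and publishing triggers the onStart callback,
//    in which you start your own capture device (steps 5-6);
//    stopping the stream later triggers onStop (steps 7-8).
[[ZegoExpressEngine sharedEngine] startPublishingStream:@"stream1"];
```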
Caution

When enabling custom video capture functionality, keep the enableCamera interface at its default value (YES); otherwise there will be no video data to publish.

1 Enable Custom Video Capture

  1. After initializing the SDK, create a custom video capture object ZegoCustomVideoCaptureConfig and set the bufferType property to provide the video frame data type to the SDK.
  2. Call the enableCustomVideoCapture interface to enable custom video capture functionality.
Note

Because iOS capture sources vary, the SDK supports multiple video frame data types (bufferType), and developers need to tell the SDK which data type they are using. The iOS SDK currently supports the following video frame data types:

  • ZegoVideoBufferTypeCVPixelBuffer: CVPixelBuffer type, supporting multiple video data formats, such as RGBA32, I420, NV12, etc.
  • ZegoVideoBufferTypeGLTexture2D: OpenGL Texture 2D type.
  • ZegoVideoBufferTypeEncodedData: Encoded type.

If bufferType is set to any other enum value, video frame data cannot be provided to the SDK.

ZegoCustomVideoCaptureConfig *captureConfig = [[ZegoCustomVideoCaptureConfig alloc] init];
// Select CVPixelBuffer type video frame data
captureConfig.bufferType = ZegoVideoBufferTypeCVPixelBuffer;

[[ZegoExpressEngine sharedEngine] enableCustomVideoCapture:YES config:captureConfig channel:ZegoPublishChannelMain];

2 Set Custom Video Capture Callback

Set custom video capture callback object

Set ViewController as the custom video capture callback object, conforming to the ZegoCustomVideoCaptureHandler protocol.

@interface ViewController () <ZegoEventHandler, ZegoCustomVideoCaptureHandler>

    ......

@end

Call the setCustomVideoCaptureHandler interface to set the custom video capture callback.

// Set self as the custom video capture callback object
[[ZegoExpressEngine sharedEngine] setCustomVideoCaptureHandler:self];

Implement custom video capture callback methods

Implement the onStart and onStop custom capture callback methods.

Note

When custom-capturing multiple streams, you need to specify the publishing channel in the onStart and onStop callbacks; otherwise, only notifications for the main channel are received by default.

// Note: This callback is not on the main thread. Please switch to the main thread yourself if there are UI operations
- (void)onStart {

    // After receiving the callback, developers need to execute business logic related to starting capture, such as enabling the camera, etc.

    // This example starts a self-implemented video capture device
    [self.captureDevice startCapture];
}

// Note: This callback is not on the main thread. Please switch to the main thread yourself if there are UI operations
- (void)onStop {

    // After receiving the callback, developers need to execute business logic related to stopping capture, such as disabling the camera, etc.

    // This example stops a self-implemented video capture device
    [self.captureDevice stopCapture];
}

3 Send Video Frame Data to SDK

After you call the start preview interface startPreview or the start publishing interface startPublishingStream, the onStart callback is triggered. Once your capture business logic has started, call the corresponding send interface to pass video frame data to the SDK. Each send interface corresponds one-to-one with the video frame data type bufferType set in 1 Enable Custom Video Capture:

  • CVPixelBuffer type: bufferType is ZegoVideoBufferTypeCVPixelBuffer, send interface is sendCustomVideoCapturePixelBuffer.
  • OpenGL Texture 2D type: bufferType is ZegoVideoBufferTypeGLTexture2D, send interface is sendCustomVideoCaptureTextureData.
  • Encoded type: bufferType is ZegoVideoBufferTypeEncodedData, send interface is sendCustomVideoCaptureEncodedData.
Caution

During external capture, if the data sent to the SDK through the sendCustomVideoCaptureEncodedData interface is already-encoded video frame data, the SDK only transmits it and cannot render a local preview. Developers need to implement the preview themselves, and pre-processing effects such as watermarks will not take effect.
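As a rough sketch of the encoded path: the encoded buffer, key-frame flag, and timestamp below are hypothetical values from your own encoder, and the ZegoVideoEncodedFrameParam property names and the selector should be verified against your SDK version's headers.

```objc
// Hypothetical: encodedData holds one H.264 frame produced by your own encoder
ZegoVideoEncodedFrameParam *param = [[ZegoVideoEncodedFrameParam alloc] init];
param.format = ZegoVideoEncodedFrameFormatAVCC; // or AnnexB, matching your encoder output
param.isKeyFrame = isKeyFrame;                  // YES for IDR frames
param.width = 1280;
param.height = 720;

// The SDK only transmits this data; no local preview is rendered
[[ZegoExpressEngine sharedEngine] sendCustomVideoCaptureEncodedData:encodedData
                                                             params:param
                                                          timestamp:presentationTime];
```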

Taking receiving the camera video frame callback and calling the sendCustomVideoCapturePixelBuffer interface to send video frame data to the SDK as an example:

#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CMTime timeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Send custom captured video frame CVPixelBuffer data to the SDK
    [[ZegoExpressEngine sharedEngine] sendCustomVideoCapturePixelBuffer:buffer timeStamp:timeStamp];
}

After stopping publishing/previewing, when leaving the room or destroying the engine, the onStop callback will be triggered, and developers can stop capture-related business logic, such as disabling the camera, etc.

4 (Optional) Set Custom Capture Device State

  1. After receiving the onStart callback, you can call the setCustomVideoCaptureDeviceState interface as needed to specify the custom capture device state for the publishing channel.

  2. The playing side can get the publishing side's device state by listening to the onRemoteCameraStateUpdate callback.
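Both sides of this step can be sketched as follows (selector and parameter names follow the iOS SDK headers we assume here; verify them against your SDK version):

```objc
// Publishing side: after receiving onStart, report the custom capture device state
[[ZegoExpressEngine sharedEngine] setCustomVideoCaptureDeviceState:YES
                                                             state:ZegoRemoteDeviceStateOpen
                                                           channel:ZegoPublishChannelMain];

// Playing side: ZegoEventHandler callback delivering the remote device state
- (void)onRemoteCameraStateUpdate:(ZegoRemoteDeviceState)state streamID:(NSString *)streamID {
    if (state == ZegoRemoteDeviceStateOpen) {
        // The remote publisher's custom capture device is working
    }
}
```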

Caution
  • Setting the device state to ZegoRemoteDeviceStateDisable or ZegoRemoteDeviceStateMute through the setCustomVideoCaptureDeviceState interface on the publishing side does not take effect; the playing side will not receive the corresponding onRemoteCameraStateUpdate callback.
  • When the publishing side disables the camera through enableCamera, the playing side will receive the ZegoRemoteDeviceStateDisable state through the onRemoteCameraStateUpdate callback, and there is no video data for publishing at this time.
  • When the publishing side stops sending the video stream through the mutePublishStreamVideo interface, the playing side will receive the ZegoRemoteDeviceStateMute state through the onRemoteCameraStateUpdate callback.

FAQ

  1. How to use "ZegoVideoBufferTypeGLTexture2D" to pass capture data?

    1. Set the bufferType of ZegoCustomVideoCaptureConfig to ZegoVideoBufferTypeGLTexture2D.
    2. Call the sendCustomVideoCaptureTextureData interface to send video frame data to the SDK.
  2. When using custom video capture, the local preview image is normal, but the image seen by the audience after publishing is distorted. How should this be handled?

    This problem is caused by the image ratio captured by custom video capture being inconsistent with the SDK's default resolution ratio. For example, the video frame image ratio captured by custom video capture is 4:3, while the SDK's default publishing image resolution ratio is 16:9.

    There are the following solutions:

    • Solution 1: Manually modify the custom video capture's video resolution ratio to 16:9.

    • Solution 2: Call the setVideoConfig interface to customize the SDK's publishing resolution ratio to 4:3.

    • Solution 3: Call the setCustomVideoCaptureFillMode interface to set the video capture image scaling fill mode to "ZegoViewModeAspectFit" (aspect ratio scaling, may have black edges) or "ZegoViewModeAspectFill" (aspect ratio fill, may have some image cropped).

  3. After enabling custom video capture, what should I do if the captured frame rate is inconsistent with the frame rate on the playing side?

    This can be handled in the following ways:

    • When sending ZegoVideoBufferTypeCVPixelBuffer type video frame data to the SDK, the frame rate set by the setVideoConfig interface and the frame rate of the video data provided by calling the custom capture sendCustomVideoCapturePixelBuffer interface need to be consistent.
    • When sending ZegoVideoBufferTypeGLTexture2D type video frame data to the SDK, the frame rate set by the setVideoConfig interface and the frame rate of the video data provided by calling the custom capture sendCustomVideoCaptureTextureData interface need to be consistent.
  4. Does the SDK's video frame data receiving method process the passed-in data synchronously or asynchronously?

    After receiving video frame data, the SDK will first synchronously copy the data, and then asynchronously perform operations such as encoding, so the data can be released immediately after being passed to the SDK.

  5. How to implement video rotation during custom video capture?

    You can refer to the following two ways to implement landscape/portrait switching:

    • Process video frame data yourself: In the device orientation change callback, perform rotation processing on the captured video frame data, then pass the processed data to the SDK through the sendCustomVideoCapturePixelBuffer interface.
    • Let the SDK process video frame data: In the device orientation change callback, before passing the captured video frame data to the SDK, set "rotation" in ZegoVideoEncodedFrameParam according to the actual orientation, call the sendCustomVideoCaptureEncodedData interface to pass in the video frame data and orientation-setting parameters, and pass the data to the SDK.
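For FAQ 1, a minimal sketch of sending an OpenGL texture follows. The texture ID, dimensions, and timestamp are hypothetical values from your own GL capture pipeline, and the selector should be verified against your SDK version:

```objc
// Assumes enableCustomVideoCapture was called with
// bufferType = ZegoVideoBufferTypeGLTexture2D.
// capturedTextureID is a hypothetical 2D texture from your own GL pipeline.
GLuint textureID = capturedTextureID;
CMTime frameTime = CMTimeMakeWithSeconds(CACurrentMediaTime(), 1000);

[[ZegoExpressEngine sharedEngine] sendCustomVideoCaptureTextureData:textureID
                                                               size:CGSizeMake(1280, 720)
                                                          timeStamp:frameTime];
```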
