Set Video Encoding Method

2024-01-02

Overview

When developers publish or play video streams, they can set detailed encoding/decoding configurations, including enabling layered video encoding (Simulcast), using hardware encoding/decoding, and setting encoding methods.

Layered Video Encoding (Simulcast)

Layered video encoding, also known as Simulcast, divides the bitstream into a base layer and an enhancement layer. The base layer guarantees a minimum video quality, while the enhancement layer supplements it. Users on good networks can play the enhancement layer for a better experience; users on poor networks can play only the base layer to preserve basic video quality. This encoding method therefore provides a better experience for users across different network conditions.

Layered video encoding is recommended when a co-hosting or stream mixing business encounters any of the following situations:

  • Need to display video streams of different quality on different terminals.
  • Need to maintain smooth co-hosting in poor network environments.
  • Need to adaptively play video stream quality according to network conditions.
Note

Layered video encoding uses ZEGO's private protocol. The playing stream end can only play different layered video streams from ZEGO servers.

Hardware Encoding/Decoding

Developers can choose to enable hardware encoding and hardware decoding. Once enabled, the GPU handles encoding/decoding, which reduces CPU usage. If some devices heat up severely when publishing or playing high-resolution audio and video streams, enabling hardware encoding/decoding can help.

Video Encoding Method

Developers can configure the video encoding method to align encoding between different ends and thereby achieve multi-terminal interoperability.

Usage scenarios:

  • Generally, use the default encoding.
  • If you need to reduce bitrate under the same resolution and frame rate, you can use H.265.
  • If you need to interoperate with mini-programs, you need to use H.264.
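The three rules above can be sketched as a small selection helper. This is a hypothetical function, not part of the ZEGO SDK; the returned strings mirror ZegoVideoCodecID enum member names, and 'H265' is assumed from the H.265 scenario above.

```javascript
// Hypothetical helper: pick a codec ID string from the usage scenarios above.
// Not a ZEGO SDK API; an app would map the result onto the real enum.
function chooseCodecID({ needsMiniProgramInterop = false, preferLowerBitrate = false } = {}) {
  // Mini-program interoperability requires H.264, the default encoding.
  if (needsMiniProgramInterop) return 'Default';
  // H.265 reduces bitrate at the same resolution and frame rate.
  if (preferLowerBitrate) return 'H265';
  // Otherwise, use the default encoding.
  return 'Default';
}

console.log(chooseCodecID({ needsMiniProgramInterop: true })); // Default
console.log(chooseCodecID({ preferLowerBitrate: true }));      // H265
```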

Prerequisites

Before implementing video encoding/decoding functions, please ensure:

Implementation Steps

1 Layered Video Encoding

Using layered video encoding requires the following two steps:

  • Before publishing the stream, enable layered video encoding by specifying the SVC encoder.
  • When playing the stream, specify which video layer to play.

Enable Layered Video Encoding

Before publishing the stream (startPublishingStream), call setVideoConfig and set the "codecID" parameter of the ZegoVideoConfig class to enable or disable layered video encoding.

  • Setting "codecID" to "ZegoVideoCodecID.SVC" enables this function.
  • Setting "codecID" to "ZegoVideoCodecID.Default" or "ZegoVideoCodecID.VP8" disables this function.
let videoConfig = new ZegoVideoConfig();
videoConfig.codecID = ZegoVideoCodecID.SVC;
ZegoExpressEngine.instance().setVideoConfig(videoConfig);

let streamID = "MultiLayer-1";
ZegoExpressEngine.instance().startPublishingStream(streamID);

Specify Layered Video to Play

After the publishing side enables layered video encoding, the playing side can call the setPlayStreamVideoType interface before or after playing the stream. By default, the playing side plays the appropriate video layer according to network conditions, for example, playing only the base layer on a weak network. Developers can also pass specific parameters to play a particular video layer. The currently supported video layers are as follows:

Enumeration Value           | Description
ZegoVideoStreamType.Default | Select the layer according to network status
ZegoVideoStreamType.Small   | Small resolution type
ZegoVideoStreamType.Big     | Large resolution type

Taking playing the enhancement layer as an example:

ZegoExpressEngine.instance().setPlayStreamVideoType(playStreamID, ZegoVideoStreamType.Big);
ZegoExpressEngine.instance().startPlayingStream(playStreamID);
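Since the layer can also be chosen by the app itself, the selection logic can be factored out. This is a minimal sketch with assumed names: the helper and the 800 kbps threshold are hypothetical, and the string values merely mirror the ZegoVideoStreamType enum member names for illustration.

```javascript
// Mirror of the ZegoVideoStreamType enum names, for illustration only.
const StreamType = { Default: 'Default', Small: 'Small', Big: 'Big' };

// Hypothetical helper: map an app-measured downlink bandwidth (kbps) to the
// stream type to request. The threshold is an assumption an app would tune.
function pickStreamType(downlinkKbps) {
  // No measurement yet: let the SDK select the layer by network status.
  if (downlinkKbps == null) return StreamType.Default;
  // Enough bandwidth: request the enhancement layer; otherwise the base layer.
  return downlinkKbps >= 800 ? StreamType.Big : StreamType.Small;
}

console.log(pickStreamType(1500)); // Big
console.log(pickStreamType(300));  // Small
```

The result would then be passed to setPlayStreamVideoType before or after startPlayingStream, as shown in the snippet above.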

2 Hardware Encoding/Decoding

Since a small number of devices do not support hardware encoding/decoding well, the SDK uses software encoding and software decoding by default. Developers who need hardware encoding/decoding can refer to this section to enable it themselves.

Enable Hardware Encoding

Warning

This function must be set before publishing stream to take effect. If set after publishing stream, it will only take effect after stopping publishing stream and republishing.

If developers need to enable hardware encoding, they can call the enableHardwareEncoder interface.

// Enable hardware encoding
ZegoExpressEngine.instance().enableHardwareEncoder(true);

Enable Hardware Decoding

Warning

This function must be set before playing stream to take effect. If set after playing stream, it will only take effect after stopping playing stream and replaying.

If developers need to enable hardware decoding, they can call the enableHardwareDecoder interface.

// Enable hardware decoding
ZegoExpressEngine.instance().enableHardwareDecoder(true);

3 Set Video Encoding Method

Before publishing the stream (startPublishingStream), call the setVideoConfig interface and set the "codecID" parameter of the ZegoVideoConfig class to choose the video encoding method. The currently supported video encoding methods are as follows:

Enumeration Value        | Encoding Method              | Usage Scenarios
ZegoVideoCodecID.Default | Default encoding (H.264)     | H.264 is a widely used, high-precision video recording, compression, and publishing format with good compatibility.
ZegoVideoCodecID.Svc     | Layered encoding (H.264 SVC) | Scenarios that require layered encoding.
ZegoVideoCodecID.Vp8     | VP8                          | Commonly used for Web video; cannot be used in CDN recording scenarios, otherwise the recording files will be abnormal.

Taking setting the encoding method to VP8 as an example:

let videoConfig = new ZegoVideoConfig();
videoConfig.codecID = ZegoVideoCodecID.Vp8;
ZegoExpressEngine.instance().setVideoConfig(videoConfig);

let streamID = "MultiLayer-1";
ZegoExpressEngine.instance().startPublishingStream(streamID);

FAQ

  1. What is Simulcast?

Simulcast is layered video encoding. Before playing the stream, the receiving end can call setPlayStreamVideoType and set it to ZegoVideoStreamType.Small (small stream) or ZegoVideoStreamType.Big (big stream) according to its network conditions, or set it to ZegoVideoStreamType.Default to let ZEGO select the layer automatically.

  2. Do parameters such as bitrate and resolution differ between the base layer and the enhancement layer of layered video encoding?

The width and height of the base layer are each 50% of the enhancement layer's, and the bitrate of the base layer is about 25% of the enhancement layer's. Other parameters are the same.

Warning

Layered video encoding will only play one layer. When network conditions are good, only the enhancement layer is played. When network conditions are poor, only the base layer is played.

For example, if the user sets the encoding resolution to "800 × 600", the enhancement layer resolution is "800 × 600", and the base layer resolution is "400 × 300".
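The ratios above can be computed mechanically. The function below is an illustration only; its name and shape are hypothetical, not part of the ZEGO SDK.

```javascript
// Illustration: derive approximate base-layer parameters from the encoding
// (enhancement-layer) configuration, using the 50% resolution and ~25%
// bitrate ratios described above. Not a ZEGO SDK API.
function baseLayerConfig(enhancement) {
  return {
    width: Math.round(enhancement.width * 0.5),      // 50% of enhancement width
    height: Math.round(enhancement.height * 0.5),    // 50% of enhancement height
    bitrate: Math.round(enhancement.bitrate * 0.25), // ~25% of enhancement bitrate
  };
}

// An 800 × 600 enhancement layer at 1200 kbps yields a 400 × 300 base layer
// at roughly 300 kbps.
const base = baseLayerConfig({ width: 800, height: 600, bitrate: 1200 });
console.log(base); // { width: 400, height: 300, bitrate: 300 }
```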

  3. When relaying or directly publishing to CDN, where the audience plays the stream from the CDN, is layered video encoding effective? What are the bitrate and resolution of the stream played from the CDN?
  • Layered video encoding uses ZEGO's private protocol. The playing stream end can only play different layered video streams from ZEGO servers.

  • In the relay-to-CDN scenario, the stream the publishing side sends to the ZEGO server can use layered video encoding, and that layered stream can be played from the ZEGO server. However, the stream the ZEGO server relays to the CDN server cannot use layered video encoding; it is a single high-quality stream whose bitrate and resolution match the enhancement layer of the layered encoding.

  • In the direct-publish-to-CDN scenario, the stream does not pass through the ZEGO server, so layered video encoding does not take effect. The resolution and bitrate of the stream played from the CDN match those set by the publishing user.

  4. What are the advantages and disadvantages of layered video encoding?

Advantages:

  • Layered video encoding can generate different bitstreams, or extract different bitstreams as needed. Encoding once with layered video encoding is more efficient than encoding multiple times with ordinary encoding methods.
  • Layered video encoding is more flexible in application.
  • Layered video encoding has stronger network adaptability.

Disadvantages:

  • Slightly lower compression efficiency: under the same conditions, the compression efficiency of layered video encoding is about 20% lower than that of ordinary encoding methods. That is, to achieve the same video quality, the bitrate of layered video encoding needs to be 20% higher. The more layers, the more the efficiency decreases. (Currently the SDK supports only 1 base layer and 1 enhancement layer.)
  • Lower encoding efficiency: under the same conditions, the computational complexity of layered video encoding is higher, so its encoding efficiency is about 10% lower than that of ordinary encoding methods.
  • No hardware encoding support: layered video encoding does not support hardware encoding and places a greater burden on the CPU, but it does support hardware decoding.
