Custom Audio Processing
Feature Introduction
Custom audio processing is generally used to remove interference from speech. Because the SDK already applies echo cancellation, noise suppression, and other processing to the captured raw audio data, developers usually do not need to process it again.
If you want to implement special features, such as voice changing or voice beautification, by processing audio after it is captured or before remote audio is rendered, refer to this document.
Custom audio processing operates on audio data that has already passed through 3A processing (AEC acoustic echo cancellation, AGC automatic gain control, ANS noise suppression):
- If you need to process the raw data instead, first call the enableAEC, enableAGC, and enableANS interfaces to disable 3A processing. If voice changing, reverb, stereo, or other sound-effect processing is enabled (all are disabled by default), it must also be disabled before the raw audio data can be obtained.
- If you need to obtain and process both the raw data and the audio data after 3A processing, refer to Custom Audio Capture and Rendering.
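The first point above can be sketched as follows, using the enableAEC, enableAGC, and enableANS interfaces named there; this is a minimal illustration, to be called before publishing starts:

```objc
// Disable the SDK's built-in 3A processing so the callbacks deliver raw audio.
// Pass YES later to restore the default behavior.
[[ZegoExpressEngine sharedEngine] enableAEC:NO]; // acoustic echo cancellation off
[[ZegoExpressEngine sharedEngine] enableAGC:NO]; // automatic gain control off
[[ZegoExpressEngine sharedEngine] enableANS:NO]; // noise suppression off
```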
Prerequisites
Before custom audio processing, please ensure:
- You have created a project in the ZEGOCLOUD Console and applied for a valid AppID and AppSign. For details, please refer to Console - Project Information.
- You have integrated the ZEGO Express SDK in the project and implemented basic audio and video publishing and playing functions. For details, please refer to Quick Start - Integration and Quick Start - Implementation Flow.
Usage Steps
1 Create SDK engine
Call the createEngineWithProfile interface to create an SDK engine instance. For details, please refer to "Create Engine" in Quick Start - Implementation Flow.
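A minimal engine-creation sketch is shown below; the appID and appSign values are placeholders that should be replaced with the values from your ZEGOCLOUD Console project:

```objc
// Create the engine instance with the project credentials from the Console.
ZegoEngineProfile *profile = [[ZegoEngineProfile alloc] init];
profile.appID = <#your appID#>;        // placeholder: AppID from the Console
profile.appSign = @"<#your appSign#>"; // placeholder: AppSign from the Console
profile.scenario = ZegoScenarioDefault;
[ZegoExpressEngine createEngineWithProfile:profile eventHandler:self];
```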
2 Set audio custom processing handler and implement callback methods
Call the setCustomAudioProcessHandler interface to set the custom audio processing handler, then implement its callback methods: onProcessCapturedAudioData, which delivers locally captured PCM audio frames, and onProcessRemoteAudioData, which delivers remote playing-stream PCM audio frames. Processing the data directly inside these callbacks lets you modify both publishing and playing stream audio.
// Register self as the custom audio processing handler
// (self must conform to the ZegoCustomAudioProcessHandler protocol).
[[ZegoExpressEngine sharedEngine] setCustomAudioProcessHandler:self];

// Locally captured PCM audio frames; modify `data` in place before it is published.
- (void)onProcessCapturedAudioData:(unsigned char *)data dataLength:(unsigned int)dataLength param:(ZegoAudioFrameParam *)param timestamp:(double)timestamp {
}

// Remote playing-stream PCM audio frames; modify `data` in place before it is rendered.
- (void)onProcessRemoteAudioData:(unsigned char *)data dataLength:(unsigned int)dataLength param:(ZegoAudioFrameParam *)param streamID:(NSString *)streamID timestamp:(double)timestamp {
}
3 Custom audio processing
- Before starting to publish a stream or starting the local preview, call the enableCustomAudioCaptureProcessing interface to enable custom processing of locally captured audio. Once enabled, locally captured audio frames are delivered through the onProcessCapturedAudioData callback, where the audio data can be modified.
- Before starting to play a stream, call the enableCustomAudioRemoteProcessing interface to enable custom processing of remote playing-stream audio. Once enabled, remote playing-stream audio frames are delivered through the onProcessRemoteAudioData callback, where the audio data can be modified.
// Describe the audio frames expected in the processing callbacks.
ZegoCustomAudioProcessConfig *config = [[ZegoCustomAudioProcessConfig alloc] init];
config.channel = ZegoAudioChannelMono;
config.sampleRate = ZegoAudioSampleRate16K;
config.samples = 0; // number of samples per frame

// Enable custom processing for both captured and remote audio.
[[ZegoExpressEngine sharedEngine] enableCustomAudioCaptureProcessing:YES config:config];
[[ZegoExpressEngine sharedEngine] enableCustomAudioRemoteProcessing:YES config:config];
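As an illustration of modifying audio data inside a callback, the sketch below attenuates captured frames by halving each sample. The 6 dB gain reduction is a hypothetical effect, not part of the SDK, and it assumes the PCM data is 16-bit signed samples:

```objc
// Hypothetical example: halve each sample (about -6 dB) before the frame is published.
// Assumes 16-bit signed PCM; replace the loop body with your own DSP.
- (void)onProcessCapturedAudioData:(unsigned char *)data
                        dataLength:(unsigned int)dataLength
                             param:(ZegoAudioFrameParam *)param
                         timestamp:(double)timestamp {
    int16_t *samples = (int16_t *)data;
    unsigned int count = dataLength / sizeof(int16_t);
    for (unsigned int i = 0; i < count; i++) {
        samples[i] = samples[i] / 2; // simple in-place gain reduction
    }
}
```

Because the frames are modified in place, no extra buffer management is needed; the SDK publishes whatever is in `data` when the callback returns.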