Package io.agora.rtc2
Interface IAudioFrameObserver
public interface IAudioFrameObserver
The IAudioFrameObserver interface.
-
Method Summary
AudioParams getEarMonitoringAudioParams()
    Sets the audio ear monitoring format for the onEarMonitoringAudioFrame callback.
AudioParams getMixedAudioParams()
    Sets the audio mixing format for the onMixedAudioFrame callback.
int getObservedAudioFramePosition()
    Sets the audio observation positions.
AudioParams getPlaybackAudioParams()
    Sets the audio playback format for the onPlaybackAudioFrame callback.
AudioParams getRecordAudioParams()
    Sets the audio recording format for the onRecordAudioFrame callback.
boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the ear monitoring audio frame is received.
boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the mixed playback audio frame is received.
boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the playback audio frame is received.
boolean onPlaybackAudioFrameBeforeMixing(String channelId, int uid, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type, int rtpTimestamp, long presentationMs)
    Occurs when the playback audio frame before mixing is received.
boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the recorded audio frame is received.
-
Method Details
-
onRecordAudioFrame
boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the recorded audio frame is received.

Parameters:
channelId - The channel name.
type - The audio frame type.
samplesPerChannel - The number of samples per channel in this frame.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved. 1: Mono. 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

Returns:
true: The recorded audio frame is valid and is encoded and sent.
false: The recorded audio frame is invalid and is not encoded or sent.
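Inside onRecordAudioFrame you can inspect the raw PCM payload, for example to drive a microphone level meter. The helper below is a self-contained sketch, deliberately independent of the SDK: it computes the peak amplitude of a frame, assuming bytesPerSample is 2 (16-bit PCM) and little-endian samples, which you should verify for your platform before relying on it.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmLevel {
    // Returns the peak absolute amplitude (0..32768) of a 16-bit PCM frame.
    // Assumes bytesPerSample == 2 and little-endian byte order; interleaved
    // stereo is handled naturally because every sample is scanned.
    public static int peak(ByteBuffer buffer, int samplesPerChannel, int channels) {
        // duplicate() leaves the SDK-owned buffer's position untouched.
        ByteBuffer pcm = buffer.duplicate().order(ByteOrder.LITTLE_ENDIAN);
        int peak = 0;
        int totalSamples = samplesPerChannel * channels;
        for (int i = 0; i < totalSamples; i++) {
            int v = Math.abs((int) pcm.getShort(i * 2));
            if (v > peak) peak = v;
        }
        return peak;
    }

    public static void main(String[] args) {
        // Two samples per channel, stereo, interleaved.
        ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        buf.putShort((short) 100).putShort((short) -2000)
           .putShort((short) 500).putShort((short) 0);
        buf.flip();
        System.out.println(peak(buf, 2, 2)); // 2000
    }
}
```

Returning true from the callback keeps the frame in the pipeline; return false only when you want the frame dropped.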
-
onPlaybackAudioFrame
boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the playback audio frame is received.

Parameters:
channelId - The channel name.
type - The audio frame type.
samplesPerChannel - The number of samples per channel in this frame.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved. 1: Mono. 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

Returns:
true: The playback audio frame is valid and is encoded and sent.
false: The playback audio frame is invalid and is not encoded or sent.
-
onMixedAudioFrame
boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the mixed playback audio frame is received.

Parameters:
channelId - The channel name.
type - The audio frame type.
samplesPerChannel - The number of samples per channel in this frame.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved. 1: Mono. 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

Returns:
true: The mixed audio data is valid and is encoded and sent.
false: The mixed audio data is invalid and is not encoded or sent.
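A common use of the mixed-frame callback is to dump the combined PCM of the local and all remote users for debugging or call recording. The sketch below buffers frames in memory so it is self-contained; in a real app you would write to a FileOutputStream (the file name "mixed.pcm" and the in-memory buffering here are illustrative choices, not SDK requirements).

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class PcmDump {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    // Appends one frame's payload. To play the result back later you need
    // the matching samplesPerSec and channels values from the callback.
    public void append(ByteBuffer buffer) {
        ByteBuffer copy = buffer.duplicate(); // don't disturb the SDK's buffer
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);
        out.write(bytes, 0, bytes.length);
    }

    public int size() {
        return out.size();
    }

    public static void main(String[] args) {
        PcmDump dump = new PcmDump();
        dump.append(ByteBuffer.wrap(new byte[]{1, 2, 3, 4}));
        dump.append(ByteBuffer.wrap(new byte[]{5, 6}));
        System.out.println(dump.size()); // 6
    }
}
```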
-
onEarMonitoringAudioFrame
boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the ear monitoring audio frame is received.

Parameters:
type - The audio frame type.
samplesPerChannel - The number of samples per channel in this frame.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved. 1: Mono. 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

Returns:
true: The ear monitoring audio frame is valid and is encoded and sent.
false: The ear monitoring audio frame is invalid and is not encoded or sent.
-
onPlaybackAudioFrameBeforeMixing
boolean onPlaybackAudioFrameBeforeMixing(String channelId, int uid, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type, int rtpTimestamp, long presentationMs)

Occurs when the playback audio frame before mixing is received.

Parameters:
channelId - The channel name.
uid - The user ID.
type - The audio frame type.
samplesPerChannel - The number of samples per channel in this frame.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved. 1: Mono. 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.
rtpTimestamp - The RTP timestamp of the first sample in the frame.
presentationMs - The presentation timestamp (PTS) of this audio frame.

Returns:
true: The playback audio frame before mixing is valid and is encoded and sent.
false: The playback audio frame before mixing is invalid and is not encoded or sent.
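Because this callback delivers each remote user's audio separately (keyed by uid), it is the place to apply per-user processing, such as silencing one participant locally. The helper below zeroes a frame in place; it is self-contained, and whether the modification actually affects playback depends on the observer being registered in a read-write operation mode, which is an assumption you should verify against your SDK configuration.

```java
import java.nio.ByteBuffer;

public class RemoteMute {
    // Zeroes the frame payload in place. Call this from
    // onPlaybackAudioFrameBeforeMixing when uid matches a user you want
    // to silence locally. duplicate() shares the backing storage, so the
    // writes land in the original buffer without moving its position.
    public static void silence(ByteBuffer buffer) {
        ByteBuffer b = buffer.duplicate();
        while (b.hasRemaining()) {
            b.put((byte) 0);
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[]{9, 9, 9, 9});
        silence(buf);
        for (int i = 0; i < 4; i++) {
            System.out.print(buf.get(i) + " "); // 0 0 0 0
        }
    }
}
```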
-
getObservedAudioFramePosition
int getObservedAudioFramePosition()

Sets the audio observation positions. After you successfully register the audio observer, the SDK uses the getObservedAudioFramePosition callback to determine, at each audio-frame processing node, whether to trigger the following callbacks:
- onRecordAudioFrame
- onPlaybackAudioFrame
- onPlaybackAudioFrameBeforeMixing
- onMixedAudioFrame
Set the positions that you want to observe by modifying the return value of getObservedAudioFramePosition according to your scenario.

Returns:
The bit mask that controls the audio observation positions:
- POSITION_PLAYBACK (0x01): The position for observing the playback audio of all remote users after mixing, which enables the SDK to trigger the onPlaybackAudioFrame callback.
- POSITION_RECORD (0x01 << 1): The position for observing the recorded audio of the local user, which enables the SDK to trigger the onRecordAudioFrame callback.
- POSITION_MIXED (0x01 << 2): The position for observing the mixed audio of the local user and all remote users, which enables the SDK to trigger the onMixedAudioFrame callback.
- POSITION_BEFORE_MIXING (0x01 << 3): The position for observing the audio of a single remote user before mixing, which enables the SDK to trigger the onPlaybackAudioFrameBeforeMixing callback.
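Since the return value is a bit mask, you combine positions with bitwise OR. The sketch below defines the constants locally with the values documented above so it is self-contained; in the SDK they are provided for you, so prefer the SDK-supplied constants in real code.

```java
public class ObservePositions {
    // Bit masks with the documented values for getObservedAudioFramePosition.
    static final int POSITION_PLAYBACK      = 0x01;
    static final int POSITION_RECORD        = 0x01 << 1;
    static final int POSITION_MIXED         = 0x01 << 2;
    static final int POSITION_BEFORE_MIXING = 0x01 << 3;

    // Example: observe only the recorded and mixed audio, so the SDK
    // triggers onRecordAudioFrame and onMixedAudioFrame but skips the
    // playback and before-mixing callbacks.
    public static int observedPositions() {
        return POSITION_RECORD | POSITION_MIXED;
    }

    public static void main(String[] args) {
        int mask = observedPositions();
        System.out.println((mask & POSITION_RECORD) != 0);   // true
        System.out.println((mask & POSITION_PLAYBACK) != 0); // false
    }
}
```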
-
getRecordAudioParams
AudioParams getRecordAudioParams()

Sets the audio recording format for the onRecordAudioFrame callback. This callback is registered together with the observer when you call the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame; set the desired audio recording format in the return value.

Returns:
The audio recording format. See AudioParams.
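The four getXxxAudioParams callbacks all follow the same pattern: construct an AudioParams describing the format you want and return it. To keep this sketch self-contained, AudioParams below is a local stand-in holding the fields these docs describe (sample rate, channel count, operation mode, samples per call); the real class lives in io.agora.rtc2, and both its constructor signature and the numeric mode constants are assumptions to check against your SDK version.

```java
public class RecordFormat {
    // Local stand-in for the SDK's AudioParams so this sketch compiles on
    // its own; the real class and constructor may differ.
    static class AudioParams {
        final int sampleRate, channels, mode, samplesPerCall;
        AudioParams(int sampleRate, int channels, int mode, int samplesPerCall) {
            this.sampleRate = sampleRate;
            this.channels = channels;
            this.mode = mode;
            this.samplesPerCall = samplesPerCall;
        }
    }

    // Ask for 16 kHz mono in 10 ms frames: 16000 / 100 = 160 samples per
    // call. The mode value 0 (read-only) is an assumption; consult the
    // SDK's constants for the correct operation-mode values.
    public static AudioParams getRecordAudioParams() {
        return new AudioParams(16000, 1, 0, 160);
    }

    public static void main(String[] args) {
        AudioParams p = getRecordAudioParams();
        System.out.println(p.sampleRate + " Hz, " + p.channels + " ch");
    }
}
```

The playback, mixed, and ear monitoring variants are implemented the same way, each controlling the format of its corresponding on*AudioFrame callback.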
-
getPlaybackAudioParams
AudioParams getPlaybackAudioParams()

Sets the audio playback format for the onPlaybackAudioFrame callback. This callback is registered together with the observer when you call the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame; set the desired audio playback format in the return value.

Returns:
The audio playback format. See AudioParams.
-
getMixedAudioParams
AudioParams getMixedAudioParams()

Sets the audio mixing format for the onMixedAudioFrame callback. This callback is registered together with the observer when you call the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame; set the desired audio mixing format in the return value.

Returns:
The audio mixing format. See AudioParams.
-
getEarMonitoringAudioParams
AudioParams getEarMonitoringAudioParams()

Sets the audio ear monitoring format for the onEarMonitoringAudioFrame callback. This callback is registered together with the observer when you call the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame; set the desired audio ear monitoring format in the return value.

Returns:
The audio ear monitoring format. See AudioParams.
-