Inherited by IAudioFrameObserver.
◆ AUDIO_FRAME_TYPE
Audio frame type.
| Enumerator | Description |
|---|---|
| FRAME_TYPE_PCM16 | 0: PCM 16. |
◆ anonymous enum
| Enumerator | Description |
|---|---|
| MAX_HANDLE_TIME_CNT | |
◆ AUDIO_FRAME_POSITION
| Enumerator | Description |
|---|---|
| AUDIO_FRAME_POSITION_NONE | |
| AUDIO_FRAME_POSITION_PLAYBACK | The position for observing the playback audio of all remote users after mixing. |
| AUDIO_FRAME_POSITION_RECORD | The position for observing the recorded audio of the local user. |
| AUDIO_FRAME_POSITION_MIXED | The position for observing the mixed audio of the local user and all remote users. |
| AUDIO_FRAME_POSITION_BEFORE_MIXING | The position for observing the audio of a single remote user before mixing. |
| AUDIO_FRAME_POSITION_EAR_MONITORING | The position for observing the ear monitoring audio of the local user. |
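The position values are bit flags, so several can be combined into a single observation mask. A minimal sketch, assuming a stand-in enum that mirrors the bitmask values listed under getObservedAudioFramePosition:

```cpp
// Stand-in for the SDK's AUDIO_FRAME_POSITION flags, mirroring the
// documented bitmask values.
enum AUDIO_FRAME_POSITION {
  AUDIO_FRAME_POSITION_NONE           = 0x0000,
  AUDIO_FRAME_POSITION_PLAYBACK       = 0x0001,
  AUDIO_FRAME_POSITION_RECORD         = 0x0002,
  AUDIO_FRAME_POSITION_MIXED          = 0x0004,
  AUDIO_FRAME_POSITION_BEFORE_MIXING  = 0x0008,
  AUDIO_FRAME_POSITION_EAR_MONITORING = 0x0010,
};

// Combine the positions you want to observe with bitwise OR.
int observedPositions() {
  return AUDIO_FRAME_POSITION_RECORD | AUDIO_FRAME_POSITION_PLAYBACK;
}

// Check whether a given position is included in a mask.
bool observes(int mask, AUDIO_FRAME_POSITION pos) {
  return (mask & pos) != 0;
}
```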
◆ ~IAudioFrameObserverBase()
◆ onRecordAudioFrame()
virtual bool onRecordAudioFrame(const char *channelId, AudioFrame &audioFrame) [pure virtual]
Gets the captured audio frame.
To ensure that the format of the captured audio frame is as expected, you can choose one of the following two methods to set the audio data format:
- Method 1: After calling setRecordingAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in the methods, and triggers the onRecordAudioFrame callback according to the sampling interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the audio data format in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval according to the return value of the getRecordAudioParams callback, and triggers the onRecordAudioFrame callback according to the sampling interval.
- Note
- Method 1 takes precedence over method 2. If you use method 1 to set the audio data format, the setting of method 2 is invalid.
- Parameters
| channelId | The channel ID. |
| audioFrame | The raw audio data. See AudioFrame. |
- Returns
- The return value has no practical meaning.
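As an illustration of processing a captured frame, here is a minimal sketch; the AudioFrame struct below is a simplified stand-in for the SDK's type, keeping only the fields the sketch needs:

```cpp
#include <cstdint>

// Simplified stand-in for the SDK's AudioFrame: interleaved 16-bit PCM
// buffer, per-channel sample count, and channel count only.
struct AudioFrame {
  int16_t *buffer;
  int samplesPerChannel;
  int channels;
};

// Peak absolute amplitude of a PCM16 frame, e.g. to drive a capture level
// meter from inside onRecordAudioFrame.
int peakAmplitude(const AudioFrame &frame) {
  int peak = 0;
  const int total = frame.samplesPerChannel * frame.channels;
  for (int i = 0; i < total; ++i) {
    int v = frame.buffer[i];
    if (v < 0) v = -v;          // work in int to avoid int16_t overflow
    if (v > peak) peak = v;
  }
  return peak;
}
```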
◆ onPlaybackAudioFrame()
virtual bool onPlaybackAudioFrame(const char *channelId, AudioFrame &audioFrame) [pure virtual]
Gets the raw audio frame for playback.
To ensure that the data format of the audio frame for playback is as expected, Agora recommends that you choose one of the following two methods to set the audio data format:
- Method 1: After calling setPlaybackAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in the methods, and triggers the onPlaybackAudioFrame callback according to the sampling interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the audio data format in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval according to the return value of the getPlaybackAudioParams callback, and triggers the onPlaybackAudioFrame callback according to the sampling interval.
- Note
- Method 1 takes precedence over method 2. If you use method 1 to set the audio data format, the setting of method 2 is invalid.
- Parameters
| channelId | The channel ID. |
| audioFrame | The raw audio data. See AudioFrame. |
- Returns
- The return value has no practical meaning.
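Playback frames can be edited in place before they are rendered. A minimal sketch of applying a gain with clipping, again using a simplified stand-in for AudioFrame:

```cpp
#include <cstdint>

// Simplified stand-in for the SDK's AudioFrame (PCM16 fields only).
struct AudioFrame {
  int16_t *buffer;
  int samplesPerChannel;
  int channels;
};

// Apply a linear gain in place, clamping to the 16-bit range; the kind of
// in-place edit typically done inside onPlaybackAudioFrame before returning.
void applyGain(AudioFrame &frame, float gain) {
  const int total = frame.samplesPerChannel * frame.channels;
  for (int i = 0; i < total; ++i) {
    float v = frame.buffer[i] * gain;
    if (v > 32767.0f) v = 32767.0f;    // clamp to int16_t range
    if (v < -32768.0f) v = -32768.0f;
    frame.buffer[i] = static_cast<int16_t>(v);
  }
}
```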
◆ onMixedAudioFrame()
virtual bool onMixedAudioFrame(const char *channelId, AudioFrame &audioFrame) [pure virtual]
Retrieves the mixed captured and playback audio frame.
To ensure that the data format of the mixed captured and playback audio frame meets your expectations, Agora recommends that you choose one of the following two methods to set the data format:
- Method 1: After calling setMixedAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in the methods, and triggers the onMixedAudioFrame callback according to the sampling interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the audio data format in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval according to the return value of the getMixedAudioParams callback, and triggers the onMixedAudioFrame callback according to the sampling interval.
- Note
- Method 1 takes precedence over method 2. If you use method 1 to set the audio data format, the setting of method 2 is invalid.
- Parameters
| channelId | The channel ID. |
| audioFrame | The raw audio data. See AudioFrame. |
- Returns
- The return value has no practical meaning.
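A typical use of the mixed frame is level metering. A minimal sketch computing a normalized RMS level, with a simplified stand-in for AudioFrame:

```cpp
#include <cmath>
#include <cstdint>

// Simplified stand-in for the SDK's AudioFrame (PCM16 fields only).
struct AudioFrame {
  int16_t *buffer;
  int samplesPerChannel;
  int channels;
};

// Root-mean-square level of a PCM16 frame, normalized to [0, 1]; useful for
// a combined local-plus-remote volume meter fed by onMixedAudioFrame.
double rmsLevel(const AudioFrame &frame) {
  const int total = frame.samplesPerChannel * frame.channels;
  if (total == 0) return 0.0;
  double sum = 0.0;
  for (int i = 0; i < total; ++i) {
    double v = frame.buffer[i] / 32768.0;  // normalize to [-1, 1)
    sum += v * v;
  }
  return std::sqrt(sum / total);
}
```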
◆ onEarMonitoringAudioFrame()
virtual bool onEarMonitoringAudioFrame(AudioFrame &audioFrame) [pure virtual]
Gets the in-ear monitoring audio frame.
To ensure that the obtained in-ear monitoring audio data meets your expectations, Agora recommends that you choose one of the following two methods to set the in-ear monitoring audio data format:
- Method 1: After calling setEarMonitoringAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in the methods, and triggers the onEarMonitoringAudioFrame callback according to the sampling interval.
- Method 2: After calling registerAudioFrameObserver to register the audio frame observer object, set the audio data format in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval according to the return value of the getEarMonitoringAudioParams callback, and triggers the onEarMonitoringAudioFrame callback according to the sampling interval.
- Note
- Method 1 takes precedence over method 2. If you use method 1 to set the audio data format, the setting of method 2 is invalid.
- Parameters
| audioFrame | The raw audio data. See AudioFrame. |
- Returns
- The return value has no practical meaning.
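The sampling interval mentioned in both methods follows from the audio parameters you set: the per-callback sample count divided by the sample rate. A hypothetical helper (not part of the SDK) showing that arithmetic:

```cpp
// Hypothetical helper reproducing the sampling-interval arithmetic described
// above: the callback interval in milliseconds derived from the per-callback
// sample count and the sample rate.
int callbackIntervalMs(int sampleRate, int samplesPerCallback) {
  return samplesPerCallback * 1000 / sampleRate;
}
```

For example, 480 samples per callback at 48 kHz yields a callback roughly every 10 ms.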
◆ onPlaybackAudioFrameBeforeMixing()
Occurs when the before-mixing playback audio frame is received.
- Parameters
-
| channelId | The channel ID. |
| userId | ID of the remote user. |
| audioFrame | The reference to the audio frame: AudioFrame. |
- Returns
- true: The before-mixing playback audio frame is valid and is encoded and sent.
- false: The before-mixing playback audio frame is invalid and is not encoded or sent.
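Because the return value controls whether the frame is treated as valid, it can implement a per-user mute without touching the audio data. A hypothetical sketch, assuming the documented semantics that returning false discards the frame:

```cpp
#include <set>

// Hypothetical mute list for onPlaybackAudioFrameBeforeMixing: returning
// false marks the frame invalid so it is not encoded or sent, letting the
// return value alone silence a remote user.
struct RemoteMuteList {
  std::set<unsigned int> muted;

  // Value to return from the callback for a frame from userId.
  bool shouldPlay(unsigned int userId) const {
    return muted.count(userId) == 0;
  }
};
```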
◆ getObservedAudioFramePosition()
virtual int getObservedAudioFramePosition() [pure virtual]
Sets the frame position for the audio observer.
After you successfully register the audio frame observer, the SDK uses this callback at each audio frame processing node to determine whether to trigger the following callbacks:
- onRecordAudioFrame
- onPlaybackAudioFrame
- onPlaybackAudioFrameBeforeMixing
- onMixedAudioFrame
- onEarMonitoringAudioFrame

You can set one or more positions to observe by modifying the return value of getObservedAudioFramePosition based on your scenario. To observe multiple positions, combine them with the bitwise OR operator (|). To conserve system resources, reduce the observed positions to those you actually need.
- Returns
- A bitmask that sets the observation positions, with the following values:
- AUDIO_FRAME_POSITION_PLAYBACK (0x0001): This position observes the playback audio of all remote users after mixing, corresponding to the onPlaybackAudioFrame callback.
- AUDIO_FRAME_POSITION_RECORD (0x0002): This position observes the recorded audio of the local user, corresponding to the onRecordAudioFrame callback.
- AUDIO_FRAME_POSITION_MIXED (0x0004): This position observes the mixed audio of the local user and all remote users, corresponding to the onMixedAudioFrame callback.
- AUDIO_FRAME_POSITION_BEFORE_MIXING (0x0008): This position observes the audio of a single remote user before mixing, corresponding to the onPlaybackAudioFrameBeforeMixing callback.
- AUDIO_FRAME_POSITION_EAR_MONITORING (0x0010): This position observes the in-ear monitoring audio of the local user, corresponding to the onEarMonitoringAudioFrame callback.
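Putting the pieces together, a minimal observer skeleton might watch only the record and playback positions. The types below are simplified stand-ins for the SDK's interfaces; the position constants mirror the documented bitmask values:

```cpp
#include <cstdint>

// Position flags mirroring the documented bitmask values.
enum {
  AUDIO_FRAME_POSITION_PLAYBACK = 0x0001,
  AUDIO_FRAME_POSITION_RECORD   = 0x0002,
};

// Simplified stand-in for the SDK's AudioFrame.
struct AudioFrame {
  int16_t *buffer;
  int samplesPerChannel;
  int channels;
};

// Sketch of an observer that handles only two positions; in the real SDK
// this would derive from IAudioFrameObserverBase/IAudioFrameObserver.
class MyAudioObserver {
 public:
  bool onRecordAudioFrame(const char *channelId, AudioFrame &audioFrame) {
    (void)channelId; (void)audioFrame;  // process captured audio here
    return true;  // return value has no practical meaning for this callback
  }

  bool onPlaybackAudioFrame(const char *channelId, AudioFrame &audioFrame) {
    (void)channelId; (void)audioFrame;  // process playback audio here
    return true;
  }

  // Observe only the positions you need, to conserve system resources.
  int getObservedAudioFramePosition() {
    return AUDIO_FRAME_POSITION_RECORD | AUDIO_FRAME_POSITION_PLAYBACK;
  }
};
```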