Agora Java API Reference for Android

Public Member Functions
abstract boolean onRecordAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Gets the captured audio frame.
abstract boolean onPlaybackAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Gets the raw audio frame for playback.
abstract boolean onMixedAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Retrieves the mixed captured and playback audio frame.
abstract boolean onEarMonitoringAudioFrame (int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Gets the in-ear monitoring audio frame.
abstract boolean onPlaybackAudioFrameBeforeMixing (String channelId, int uid, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type, int rtpTimestamp, long presentationMs)
    Retrieves the audio frame of each subscribed remote user before mixing.
abstract int getObservedAudioFramePosition ()
    Sets the frame position for the audio observer.
abstract AudioParams getRecordAudioParams ()
    Sets the audio format for the onRecordAudioFrame callback.
abstract AudioParams getPlaybackAudioParams ()
    Sets the audio format for the onPlaybackAudioFrame callback.
abstract AudioParams getMixedAudioParams ()
    Sets the audio format for the onMixedAudioFrame callback.
abstract AudioParams getEarMonitoringAudioParams ()
    Sets the audio format for the onEarMonitoringAudioFrame callback.
The IAudioFrameObserver interface.
onRecordAudioFrame()
Gets the captured audio frame.
To ensure that the format of the captured audio frame is as expected, you can choose one of the following two methods to set the audio data format:

Method 1: Call setRecordingAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in the methods and triggers the onRecordAudioFrame callback at that interval.
Method 2: Call registerAudioFrameObserver to register the audio frame observer object and set the observation position in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval from the return value of the getRecordAudioParams callback and triggers the onRecordAudioFrame callback at that interval.

| channelId | The channel ID. |
| type | The audio frame type. |
| samplesPerChannel | The number of samples per channel in the audio frame. |
| bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
| channels | The number of channels. |
| samplesPerSec | Recording sample rate (Hz). |
| buffer | The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample. |
| renderTimeMs | The timestamp (ms) of the audio frame. You can use it to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources. |
| avsync_type | Reserved for future use. |
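As a sketch of how the buffer parameters fit together, the snippet below computes the expected buffer size from samplesPerChannel, channels, and bytesPerSample, then reads the interleaved 16-bit PCM samples out of a ByteBuffer. The frame dimensions used (480 samples per channel, stereo, 16-bit, i.e. 10 ms at 48 kHz) are illustrative assumptions, not values mandated by the SDK.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class AudioFrameMath {
    // Expected size of a raw PCM frame:
    // buffer size = samplesPerChannel x channels x bytesPerSample.
    static int bufferSize(int samplesPerChannel, int channels, int bytesPerSample) {
        return samplesPerChannel * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        // 10 ms of 16-bit stereo PCM at 48 kHz: 480 samples per channel.
        int samplesPerChannel = 480, channels = 2, bytesPerSample = 2;
        int size = bufferSize(samplesPerChannel, channels, bytesPerSample);
        System.out.println(size); // 1920

        // Reading the interleaved 16-bit samples out of the frame's ByteBuffer.
        ByteBuffer buffer = ByteBuffer.allocate(size).order(ByteOrder.LITTLE_ENDIAN);
        short[] pcm = new short[samplesPerChannel * channels];
        buffer.asShortBuffer().get(pcm);
        System.out.println(pcm.length); // 960
    }
}
```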
onPlaybackAudioFrame()
Gets the raw audio frame for playback.
To ensure that the format of the audio frame for playback is as expected, you can choose one of the following two methods to set the audio data format:

Method 1: Call setPlaybackAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in the methods and triggers the onPlaybackAudioFrame callback at that interval.
Method 2: Call registerAudioFrameObserver to register the audio frame observer object and set the observation position in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval from the return value of the getPlaybackAudioParams callback and triggers the onPlaybackAudioFrame callback at that interval.

| channelId | The channel ID. |
| type | The audio frame type. |
| samplesPerChannel | The number of samples per channel in the audio frame. |
| bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
| channels | The number of channels. |
| samplesPerSec | The sample rate (Hz). |
| buffer | The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample. |
| renderTimeMs | The timestamp (ms) of the audio frame. You can use it to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources. |
| avsync_type | Reserved for future use. |
onMixedAudioFrame()
Retrieves the mixed captured and playback audio frame.
To ensure that the data format of the mixed captured and playback audio frame meets your expectations, Agora recommends that you choose one of the following two ways to set the data format:

Method 1: Call setMixedAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in the methods and triggers the onMixedAudioFrame callback at that interval.
Method 2: Call registerAudioFrameObserver to register the audio frame observer object and set the observation position in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval from the return value of the getMixedAudioParams callback and triggers the onMixedAudioFrame callback at that interval.

| type | The audio frame type. |
| samplesPerChannel | The number of samples per channel in the audio frame. |
| bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
| channels | The number of channels. |
| samplesPerSec | The sample rate (Hz). |
| buffer | The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample. |
| renderTimeMs | The timestamp (ms) of the audio frame. You can use it to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources. |
| avsync_type | Reserved for future use. |
onEarMonitoringAudioFrame()
Gets the in-ear monitoring audio frame.
To ensure that the obtained in-ear monitoring audio data meets your expectations, Agora recommends that you choose one of the following two methods to set the audio data format:

Method 1: Call setEarMonitoringAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval from the parameters set in the methods and triggers the onEarMonitoringAudioFrame callback at that interval.
Method 2: Call registerAudioFrameObserver to register the audio frame observer object and set the observation position in the return value of the getObservedAudioFramePosition callback. The SDK then calculates the sampling interval from the return value of the getEarMonitoringAudioParams callback and triggers the onEarMonitoringAudioFrame callback at that interval.

| type | The audio frame type. |
| samplesPerChannel | The number of samples per channel in the audio frame. |
| bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
| channels | The number of channels. |
| samplesPerSec | The sample rate (Hz). |
| buffer | The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample. |
| renderTimeMs | The timestamp (ms) of the audio frame. You can use it to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources. |
| avsync_type | Reserved for future use. |
onPlaybackAudioFrameBeforeMixing()
Retrieves the audio frame of each subscribed remote user before mixing.

| channelId | The channel ID. |
| uid | The user ID of the subscribed remote user. |
| type | The audio frame type. |
| samplesPerChannel | The number of samples per channel in the audio frame. |
| bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
| channels | The number of channels. |
| samplesPerSec | The sample rate (Hz). |
| buffer | The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample. |
| renderTimeMs | The timestamp (ms) of the audio frame. You can use it to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources. |
| avsync_type | Reserved for future use. |
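Because this callback fires once per subscribed remote user, an implementation typically keys its processing on uid. The fragment below is a minimal, SDK-independent sketch of that pattern: the storeFrame helper is hypothetical, and it copies each user's frame out of the ByteBuffer rather than holding a reference, since the SDK owns and reuses the buffer after the callback returns.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

public class PerUserFrames {
    // Latest pre-mixing frame per remote user, keyed by uid.
    static final Map<Integer, byte[]> latestFrame = new HashMap<>();

    // An onPlaybackAudioFrameBeforeMixing override could delegate here.
    static void storeFrame(int uid, ByteBuffer buffer) {
        byte[] copy = new byte[buffer.remaining()];
        buffer.duplicate().get(copy); // copy out; leave the SDK's buffer untouched
        latestFrame.put(uid, copy);
    }

    public static void main(String[] args) {
        storeFrame(1001, ByteBuffer.wrap(new byte[]{1, 2, 3, 4}));
        storeFrame(1002, ByteBuffer.wrap(new byte[]{5, 6}));
        System.out.println(latestFrame.get(1001).length); // 4
        System.out.println(latestFrame.size()); // 2
    }
}
```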
getObservedAudioFramePosition()
Sets the frame position for the audio observer.
After you successfully register the audio frame observer, the SDK uses this callback to determine whether to trigger the following callbacks at each audio frame processing position:

onRecordAudioFrame
onPlaybackAudioFrame
onPlaybackAudioFrameBeforeMixing
onMixedAudioFrame
onEarMonitoringAudioFrame

You can observe one or more positions by modifying the return value of getObservedAudioFramePosition according to your scenario:

To observe multiple positions, combine them with the | (bitwise OR) operator.
To conserve system resources, reduce the number of frame positions you observe.

Each observable position corresponds to one callback: the playback position to the onPlaybackAudioFrame callback, the recording position to the onRecordAudioFrame callback, the mixed position to the onMixedAudioFrame callback, the before-mixing position to the onPlaybackAudioFrameBeforeMixing callback, and the ear-monitoring position to the onEarMonitoringAudioFrame callback.
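The bitmask idea can be sketched as follows. The constant names and bit values below are local assumptions defined inline so the snippet is self-contained; in a real project you would use the SDK's own audio frame position constants instead.

```java
public class ObservedPositions {
    // NOTE: assumed bit values, defined locally for illustration only.
    static final int POSITION_PLAYBACK = 0x0001;       // onPlaybackAudioFrame
    static final int POSITION_RECORD = 0x0002;         // onRecordAudioFrame
    static final int POSITION_MIXED = 0x0004;          // onMixedAudioFrame
    static final int POSITION_BEFORE_MIXING = 0x0008;  // onPlaybackAudioFrameBeforeMixing
    static final int POSITION_EAR_MONITORING = 0x0010; // onEarMonitoringAudioFrame

    // A getObservedAudioFramePosition override would return a mask like this
    // to observe only the recording and playback positions.
    static int observedPositions() {
        return POSITION_RECORD | POSITION_PLAYBACK;
    }

    public static void main(String[] args) {
        int mask = observedPositions();
        System.out.println((mask & POSITION_RECORD) != 0); // true: observed
        System.out.println((mask & POSITION_MIXED) == 0);  // true: not observed
    }
}
```

Observing only the positions you actually process keeps the SDK from delivering (and you from copying) frames you would discard anyway.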
getRecordAudioParams()
Sets the audio format for the onRecordAudioFrame callback.
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK triggers the onRecordAudioFrame callback at the sampling interval calculated from the AudioParams you set in the return value of this callback. The calculation formula is Sample interval (sec) = samplePerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 s.

Returns: The audio data format, set in AudioParams.
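The interval constraint can be checked numerically. The sketch below evaluates the formula Sample interval = samplePerCall / (sampleRate × channel) for two hypothetical AudioParams configurations; the specific numbers are illustrative, not SDK defaults.

```java
public class SampleInterval {
    // Sample interval (sec) = samplePerCall / (sampleRate * channel);
    // the SDK requires this interval to be at least 0.01 s.
    static double sampleInterval(int samplePerCall, int sampleRate, int channels) {
        return (double) samplePerCall / ((double) sampleRate * channels);
    }

    public static void main(String[] args) {
        // 1024 samples per call of mono 44.1 kHz audio -> ~23.2 ms: valid.
        System.out.println(sampleInterval(1024, 44100, 1) >= 0.01); // true

        // 256 samples per call of stereo 48 kHz audio -> ~2.7 ms: too short.
        System.out.println(sampleInterval(256, 48000, 2) >= 0.01); // false
    }
}
```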
getPlaybackAudioParams()
Sets the audio format for the onPlaybackAudioFrame callback.
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK triggers the onPlaybackAudioFrame callback at the sampling interval calculated from the AudioParams you set in the return value of this callback. The calculation formula is Sample interval (sec) = samplePerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 s.

Returns: The audio data format, set in AudioParams.
getMixedAudioParams()
Sets the audio format for the onMixedAudioFrame callback.
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK triggers the onMixedAudioFrame callback at the sampling interval calculated from the AudioParams you set in the return value of this callback. The calculation formula is Sample interval (sec) = samplePerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 s.

Returns: The audio data format, set in AudioParams.
getEarMonitoringAudioParams()
Sets the audio format for the onEarMonitoringAudioFrame callback.
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK triggers the onEarMonitoringAudioFrame callback at the sampling interval calculated from the AudioParams you set in the return value of this callback. The calculation formula is Sample interval (sec) = samplePerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 s.

Returns: The audio data format, set in AudioParams.