Package io.agora.rtc2

Interface IAudioFrameObserver


public interface IAudioFrameObserver
The IAudioFrameObserver interface.
  • Method Summary

    Modifier and Type
    Method
    Description
    AudioParams
    getEarMonitoringAudioParams()
    Sets the audio ear monitoring format for the onEarMonitoringAudioFrame callback.
    AudioParams
    getMixedAudioParams()
    Sets the audio mixing format for the onMixedAudioFrame callback.
    int
    getObservedAudioFramePosition()
    Sets the audio observation positions.
    AudioParams
    getPlaybackAudioParams()
    Sets the audio playback format for the onPlaybackAudioFrame callback.
    AudioParams
    getRecordAudioParams()
    Sets the audio recording format for the onRecordAudioFrame callback.
    boolean
    onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the ear monitoring audio frame is received.
    boolean
    onMixedAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the mixed playback audio frame is received.
    boolean
    onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the playback audio frame is received.
    boolean
    onPlaybackAudioFrameBeforeMixing(String channelId, int uid, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type, int rtpTimestamp, long presentationMs)
    Occurs when the playback audio frame before mixing is received.
    boolean
    onRecordAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
    Occurs when the recorded audio frame is received.
  • Method Details

    • onRecordAudioFrame

      boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Occurs when the recorded audio frame is received.
      Parameters:
      channelId - The channel name
      type - The audio frame type.
      samplesPerChannel - The samples per channel.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of audio channels. If the channel uses stereo, the data is interleaved. - 1: Mono. - 2: Stereo.
      samplesPerSec - The number of samples per channel per second in the audio frame.
      buffer - The audio frame payload.
      renderTimeMs - The render timestamp in ms.
      avsync_type - The audio/video sync type.
      Returns:
      - true: The recorded audio frame is valid and is encoded and sent. - false: The recorded audio frame is invalid and is not encoded or sent.
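      The frame parameters fix the payload size: the buffer holds samplesPerChannel × channels × bytesPerSample bytes. The following sketch (plain Java, independent of the SDK; the helper name is hypothetical) shows that relationship for a typical 10 ms frame of 48 kHz stereo 16-bit PCM:

      ```java
      import java.nio.ByteBuffer;

      public class AudioFrameSize {
          // Expected payload size in bytes for one raw audio frame:
          // samples per channel * number of channels * bytes per sample.
          static int expectedBytes(int samplesPerChannel, int channels, int bytesPerSample) {
              return samplesPerChannel * channels * bytesPerSample;
          }

          public static void main(String[] args) {
              // 10 ms of 48 kHz stereo 16-bit PCM: 480 samples per channel.
              int samplesPerChannel = 480, channels = 2, bytesPerSample = 2;
              ByteBuffer buffer = ByteBuffer.allocateDirect(
                      expectedBytes(samplesPerChannel, channels, bytesPerSample));
              System.out.println(buffer.capacity()); // 1920
          }
      }
      ```

      Checking the incoming buffer's remaining bytes against this expected size is a cheap sanity guard before reading samples out of it.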
    • onPlaybackAudioFrame

      boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Occurs when the playback audio frame is received.
      Parameters:
      channelId - The channel name
      type - The audio frame type.
      samplesPerChannel - The samples per channel.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of audio channels. If the channel uses stereo, the data is interleaved. - 1: Mono. - 2: Stereo.
      samplesPerSec - The number of samples per channel per second in the audio frame.
      buffer - The audio frame payload.
      renderTimeMs - The render timestamp in ms.
      avsync_type - The audio/video sync type.
      Returns:
      - true: The playback audio frame is valid and is encoded and sent. - false: The playback audio frame is invalid and is not encoded or sent.
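      Because stereo data is interleaved (left, right, left, right, ...), per-channel processing has to step through the buffer in pairs. A minimal, SDK-independent sketch that downmixes an interleaved 16-bit stereo buffer to mono by averaging each left/right pair (assuming little-endian PCM, which is how raw frames are commonly delivered):

      ```java
      import java.nio.ByteBuffer;
      import java.nio.ByteOrder;

      public class StereoDownmix {
          // Average each interleaved left/right 16-bit sample pair into one mono sample.
          static ByteBuffer toMono(ByteBuffer stereo, int samplesPerChannel) {
              ByteBuffer in = stereo.duplicate().order(ByteOrder.LITTLE_ENDIAN);
              ByteBuffer mono = ByteBuffer.allocate(samplesPerChannel * 2)
                                          .order(ByteOrder.LITTLE_ENDIAN);
              for (int i = 0; i < samplesPerChannel; i++) {
                  short left = in.getShort();
                  short right = in.getShort();
                  mono.putShort((short) ((left + right) / 2));
              }
              mono.flip();
              return mono;
          }

          public static void main(String[] args) {
              ByteBuffer stereo = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
              stereo.putShort((short) 100).putShort((short) 300);  // sample 0: L, R
              stereo.putShort((short) -50).putShort((short) 50);   // sample 1: L, R
              stereo.flip();
              ByteBuffer mono = toMono(stereo, 2);
              System.out.println(mono.getShort() + " " + mono.getShort()); // 200 0
          }
      }
      ```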
    • onMixedAudioFrame

      boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Occurs when the mixed playback audio frame is received.
      Parameters:
      channelId - The channel name
      type - The audio frame type.
      samplesPerChannel - The samples per channel.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of audio channels. If the channel uses stereo, the data is interleaved. - 1: Mono. - 2: Stereo.
      samplesPerSec - The number of samples per channel per second in the audio frame.
      buffer - The audio frame payload.
      renderTimeMs - The render timestamp in ms.
      avsync_type - The audio/video sync type.
      Returns:
      - true: The mixed audio data is valid and is encoded and sent. - false: The mixed audio data is invalid and is not encoded or sent.
    • onEarMonitoringAudioFrame

      boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Occurs when the ear monitoring audio frame is received.
      Parameters:
      type - The audio frame type.
      samplesPerChannel - The samples per channel.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of audio channels. If the channel uses stereo, the data is interleaved. - 1: Mono. - 2: Stereo.
      samplesPerSec - The number of samples per channel per second in the audio frame.
      buffer - The audio frame payload.
      renderTimeMs - The render timestamp in ms.
      avsync_type - The audio/video sync type.
      Returns:
      - true: The ear monitoring audio frame is valid and is encoded and sent. - false: The ear monitoring audio frame is invalid and is not encoded or sent.
    • onPlaybackAudioFrameBeforeMixing

      boolean onPlaybackAudioFrameBeforeMixing(String channelId, int uid, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type, int rtpTimestamp, long presentationMs)
      Occurs when the playback audio frame before mixing is received.
      Parameters:
      channelId - The channel name
      uid - The user ID of the remote user.
      type - The audio frame type.
      samplesPerChannel - The samples per channel.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of audio channels. If the channel uses stereo, the data is interleaved. - 1: Mono. - 2: Stereo.
      samplesPerSec - The number of samples per channel per second in the audio frame.
      buffer - The audio frame payload.
      renderTimeMs - The render timestamp in ms.
      avsync_type - The audio/video sync type.
      rtpTimestamp - RTP timestamp of the first sample in the frame.
      presentationMs - The pts timestamp of this audio frame.
      Returns:
      - true: The playback audio frame before mixing is valid and is encoded and sent. - false: The playback audio frame before mixing is invalid and is not encoded or sent.
    • getObservedAudioFramePosition

      int getObservedAudioFramePosition()
      Sets the audio observation positions. After you successfully register the audio observer, the SDK uses the getObservedAudioFramePosition callback to determine, at each audio-frame processing node, whether to trigger the following callbacks: - onRecordAudioFrame - onPlaybackAudioFrame - onPlaybackAudioFrameBeforeMixing - onMixedAudioFrame You can set the positions that you want to observe by modifying the return value of getObservedAudioFramePosition according to your scenario.
      Returns:
      The bit mask that controls the audio observation positions: - `POSITION_PLAYBACK (0x01)`: The position for observing the playback audio of all remote users after mixing, which enables the SDK to trigger the onPlaybackAudioFrame callback. - `POSITION_RECORD (0x01 << 1)`: The position for observing the recorded audio of the local user, which enables the SDK to trigger the onRecordAudioFrame callback. - `POSITION_MIXED (0x01 << 2)`: The position for observing the mixed audio of the local user and all remote users, which enables the SDK to trigger the onMixedAudioFrame callback. - `POSITION_BEFORE_MIXING (0x01 << 3)`: The position for observing the audio of each remote user before mixing, which enables the SDK to trigger the onPlaybackAudioFrameBeforeMixing callback.
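      Multiple positions are combined with bitwise OR. The sketch below is SDK-independent: the constants are redeclared locally from the values documented above (in the SDK they are defined elsewhere), so treat the class and constant names here as illustrative:

      ```java
      public class ObservedPositions {
          // Bit-mask values as documented for getObservedAudioFramePosition.
          static final int POSITION_PLAYBACK      = 0x01;
          static final int POSITION_RECORD        = 0x01 << 1;
          static final int POSITION_MIXED         = 0x01 << 2;
          static final int POSITION_BEFORE_MIXING = 0x01 << 3;

          // Observe both the recorded and the playback positions.
          static int observedPosition() {
              return POSITION_RECORD | POSITION_PLAYBACK;
          }

          public static void main(String[] args) {
              System.out.println(Integer.toBinaryString(observedPosition())); // 11
          }
      }
      ```

      Returning only the positions you actually need keeps the SDK from invoking the other callbacks, which avoids unnecessary per-frame work.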
    • getRecordAudioParams

      AudioParams getRecordAudioParams()
      Sets the audio recording format for the onRecordAudioFrame callback. Implement this callback in the observer that you register by calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame, and you can set the audio recording format in the return value.
      Returns:
      The audio recording format. See AudioParams.
    • getPlaybackAudioParams

      AudioParams getPlaybackAudioParams()
      Sets the audio playback format for the onPlaybackAudioFrame callback. Implement this callback in the observer that you register by calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame, and you can set the audio playback format in the return value.
      Returns:
      The audio playback format. See AudioParams.
    • getMixedAudioParams

      AudioParams getMixedAudioParams()
      Sets the audio mixing format for the onMixedAudioFrame callback. Implement this callback in the observer that you register by calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame, and you can set the audio mixing format in the return value.
      Returns:
      The audio mixing format. See AudioParams.
    • getEarMonitoringAudioParams

      AudioParams getEarMonitoringAudioParams()
      Sets the audio ear monitoring format for the onEarMonitoringAudioFrame callback. Implement this callback in the observer that you register by calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame, and you can set the audio ear monitoring format in the return value.
      Returns:
      The audio ear monitoring format. See AudioParams.