Package io.agora.rtc2

Interface IAudioFrameObserver


public interface IAudioFrameObserver
The IAudioFrameObserver interface. Implement this interface to receive and process raw PCM audio frames at configurable positions in the audio pipeline.
  • Method Details

    • onRecordAudioFrame

      boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Parameters:
      channelId - The channel ID.
      type - The audio frame type.
      samplesPerChannel - The number of samples per channel in the audio frame.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of channels. - 1: Mono. - 2: Stereo. If the channel uses stereo, the data is interleaved.
      samplesPerSec - Recording sample rate (Hz).
      buffer - The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
      renderTimeMs - The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames, including scenarios that use external video sources.
      avsync_type - Reserved for future use.
      Returns:
      Reserved; the return value currently has no practical meaning.
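The buffer size formula above (samplesPerChannel x channels x bytesPerSample) determines how many bytes a callback delivers. The following is a minimal, SDK-independent sketch of that arithmetic and of copying the frame out of the `ByteBuffer` before processing it; the class and method names here are illustrative, not part of the SDK.

```java
import java.nio.ByteBuffer;

// Sketch: expected byte count of one raw PCM frame, per the parameter docs.
public class AudioFrameSize {
    public static int expectedBytes(int samplesPerChannel, int channels, int bytesPerSample) {
        return samplesPerChannel * channels * bytesPerSample;
    }

    // Copy the frame out of the (possibly direct) ByteBuffer before doing any
    // slow processing, since the SDK may reuse the buffer after the callback.
    public static byte[] copyFrame(ByteBuffer buffer, int samplesPerChannel,
                                   int channels, int bytesPerSample) {
        byte[] pcm = new byte[expectedBytes(samplesPerChannel, channels, bytesPerSample)];
        buffer.duplicate().get(pcm); // duplicate() leaves the original position untouched
        return pcm;
    }

    public static void main(String[] args) {
        // 10 ms at 48 kHz -> 480 samples per channel; stereo, 16-bit samples.
        System.out.println(expectedBytes(480, 2, 2)); // 1920 bytes
    }
}
```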
    • onPlaybackAudioFrame

      boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Parameters:
      channelId - The channel ID.
      type - The audio frame type.
      samplesPerChannel - The number of samples per channel in the audio frame.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of channels. - 1: Mono. - 2: Stereo. If the channel uses stereo, the data is interleaved.
      samplesPerSec - The sample rate (Hz) of the audio frame.
      buffer - The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
      renderTimeMs - The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames, including scenarios that use external video sources.
      avsync_type - Reserved for future use.
      Returns:
      Reserved; the return value currently has no practical meaning.
    • onMixedAudioFrame

      boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Parameters:
      channelId - The channel ID.
      type - The audio frame type.
      samplesPerChannel - The number of samples per channel in the audio frame.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of channels. - 1: Mono. - 2: Stereo. If the channel uses stereo, the data is interleaved.
      samplesPerSec - The sample rate (Hz) of the audio frame.
      buffer - The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
      renderTimeMs - The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames, including scenarios that use external video sources.
      avsync_type - Reserved for future use.
      Returns:
      Reserved; the return value currently has no practical meaning.
    • onEarMonitoringAudioFrame

      boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
      Parameters:
      type - The audio frame type.
      samplesPerChannel - The number of samples per channel in the audio frame.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of channels. - 1: Mono. - 2: Stereo. If the channel uses stereo, the data is interleaved.
      samplesPerSec - The sample rate (Hz) of the audio frame.
      buffer - The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
      renderTimeMs - The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames, including scenarios that use external video sources.
      avsync_type - Reserved for future use.
      Returns:
      Reserved; the return value currently has no practical meaning.
    • onPlaybackAudioFrameBeforeMixing

      boolean onPlaybackAudioFrameBeforeMixing(String channelId, int uid, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type, int rtpTimestamp, long presentationMs)
      Parameters:
      channelId - The channel ID.
      uid - The user ID of the subscribed remote user whose audio is observed.
      type - The audio frame type.
      samplesPerChannel - The number of samples per channel in the audio frame.
      bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
      channels - The number of channels. - 1: Mono. - 2: Stereo. If the channel uses stereo, the data is interleaved.
      samplesPerSec - The sample rate (Hz) of the audio frame.
      buffer - The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
      renderTimeMs - The timestamp (ms) of the audio frame. You can use this timestamp to synchronize audio and video frames, including scenarios that use external video sources.
      avsync_type - Reserved for future use.
      Returns:
      Reserved; the return value currently has no practical meaning.
    • getObservedAudioFramePosition

      int getObservedAudioFramePosition()
      Returns:
      A bitmask that sets the observation positions, combining the following values:
      - POSITION_PLAYBACK (0x0001): Observes the playback audio mixed from all remote users, corresponding to the `onPlaybackAudioFrame` callback.
      - POSITION_RECORD (0x0002): Observes the local user's captured audio, corresponding to the `onRecordAudioFrame` callback.
      - POSITION_MIXED (0x0004): Observes the playback audio mixed from the local user and all remote users, corresponding to the `onMixedAudioFrame` callback.
      - POSITION_BEFORE_MIXING (0x0008): Observes the audio of a single remote user before mixing, corresponding to the `onPlaybackAudioFrameBeforeMixing` callback.
      - POSITION_EAR_MONITORING (0x0010): Observes the in-ear monitoring audio of the local user, corresponding to the `onEarMonitoringAudioFrame` callback.
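Since the return value is a bitmask, multiple positions can be observed at once by OR-ing the constants together. A self-contained sketch of that combination follows; the `POSITION_*` values are taken from the list above and redeclared locally so the example compiles without the SDK.

```java
// Sketch: combining observation positions for getObservedAudioFramePosition().
// Values mirror the SDK constants documented above.
public class ObservedPositions {
    public static final int POSITION_PLAYBACK       = 0x0001;
    public static final int POSITION_RECORD         = 0x0002;
    public static final int POSITION_MIXED          = 0x0004;
    public static final int POSITION_BEFORE_MIXING  = 0x0008;
    public static final int POSITION_EAR_MONITORING = 0x0010;

    // Observe both the captured local audio and the mixed playback audio,
    // i.e. receive onRecordAudioFrame and onPlaybackAudioFrame callbacks.
    public static int recordAndPlayback() {
        return POSITION_RECORD | POSITION_PLAYBACK;
    }

    // Check whether a given position is enabled in a mask.
    public static boolean observes(int mask, int position) {
        return (mask & position) != 0;
    }
}
```

In an implementation of `getObservedAudioFramePosition()`, you would return such a combined mask from the method body.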
    • getRecordAudioParams

      AudioParams getRecordAudioParams()
      Returns:
      The audio format to use for captured audio frames reported by `onRecordAudioFrame`; see `AudioParams`.
    • getPlaybackAudioParams

      AudioParams getPlaybackAudioParams()
      Returns:
      The audio format to use for playback audio frames reported by `onPlaybackAudioFrame`; see `AudioParams`.
    • getMixedAudioParams

      AudioParams getMixedAudioParams()
      Returns:
      The audio format to use for the mixed captured-and-playback audio frames reported by `onMixedAudioFrame`; see `AudioParams`.
    • getEarMonitoringAudioParams

      AudioParams getEarMonitoringAudioParams()
      Returns:
      The audio format to use for in-ear monitoring audio frames reported by `onEarMonitoringAudioFrame`; see `AudioParams`.
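The `getXxxAudioParams` getters let you request a specific frame format (sample rate, channel count, operation mode, and samples per callback). Assuming `AudioParams` carries a samples-per-call field as in the Agora docs, the arithmetic for picking that value from a target callback interval can be sketched with a hypothetical helper (not part of the SDK):

```java
// Sketch: choosing a samplesPerCall value for a target callback interval.
// AudioParamsMath is an illustrative helper, not an SDK class.
public class AudioParamsMath {
    // Samples per channel delivered in each callback for a given interval.
    public static int samplesPerCall(int sampleRateHz, int intervalMs) {
        return sampleRateHz * intervalMs / 1000;
    }

    public static void main(String[] args) {
        // 16 kHz mono with 10 ms callbacks -> 160 samples per call.
        System.out.println(samplesPerCall(16000, 10));
        // 48 kHz with 10 ms callbacks -> 480 samples per call (per channel).
        System.out.println(samplesPerCall(48000, 10));
    }
}
```

Multiply the result by the channel count and bytes per sample to get the byte size of each delivered buffer, matching the buffer-size formula in the callback docs above.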