Agora Java API Reference for Android
Public Member Functions
| abstract int | setChannelProfile (int profile) |
| Sets the channel profile. More... | |
| abstract int | setClientRole (int role) |
| Sets the client role. More... | |
| abstract int | setClientRole (int role, ClientRoleOptions options) |
| Sets the user role and the audience latency level in a live streaming scenario. More... | |
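For example, a live-streaming session typically sets the channel profile and client role before joining a channel. A minimal sketch, assuming `engine` is an initialized RtcEngine instance and using constant names from the SDK's Constants class:

```java
// Switch the engine into live-streaming mode and publish as a host.
engine.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING);
engine.setClientRole(Constants.CLIENT_ROLE_BROADCASTER);
```

Both calls return 0 on success and a negative error code on failure.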
| abstract int | sendCustomReportMessage (String id, String category, String event, String label, int value) |
| Reports customized messages. More... | |
| abstract int | preloadChannel (String token, String channelName, int optionalUid) |
| Preloads a channel with token, channelName, and optionalUid. More... | |
| abstract int | preloadChannelWithUserAccount (String token, String channelName, String userAccount) |
| Preloads a channel with token, channelName, and userAccount. More... | |
| abstract int | updatePreloadChannelToken (String token) |
| Updates the wildcard token for preloading channels. More... | |
| abstract int | joinChannel (String token, String channelId, String optionalInfo, int uid) |
| Joins a channel. More... | |
| abstract int | joinChannel (String token, String channelId, int uid, ChannelMediaOptions options) |
| Joins a channel with media options. More... | |
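A minimal join sketch, assuming an initialized `engine`, a valid `token` from your app server, and ChannelMediaOptions field names that should be verified against your SDK version:

```java
ChannelMediaOptions options = new ChannelMediaOptions();
options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
options.publishMicrophoneTrack = true;
options.publishCameraTrack = true;
// Passing uid 0 lets the SDK assign a user ID, which is then
// reported in the onJoinChannelSuccess callback.
int ret = engine.joinChannel(token, "demo_channel", 0, options);
```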
| abstract int | registerLocalUserAccount (String appId, String userAccount) |
| Registers a user account. More... | |
| abstract int | joinChannelWithUserAccount (String token, String channelName, String userAccount) |
| Joins a channel with a User Account and Token. More... | |
| abstract int | joinChannelWithUserAccount (String token, String channelName, String userAccount, ChannelMediaOptions options) |
| Joins a channel with a user account and token, and sets the media options. More... | |
| abstract int | getUserInfoByUserAccount (String userAccount, UserInfo userInfo) |
| Gets the user information by passing in the user account. More... | |
| abstract int | getUserInfoByUid (int uid, UserInfo userInfo) |
| Gets the user information by passing in the user ID. More... | |
| abstract int | leaveChannel () |
| Leaves a channel. More... | |
| abstract int | leaveChannel (LeaveChannelOptions options) |
| Sets channel options and leaves the channel. More... | |
| abstract int | renewToken (String token) |
| Renews the token. More... | |
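renewToken is typically called from the IRtcEngineEventHandler.onTokenPrivilegeWillExpire callback. A sketch, where `fetchTokenFromServer` is a hypothetical helper that requests a fresh token from your app server:

```java
@Override
public void onTokenPrivilegeWillExpire(String token) {
    // Hand a fresh token to the engine before the current one expires.
    String newToken = fetchTokenFromServer(); // hypothetical helper
    engine.renewToken(newToken);
}
```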
| abstract DeviceInfo | getAudioDeviceInfo () |
| Gets the audio device information. More... | |
| abstract int | enableWebSdkInteroperability (boolean enabled) |
| Enables interoperability with the Agora Web SDK (applicable only in live streaming scenarios). More... | |
| abstract int | getConnectionState () |
| Gets the current connection state of the SDK. More... | |
| abstract int | enableAudio () |
| Enables the audio module. More... | |
| abstract int | disableAudio () |
| Disables the audio module. More... | |
| abstract int | pauseAudio () |
| abstract int | resumeAudio () |
| abstract int | setAudioProfile (int profile) |
| Sets the audio profile. More... | |
| abstract int | setAudioProfile (int profile, int scenario) |
| Sets the audio profile and audio scenario. More... | |
| abstract int | setAudioScenario (int scenario) |
| Sets the audio scenario. More... | |
| abstract int | setHighQualityAudioParameters (boolean fullband, boolean stereo, boolean fullBitrate) |
| abstract int | adjustRecordingSignalVolume (int volume) |
| Adjusts the capturing signal volume. More... | |
| abstract int | adjustPlaybackSignalVolume (int volume) |
| Adjusts the playback signal volume of all remote users. More... | |
| abstract int | enableAudioVolumeIndication (int interval, int smooth, boolean reportVad) |
| Enables the reporting of users' volume indication. More... | |
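A usage sketch, assuming an initialized `engine`; volume reports arrive in the onAudioVolumeIndication callback of the registered event handler:

```java
// Report speaker volumes roughly every 200 ms, with a smoothing factor
// of 3 and voice activity detection enabled for the local user.
engine.enableAudioVolumeIndication(200, 3, true);
```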
| abstract int | enableLocalAudio (boolean enabled) |
| Enables or disables the local audio capture. More... | |
| abstract int | muteLocalAudioStream (boolean muted) |
| Stops or resumes publishing the local audio stream. More... | |
| abstract int | muteRemoteAudioStream (int uid, boolean muted) |
| Stops or resumes subscribing to the audio stream of a specified user. More... | |
| abstract int | adjustUserPlaybackSignalVolume (int uid, int volume) |
| Adjusts the playback signal volume of a specified remote user. More... | |
| abstract int | muteAllRemoteAudioStreams (boolean muted) |
| Stops or resumes subscribing to the audio streams of all remote users. More... | |
| abstract int | enableVideo () |
| Enables the video module. More... | |
| abstract int | disableVideo () |
| Disables the video module. More... | |
| abstract int | setVideoEncoderConfiguration (VideoEncoderConfiguration config) |
| Sets the video encoder configuration. More... | |
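A configuration sketch, assuming an initialized `engine`; the dimension and enum names follow the SDK's VideoEncoderConfiguration class and should be checked against your SDK version:

```java
// 720p at 15 fps, SDK-chosen bitrate, orientation adapted to the capture.
VideoEncoderConfiguration config = new VideoEncoderConfiguration(
        VideoEncoderConfiguration.VD_1280x720,
        VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_15,
        VideoEncoderConfiguration.STANDARD_BITRATE,
        VideoEncoderConfiguration.ORIENTATION_MODE.ORIENTATION_MODE_ADAPTIVE);
engine.setVideoEncoderConfiguration(config);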
| abstract CodecCapInfo[] | queryCodecCapability () |
| Queries the video codec capabilities of the SDK. More... | |
| abstract int | queryDeviceScore () |
| Queries the device score. More... | |
| abstract AgoraFocalLengthInfo[] | queryCameraFocalLengthCapability () |
| Queries the focal length capability supported by the camera. More... | |
| abstract int | setCameraCapturerConfiguration (CameraCapturerConfiguration config) |
| Sets the camera capture configuration. More... | |
| abstract int | setupLocalVideo (VideoCanvas local) |
| Initializes the local video view. More... | |
| abstract int | setupRemoteVideo (VideoCanvas remote) |
| Initializes the video view of a remote user. More... | |
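A view-binding sketch, assuming an initialized `engine`, a host `activity`, and a `remoteUid` obtained from the onUserJoined callback:

```java
// Local preview: uid 0 always refers to the local user.
SurfaceView localView = new SurfaceView(activity);
engine.setupLocalVideo(new VideoCanvas(localView, VideoCanvas.RENDER_MODE_HIDDEN, 0));

// Remote view: bind the uid reported by onUserJoined.
SurfaceView remoteView = new SurfaceView(activity);
engine.setupRemoteVideo(new VideoCanvas(remoteView, VideoCanvas.RENDER_MODE_FIT, remoteUid));
```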
| abstract int | setRemoteRenderMode (int uid, int renderMode) |
| abstract int | setLocalRenderMode (int renderMode) |
| Sets the local video display mode. More... | |
| abstract int | setRemoteRenderMode (int uid, int renderMode, int mirrorMode) |
| Updates the display mode of the video view of a remote user. More... | |
| abstract int | setLocalRenderMode (int renderMode, int mirrorMode) |
| Updates the display mode of the local video view. More... | |
| abstract int | startPreview () |
| Enables the local video preview. More... | |
| abstract int | startPreview (Constants.VideoSourceType sourceType) |
| Enables the local video preview and specifies the video source for the preview. More... | |
| abstract int | stopPreview () |
| Stops the local video preview. More... | |
| abstract int | stopPreview (Constants.VideoSourceType sourceType) |
| Stops the local video preview of the specified video source. More... | |
| abstract int | enableLocalVideo (boolean enabled) |
| Enables/Disables the local video capture. More... | |
| abstract int | startCameraCapture (Constants.VideoSourceType sourceType, CameraCapturerConfiguration config) |
| Starts camera capture. More... | |
| abstract int | stopCameraCapture (Constants.VideoSourceType sourceType) |
| Stops camera capture. More... | |
| abstract int | startLocalVideoTranscoder (LocalTranscoderConfiguration config) |
| Starts the local video mixing. More... | |
| abstract int | stopLocalVideoTranscoder () |
| Stops the local video mixing. More... | |
| abstract int | updateLocalTranscoderConfiguration (LocalTranscoderConfiguration config) |
| Updates the local video mixing configuration. More... | |
| abstract int | startLocalAudioMixer (LocalAudioMixerConfiguration config) |
| Starts local audio mixing. More... | |
| abstract int | updateLocalAudioMixerConfiguration (LocalAudioMixerConfiguration config) |
| Updates the configurations for mixing audio streams locally. More... | |
| abstract int | stopLocalAudioMixer () |
| Stops the local audio mixing. More... | |
| abstract int | muteLocalVideoStream (boolean muted) |
| Stops or resumes publishing the local video stream. More... | |
| abstract int | muteRemoteVideoStream (int uid, boolean muted) |
| Stops or resumes subscribing to the video stream of a specified user. More... | |
| abstract int | muteAllRemoteVideoStreams (boolean muted) |
| Stops or resumes subscribing to the video streams of all remote users. More... | |
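A mute/resume sketch, assuming an initialized `engine`:

```java
// Temporarily stop publishing local video and stop receiving all remote video.
engine.muteLocalVideoStream(true);
engine.muteAllRemoteVideoStreams(true);
// ...later, resume both directions.
engine.muteLocalVideoStream(false);
engine.muteAllRemoteVideoStreams(false);
```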
| abstract int | setBeautyEffectOptions (boolean enabled, BeautyOptions options) |
| Sets the image enhancement options. More... | |
| abstract int | setBeautyEffectOptions (boolean enabled, BeautyOptions options, Constants.MediaSourceType sourceType) |
| Sets the image enhancement options and specifies the media source. More... | |
| abstract int | setFaceShapeBeautyOptions (boolean enabled, FaceShapeBeautyOptions options) |
| Sets the face shape beauty options. More... | |
| abstract int | setFaceShapeBeautyOptions (boolean enabled, FaceShapeBeautyOptions options, Constants.MediaSourceType sourceType) |
| Sets the face shape options and specifies the media source. More... | |
| abstract FaceShapeBeautyOptions | getFaceShapeBeautyOptions () |
| Gets the face shape beauty options. More... | |
| abstract FaceShapeBeautyOptions | getFaceShapeBeautyOptions (Constants.MediaSourceType sourceType) |
| Gets the face shape beauty options of the specified media source. More... | |
| abstract int | setFaceShapeAreaOptions (FaceShapeAreaOptions options) |
| Sets the options for beauty enhancement facial areas. More... | |
| abstract int | setFaceShapeAreaOptions (FaceShapeAreaOptions options, Constants.MediaSourceType sourceType) |
| Sets the image enhancement options for facial areas and specifies the media source. More... | |
| abstract FaceShapeAreaOptions | getFaceShapeAreaOptions (int shapeArea) |
| Gets the facial beauty area options. More... | |
| abstract FaceShapeAreaOptions | getFaceShapeAreaOptions (int shapeArea, Constants.MediaSourceType sourceType) |
| Gets the facial beauty area options of the specified media source. More... | |
| abstract int | setFilterEffectOptions (boolean enabled, FilterEffectOptions options) |
| Sets the filter effect options. More... | |
| abstract int | setFilterEffectOptions (boolean enabled, FilterEffectOptions options, Constants.MediaSourceType sourceType) |
| Sets the filter effect options and specifies the media source. More... | |
| abstract int | setLowlightEnhanceOptions (boolean enabled, LowLightEnhanceOptions options) |
| Sets low-light enhancement. More... | |
| abstract int | setLowlightEnhanceOptions (boolean enabled, LowLightEnhanceOptions options, Constants.MediaSourceType sourceType) |
| Sets low light enhance options and specifies the media source. More... | |
| abstract int | setVideoDenoiserOptions (boolean enabled, VideoDenoiserOptions options) |
| Sets video noise reduction. More... | |
| abstract int | setVideoDenoiserOptions (boolean enabled, VideoDenoiserOptions options, Constants.MediaSourceType sourceType) |
| Sets video noise reduction and specifies the media source. More... | |
| abstract int | setColorEnhanceOptions (boolean enabled, ColorEnhanceOptions options) |
| Sets color enhancement. More... | |
| abstract int | setColorEnhanceOptions (boolean enabled, ColorEnhanceOptions options, Constants.MediaSourceType sourceType) |
| Sets color enhance options and specifies the media source. More... | |
| abstract IVideoEffectObject | createVideoEffectObject (String bundlePath, Constants.MediaSourceType sourceType) |
| Creates a video effect object. More... | |
| abstract int | destroyVideoEffectObject (IVideoEffectObject videoEffectObject) |
| Destroys a video effect object. More... | |
| abstract int | enableVirtualBackground (boolean enabled, VirtualBackgroundSource backgroundSource, SegmentationProperty segproperty) |
| Enables/Disables the virtual background. More... | |
| abstract int | enableVirtualBackground (boolean enabled, VirtualBackgroundSource backgroundSource, SegmentationProperty segproperty, Constants.MediaSourceType sourceType) |
| Enables the virtual background and specifies the media source, or disables the virtual background. More... | |
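A sketch that replaces the camera background with a solid color, assuming an initialized `engine`; the VirtualBackgroundSource field and constant names should be verified against your SDK version:

```java
VirtualBackgroundSource source = new VirtualBackgroundSource();
source.backgroundSourceType = VirtualBackgroundSource.BACKGROUND_COLOR;
source.color = 0x00A86B; // hex RGB background color
engine.enableVirtualBackground(true, source, new SegmentationProperty());
```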
| abstract int | setDefaultAudioRoutetoSpeakerphone (boolean defaultToSpeaker) |
| Sets the default audio playback route. More... | |
| abstract int | setEnableSpeakerphone (boolean enabled) |
| Enables/Disables the audio route to the speakerphone. More... | |
| abstract int | setRouteInCommunicationMode (int route) |
| Selects the audio playback route in communication audio mode. More... | |
| abstract boolean | isSpeakerphoneEnabled () |
| Checks whether the speakerphone is enabled. More... | |
| abstract int | enableInEarMonitoring (boolean enabled) |
| Enables in-ear monitoring. More... | |
| abstract int | enableInEarMonitoring (boolean enabled, int includeAudioFilters) |
| Enables in-ear monitoring. More... | |
| abstract int | setInEarMonitoringVolume (int volume) |
| Sets the volume of the in-ear monitor. More... | |
| abstract int | setLocalVoicePitch (double pitch) |
| Changes the voice pitch of the local speaker. More... | |
| abstract int | setLocalVoiceFormant (double formantRatio) |
| Sets the formant ratio to change the timbre of human voice. More... | |
| abstract int | setLocalVoiceEqualization (Constants.AUDIO_EQUALIZATION_BAND_FREQUENCY bandFrequency, int bandGain) |
| Sets the local voice equalization effect. More... | |
| abstract int | setLocalVoiceReverb (Constants.AUDIO_REVERB_TYPE reverbKey, int value) |
| Sets the local voice reverberation. More... | |
| abstract int | setHeadphoneEQPreset (int preset) |
| Sets the preset headphone equalization effect. More... | |
| abstract int | setHeadphoneEQParameters (int lowGain, int highGain) |
| Sets the low- and high-frequency parameters of the headphone equalizer. More... | |
| abstract int | setAudioEffectPreset (int preset) |
| Sets an SDK preset audio effect. More... | |
| abstract int | setVoiceBeautifierPreset (int preset) |
| Sets a preset voice beautifier effect. More... | |
| abstract int | setVoiceConversionPreset (int preset) |
| Sets a preset voice conversion effect. More... | |
| abstract int | setAudioEffectParameters (int preset, int param1, int param2) |
| Sets parameters for SDK preset audio effects. More... | |
| abstract int | setVoiceBeautifierParameters (int preset, int param1, int param2) |
| Sets parameters for the preset voice beautifier effects. More... | |
| abstract int | setVoiceConversionParameters (int preset, int param1, int param2) |
| abstract int | enableSoundPositionIndication (boolean enabled) |
| Enables or disables stereo panning for remote users. More... | |
| abstract int | setRemoteVoicePosition (int uid, double pan, double gain) |
| Sets the 2D position (the position on the horizontal plane) of the remote user's voice. More... | |
| abstract int | enableVoiceAITuner (boolean enabled, Constants.VOICE_AI_TUNER_TYPE type) |
| Enables or disables the voice AI tuner. More... | |
| abstract int | startAudioMixing (String filePath, boolean loopback, int cycle) |
| Starts playing the music file. More... | |
| abstract int | enableSpatialAudio (boolean enabled) |
| Enables or disables the spatial audio effect. More... | |
| abstract int | setRemoteUserSpatialAudioParams (int uid, SpatialAudioParams params) |
| Sets the spatial audio effect parameters of the remote user. More... | |
| abstract int | setRemoteVideoSubscriptionOptions (int uid, VideoSubscriptionOptions options) |
| Sets the options for subscribing to the video stream of a specified remote user. More... | |
| abstract int | setAINSMode (boolean enabled, int mode) |
| Sets whether to enable the AI noise suppression function and set the noise suppression mode. More... | |
| abstract int | startAudioMixing (String filePath, boolean loopback, int cycle, int startPos) |
| Starts playing the music file. More... | |
| abstract int | stopAudioMixing () |
| Stops playing the music file. More... | |
| abstract int | pauseAudioMixing () |
| Pauses playing and mixing the music file. More... | |
| abstract int | resumeAudioMixing () |
| Resumes playing and mixing the music file. More... | |
| abstract int | adjustAudioMixingVolume (int volume) |
| Adjusts the volume during audio mixing. More... | |
| abstract int | adjustAudioMixingPlayoutVolume (int volume) |
| Adjusts the volume of audio mixing for local playback. More... | |
| abstract int | adjustAudioMixingPublishVolume (int volume) |
| Adjusts the volume of audio mixing for publishing. More... | |
| abstract int | getAudioMixingPlayoutVolume () |
| Retrieves the audio mixing volume for local playback. More... | |
| abstract int | getAudioMixingPublishVolume () |
| Retrieves the audio mixing volume for publishing. More... | |
| abstract int | getAudioMixingDuration () |
| Retrieves the duration (ms) of the music file. More... | |
| abstract int | getAudioMixingCurrentPosition () |
| Retrieves the playback position (ms) of the music file. More... | |
| abstract int | setAudioMixingPosition (int pos) |
| Sets the audio mixing position. More... | |
| abstract int | setAudioMixingDualMonoMode (Constants.AudioMixingDualMonoMode mode) |
| Sets the channel mode of the current audio file. More... | |
| abstract int | setAudioMixingPitch (int pitch) |
| Sets the pitch of the local music file. More... | |
| abstract int | setAudioMixingPlaybackSpeed (int speed) |
| Sets the playback speed of the current audio file. More... | |
| abstract int | selectAudioTrack (int audioIndex) |
| Selects the audio track used during playback. More... | |
| abstract int | getAudioTrackCount () |
| Gets the index of audio tracks of the current music file. More... | |
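A mixing workflow sketch, assuming an initialized `engine`; the file path is a placeholder:

```java
// Mix a music file into the published stream, playing it once from position 0.
engine.startAudioMixing("/assets/bgm.mp3", false, 1, 0);
engine.adjustAudioMixingVolume(60);            // range 0-100
int durationMs = engine.getAudioMixingDuration();
engine.setAudioMixingPosition(durationMs / 2); // seek to the midpoint
```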
| abstract IAudioEffectManager | getAudioEffectManager () |
| Gets the IAudioEffectManager class to manage the audio effect files. More... | |
| abstract int | startAudioRecording (String filePath, int quality) |
| Starts client-side recording. More... | |
| abstract int | startAudioRecording (AudioRecordingConfiguration config) |
| Starts client-side recording and applies recording configurations. More... | |
| abstract int | stopAudioRecording () |
| Stops client-side recording. More... | |
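A recording sketch, assuming an initialized `engine`; the path is a placeholder and the quality constant name should be checked against your SDK version:

```java
// Record the call to a local file at medium quality.
engine.startAudioRecording("/sdcard/call.aac",
        Constants.AUDIO_RECORDING_QUALITY_MEDIUM);
// ...when the call ends:
engine.stopAudioRecording();
```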
| abstract int | startEchoTest (EchoTestConfiguration config) |
| Starts an audio device loopback test. More... | |
| abstract int | stopEchoTest () |
| Stops the audio call test. More... | |
| abstract int | startLastmileProbeTest (LastmileProbeConfig config) |
| Starts the last mile network probe test. More... | |
| abstract int | stopLastmileProbeTest () |
| Stops the last mile network probe test. More... | |
| abstract int | setExternalAudioSource (boolean enabled, int sampleRate, int channels) |
| Sets the external audio source. More... | |
| abstract int | setExternalAudioSink (boolean enabled, int sampleRate, int channels) |
| Sets the external audio sink. More... | |
| abstract int | setExternalRemoteEglContext (Object eglContext) |
| Sets the EGL context for rendering remote video streams. More... | |
| abstract int | pullPlaybackAudioFrame (byte[] data, int lengthInByte) |
| Pulls the remote audio data. More... | |
| abstract int | pullPlaybackAudioFrame (ByteBuffer data, int lengthInByte) |
| Pulls the remote audio data. More... | |
| abstract int | startRecordingDeviceTest (int indicationInterval) |
| Starts the audio capturing device test. More... | |
| abstract int | stopRecordingDeviceTest () |
| Stops the audio capturing device test. More... | |
| abstract int | startPlaybackDeviceTest (String audioFileName) |
| Starts the audio playback device test. More... | |
| abstract int | stopPlaybackDeviceTest () |
| Stops the audio playback device test. More... | |
| abstract int | createCustomAudioTrack (Constants.AudioTrackType trackType, AudioTrackConfig config) |
| Creates a custom audio track. More... | |
| abstract int | destroyCustomAudioTrack (int trackId) |
| Destroys the specified audio track. More... | |
| abstract int | setExternalAudioSource (boolean enabled, int sampleRate, int channels, boolean localPlayback, boolean publish) |
| Sets the external audio source parameters. More... | |
| abstract int | pushExternalAudioFrame (byte[] data, long timestamp) |
| abstract int | pushExternalAudioFrame (ByteBuffer data, long timestamp, int trackId) |
| abstract int | pushExternalAudioFrame (byte[] data, long timestamp, int sampleRate, int channels, Constants.BytesPerSample bytesPerSample, int trackId) |
| Pushes the external audio frame to the SDK. More... | |
| abstract int | pushExternalAudioFrame (ByteBuffer data, long timestamp, int sampleRate, int channels, Constants.BytesPerSample bytesPerSample, int trackId) |
| abstract int | setExternalVideoSource (boolean enable, boolean useTexture, Constants.ExternalVideoSourceType sourceType) |
| Configures the external video source. More... | |
| abstract int | setExternalVideoSource (boolean enable, boolean useTexture, Constants.ExternalVideoSourceType sourceType, EncodedVideoTrackOptions encodedOpt) |
| abstract boolean | pushExternalVideoFrame (VideoFrame frame) |
| Pushes the external raw video frame to the SDK. More... | |
| abstract int | pushExternalVideoFrameById (VideoFrame frame, int videoTrackId) |
| Pushes the external raw video frame to the SDK through video tracks. More... | |
| abstract int | pushExternalVideoFrameById (AgoraVideoFrame frame, int videoTrackId) |
| Pushes the external raw video frame to the SDK through video tracks. More... | |
| abstract int | pushExternalEncodedVideoFrame (ByteBuffer data, EncodedVideoFrameInfo frameInfo) |
| abstract int | pushExternalEncodedVideoFrameById (ByteBuffer data, EncodedVideoFrameInfo frameInfo, int videoTrackId) |
| abstract boolean | pushExternalVideoFrame (AgoraVideoFrame frame) |
| Pushes the external raw video frame to the SDK. More... | |
| abstract boolean | isTextureEncodeSupported () |
| Checks whether the device supports texture encoding for video. More... | |
| abstract int | registerAudioFrameObserver (IAudioFrameObserver observer) |
| Registers an audio frame observer object. More... | |
| abstract int | registerAudioEncodedFrameObserver (AudioEncodedFrameObserverConfig config, IAudioEncodedFrameObserver observer) |
| Registers an encoded audio observer. More... | |
| abstract int | setRecordingAudioFrameParameters (int sampleRate, int channel, int mode, int samplesPerCall) |
| Sets the format of the captured raw audio data. More... | |
| abstract int | setPlaybackAudioFrameParameters (int sampleRate, int channel, int mode, int samplesPerCall) |
| Sets the format of the raw audio playback data. More... | |
| abstract int | setMixedAudioFrameParameters (int sampleRate, int channel, int samplesPerCall) |
| Sets the format of the raw audio data after mixing for audio capture and playback. More... | |
| abstract int | setEarMonitoringAudioFrameParameters (int sampleRate, int channel, int mode, int samplesPerCall) |
| Sets the format of the in-ear monitoring raw audio data. More... | |
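A raw-audio sketch, assuming an initialized `engine` and an `observer` that implements IAudioFrameObserver:

```java
// Receive captured audio as 16 kHz mono, read-only, in roughly 10 ms
// chunks (160 samples per callback).
engine.setRecordingAudioFrameParameters(16000, 1,
        Constants.RAW_AUDIO_FRAME_OP_MODE_READ_ONLY, 160);
engine.registerAudioFrameObserver(observer);
```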
| abstract int | addVideoWatermark (AgoraImage watermark) |
| Adds a watermark image to the local video. More... | |
| abstract int | addVideoWatermark (String watermarkUrl, WatermarkOptions options) |
| Adds a watermark image to the local video. More... | |
| abstract int | addVideoWatermark (WatermarkConfig config) |
| Adds a watermark image to the local video. More... | |
| abstract int | removeVideoWatermark (String id) |
| Removes the watermark image from the local video. More... | |
| abstract int | clearVideoWatermarks () |
| Removes the watermark image from the video stream. More... | |
| abstract int | setRemoteUserPriority (int uid, int userPriority) |
| abstract int | setRemoteSubscribeFallbackOption (Constants.StreamFallbackOptions option) |
| Sets the fallback option for the subscribed video stream based on the network conditions. More... | |
| abstract int | setRemoteSubscribeFallbackOption (int option) |
| Sets the fallback option for the subscribed video stream based on the network conditions. More... | |
| abstract int | setHighPriorityUserList (int[] uidList, int option) |
| abstract int | enableDualStreamMode (boolean enabled) |
| Enables or disables dual-stream mode on the sender side. More... | |
| abstract int | enableDualStreamMode (boolean enabled, SimulcastStreamConfig streamConfig) |
| Sets the dual-stream mode on the sender side and the low-quality video stream. More... | |
| abstract int | setDualStreamMode (Constants.SimulcastStreamMode mode) |
| Sets the dual-stream mode on the sender side. More... | |
| abstract int | setDualStreamMode (Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) |
| Sets dual-stream mode configuration on the sender side. More... | |
| abstract int | setSimulcastConfig (SimulcastConfig simulcastConfig) |
| Sets the simulcast video stream configuration. More... | |
| abstract int | setLocalRenderTargetFps (Constants.VideoSourceType sourceType, int targetFps) |
| Sets the maximum frame rate for rendering local video. More... | |
| abstract int | setRemoteRenderTargetFps (int targetFps) |
| Sets the maximum frame rate for rendering remote video. More... | |
| abstract int | setRemoteVideoStreamType (int uid, Constants.VideoStreamType streamType) |
| Sets the video stream type to subscribe to. More... | |
| abstract int | setRemoteVideoStreamType (int uid, int streamType) |
| Sets the video stream type to subscribe to. More... | |
| abstract int | setRemoteDefaultVideoStreamType (Constants.VideoStreamType streamType) |
| Sets the default video stream type to subscribe to. More... | |
| abstract int | setRemoteDefaultVideoStreamType (int streamType) |
| Sets the default video stream type to subscribe to. More... | |
| abstract int | setSubscribeAudioBlocklist (int[] uidList) |
| Sets the blocklist of subscriptions for audio streams. More... | |
| abstract int | setSubscribeAudioAllowlist (int[] uidList) |
| Sets the allowlist of subscriptions for audio streams. More... | |
| abstract int | setSubscribeVideoBlocklist (int[] uidList) |
| Sets the blocklist of subscriptions for video streams. More... | |
| abstract int | setSubscribeVideoAllowlist (int[] uidList) |
| Sets the allowlist of subscriptions for video streams. More... | |
| abstract int | enableEncryption (boolean enabled, EncryptionConfig config) |
| Enables or disables the built-in encryption. More... | |
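An encryption sketch, assuming an initialized `engine` and a `keyFromServer` string distributed by your app server; the mode enum name should be verified against your SDK version:

```java
EncryptionConfig config = new EncryptionConfig();
config.encryptionMode = EncryptionConfig.EncryptionMode.AES_128_GCM2;
config.encryptionKey = keyFromServer; // distribute out of band, never hard-code
engine.enableEncryption(true, config);
```

All users in the channel must use the same mode and key, and encryption must be enabled before joining.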
| abstract int | startRtmpStreamWithoutTranscoding (String url) |
| Starts pushing media streams to a CDN without transcoding. More... | |
| abstract int | startRtmpStreamWithTranscoding (String url, LiveTranscoding transcoding) |
| Starts Media Push and sets the transcoding configuration. More... | |
| abstract int | updateRtmpTranscoding (LiveTranscoding transcoding) |
| Updates the transcoding configuration. More... | |
| abstract int | stopRtmpStream (String url) |
| Stops pushing media streams to a CDN. More... | |
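A Media Push sketch, assuming an initialized `engine`; the URL is a placeholder for your CDN ingest address:

```java
LiveTranscoding transcoding = new LiveTranscoding();
transcoding.width = 720;
transcoding.height = 1280;
String url = "rtmp://example.com/live/stream"; // placeholder CDN address
engine.startRtmpStreamWithTranscoding(url, transcoding);
// ...when the broadcast ends:
engine.stopRtmpStream(url);
```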
| abstract int | createDataStream (boolean reliable, boolean ordered) |
| Creates a data stream. More... | |
| abstract int | createDataStream (DataStreamConfig config) |
| Creates a data stream. More... | |
| abstract int | sendStreamMessage (int streamId, byte[] message) |
| Sends data stream messages. More... | |
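A data-stream sketch, assuming an initialized `engine` and that the DataStreamConfig field names match your SDK version:

```java
DataStreamConfig config = new DataStreamConfig();
config.ordered = true;        // deliver messages in send order
config.syncWithAudio = false; // no jitter-buffer synchronization
int streamId = engine.createDataStream(config);
engine.sendStreamMessage(streamId, "hello".getBytes(StandardCharsets.UTF_8));
```

Remote users receive the payload in the onStreamMessage callback of the registered event handler.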
| abstract int | sendRdtMessage (int uid, int type, byte[] message) |
| Sends a reliable data transmission (RDT) message to a specified remote user in the channel. More... | |
| abstract int | sendMediaControlMessage (int uid, byte[] message) |
| Sends a media control message to a specified remote user. More... | |
| abstract int | setVideoQualityParameters (boolean preferFrameRateOverImageQuality) |
| abstract int | setLocalVideoMirrorMode (int mode) |
| Sets the local video mirror mode. More... | |
| abstract int | switchCamera () |
| Switches between front and rear cameras. More... | |
| abstract int | switchCamera (String cameraId) |
| Switches cameras by camera ID. More... | |
| abstract boolean | isCameraZoomSupported () |
| Checks whether the device supports camera zoom. More... | |
| abstract boolean | isCameraTorchSupported () |
| Checks whether the device supports camera flash. More... | |
| abstract boolean | isCameraFocusSupported () |
| Checks whether the device supports manual focus. More... | |
| abstract boolean | isCameraExposurePositionSupported () |
| Checks whether the device supports manual exposure. More... | |
| abstract boolean | isCameraAutoFocusFaceModeSupported () |
| Checks whether the device supports the face auto-focus function. More... | |
| abstract boolean | isCameraFaceDetectSupported () |
| Checks whether the device camera supports face detection. More... | |
| abstract boolean | isCameraExposureSupported () |
| Queries whether the current camera supports adjusting exposure value. More... | |
| abstract int | setCameraZoomFactor (float factor) |
| Sets the camera zoom factor. More... | |
| abstract float | getCameraMaxZoomFactor () |
| Gets the maximum zoom ratio supported by the camera. More... | |
| abstract int | setCameraFocusPositionInPreview (float positionX, float positionY) |
| Sets the camera manual focus position. More... | |
| abstract int | setCameraExposurePosition (float positionXinView, float positionYinView) |
| Sets the camera exposure position. More... | |
| abstract int | enableFaceDetection (boolean enabled) |
| Enables or disables face detection for the local user. More... | |
| abstract int | setCameraTorchOn (boolean isOn) |
| Enables the camera flash. More... | |
| abstract int | setCameraAutoFocusFaceModeEnabled (boolean enabled) |
| Enables the camera auto-face focus function. More... | |
| abstract int | setCameraExposureFactor (int factor) |
| Sets the camera exposure value. More... | |
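A camera-control sketch, assuming an initialized `engine`; capability checks should precede the corresponding setters:

```java
if (engine.isCameraZoomSupported()) {
    // Clamp the requested zoom to what the camera reports as its maximum.
    float maxZoom = engine.getCameraMaxZoomFactor();
    engine.setCameraZoomFactor(Math.min(2.0f, maxZoom));
}
if (engine.isCameraTorchSupported()) {
    engine.setCameraTorchOn(true);
}
```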
| abstract String | getCallId () |
| Retrieves the call ID. More... | |
| abstract int | rate (String callId, int rating, String description) |
| Allows a user to rate a call after the call ends. More... | |
| abstract int | complain (String callId, String description) |
| Allows a user to complain about the call quality after a call ends. More... | |
| abstract int | setLogFile (String filePath) |
| Sets the log file. More... | |
| abstract int | setLogFilter (int filter) |
| Sets the log output level of the SDK. More... | |
| abstract int | setLogLevel (int level) |
| Sets the output log level of the SDK. More... | |
| abstract int | setLogFileSize (long fileSizeInKBytes) |
| Sets the log file size. More... | |
| abstract String | uploadLogFile () |
| abstract int | writeLog (int level, String format, Object... args) |
| abstract long | getNativeHandle () |
| Gets the C++ handle of the Native SDK. More... | |
| void | addHandler (IRtcEngineEventHandler handler) |
| Adds event handlers. More... | |
| void | removeHandler (IRtcEngineEventHandler handler) |
| Removes the specified IRtcEngineEventHandler instance. More... | |
| abstract boolean | enableHighPerfWifiMode (boolean enable) |
| abstract long | getNativeMediaPlayer (int sourceId) |
| abstract int | queryHDRCapability (Constants.VIDEO_MODULE_TYPE moduleType) |
| abstract int | queryScreenCaptureCapability () |
| Queries the highest frame rate supported by the device during screen sharing. More... | |
| abstract void | monitorHeadsetEvent (boolean monitor) |
| abstract void | monitorBluetoothHeadsetEvent (boolean monitor) |
| abstract void | setPreferHeadset (boolean enabled) |
| abstract int | setParameters (String parameters) |
| Provides technical preview functionalities or special customizations by configuring the SDK with JSON options. More... | |
| abstract String | getParameters (String parameters) |
| abstract String | getParameter (String parameter, String args) |
| abstract int | registerMediaMetadataObserver (IMetadataObserver observer, int type) |
| Registers the metadata observer. More... | |
| abstract int | unregisterMediaMetadataObserver (IMetadataObserver observer, int type) |
| Unregisters the specified metadata observer. More... | |
| abstract int | startOrUpdateChannelMediaRelay (ChannelMediaRelayConfiguration channelMediaRelayConfiguration) |
| Starts relaying media streams across channels or updates channels for media relay. More... | |
| abstract int | stopChannelMediaRelay () |
| Stops the media stream relay. Once the relay stops, the host quits all the target channels. More... | |
| abstract int | pauseAllChannelMediaRelay () |
| Pauses the media stream relay to all target channels. More... | |
| abstract int | resumeAllChannelMediaRelay () |
| Resumes the media stream relay to all target channels. More... | |
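A relay sketch, assuming an initialized `engine` and tokens for the source and destination channels; the ChannelMediaRelayConfiguration setter and ChannelMediaInfo constructor shapes should be verified against your SDK version:

```java
ChannelMediaRelayConfiguration config = new ChannelMediaRelayConfiguration();
config.setSrcChannelInfo(new ChannelMediaInfo("sourceChannel", srcToken, 0));
config.setDestChannelInfo("destChannel",
        new ChannelMediaInfo("destChannel", destToken, 0));
engine.startOrUpdateChannelMediaRelay(config);
```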
| abstract int | updateChannelMediaOptions (ChannelMediaOptions options) |
| Updates the channel media options after joining the channel. More... | |
| abstract int | muteRecordingSignal (boolean muted) |
| Sets whether to mute the recording signal. More... | |
| abstract int | setPlaybackAudioFrameBeforeMixingParameters (int sampleRate, int channel) |
| Sets the format of the raw audio playback data before mixing. More... | |
| abstract int | setPlaybackAudioFrameBeforeMixingParameters (int sampleRate, int channel, int samplesPerCall) |
| Sets the format of audio data in the onPlaybackAudioFrameBeforeMixing callback. More... | |
| abstract int | enableAudioSpectrumMonitor (int intervalInMS) |
| Turns on audio spectrum monitoring. More... | |
| abstract int | disableAudioSpectrumMonitor () |
| Disables audio spectrum monitoring. More... | |
| abstract int | registerAudioSpectrumObserver (IAudioSpectrumObserver observer) |
| Registers an audio spectrum observer. More... | |
| abstract int | unRegisterAudioSpectrumObserver (IAudioSpectrumObserver observer) |
| Unregisters the audio spectrum observer. More... | |
| abstract double | getEffectsVolume () |
| Retrieves the volume of the audio effects. More... | |
| abstract int | setEffectsVolume (double volume) |
| Sets the volume of the audio effects. More... | |
| abstract int | preloadEffect (int soundId, String filePath) |
| Preloads a specified audio effect file into the memory. More... | |
| abstract int | preloadEffect (int soundId, String filePath, int startPos) |
| abstract int | playEffect (int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish) |
| Plays the specified local or online audio effect file. More... | |
| abstract int | playEffect (int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos) |
| Plays the specified local or online audio effect file. More... | |
| abstract int | playAllEffects (int loopCount, double pitch, double pan, double gain, boolean publish) |
| abstract int | getVolumeOfEffect (int soundId) |
| Gets the volume of a specified audio effect file. More... | |
| abstract int | setVolumeOfEffect (int soundId, double volume) |
| Sets the volume of a specified audio effect file. More... | |
| abstract int | pauseEffect (int soundId) |
| Pauses a specified audio effect file. More... | |
| abstract int | pauseAllEffects () |
| Pauses all audio effects. More... | |
| abstract int | resumeEffect (int soundId) |
| Resumes playing a specified audio effect. More... | |
| abstract int | resumeAllEffects () |
| Resumes playing all audio effect files. More... | |
| abstract int | stopEffect (int soundId) |
| Stops playing a specified audio effect. More... | |
| abstract int | stopAllEffects () |
| Stops playing all audio effects. More... | |
| abstract int | unloadEffect (int soundId) |
| Releases a specified preloaded audio effect from the memory. More... | |
| abstract int | unloadAllEffects () |
| abstract int | getEffectDuration (String filePath) |
| Retrieves the duration of the audio effect file. More... | |
| abstract int | setEffectPosition (int soundId, int pos) |
| Sets the playback position of an audio effect file. More... | |
| abstract int | getEffectCurrentPosition (int soundId) |
| Retrieves the playback position of the audio effect file. More... | |
| abstract int | registerVideoEncodedFrameObserver (IVideoEncodedFrameObserver receiver) |
| Registers a receiver object for the encoded video image. More... | |
| abstract int | registerFaceInfoObserver (IFaceInfoObserver receiver) |
| Registers or unregisters a facial information observer. More... | |
| abstract int | takeSnapshot (int uid, String filePath) |
| Takes a snapshot of a video stream. More... | |
| abstract int | takeSnapshot (int uid, SnapshotConfig config) |
| Takes a screenshot of the video at the specified observation point. More... | |
| abstract int | enableContentInspect (boolean enabled, ContentInspectConfig config) |
| Enables or disables video screenshot and upload. More... | |
| abstract int | loadExtensionProvider (String path) |
| abstract int | registerExtension (String provider, String extension, Constants.MediaSourceType sourceType) |
| Registers an extension. More... | |
| abstract int | enableExtension (String provider, String extension, boolean enable) |
| abstract int | enableExtension (String provider, String extension, boolean enable, Constants.MediaSourceType sourceType) |
| Enables or disables extensions. More... | |
| abstract int | setExtensionProperty (String provider, String extension, String key, String value) |
| Sets the properties of the extension. More... | |
| abstract int | setExtensionProperty (String provider, String extension, String key, String value, Constants.MediaSourceType sourceType) |
| abstract String | getExtensionProperty (String provider, String extension, String key) |
| Gets detailed information on the extensions. More... | |
| abstract String | getExtensionProperty (String provider, String extension, String key, Constants.MediaSourceType sourceType) |
| Gets detailed information on the extensions. More... | |
| abstract int | setExtensionProviderProperty (String provider, String key, String value) |
| Sets the properties of the extension provider. More... | |
| abstract int | enableExtension (String provider, String extension, ExtensionInfo extensionInfo, boolean enable) |
| abstract int | setExtensionProperty (String provider, String extension, ExtensionInfo extensionInfo, String key, String value) |
| abstract String | getExtensionProperty (String provider, String extension, ExtensionInfo extensionInfo, String key) |
| abstract int | startScreenCapture (ScreenCaptureParameters screenCaptureParameters) |
| Starts screen capture. More... | |
| abstract int | setExternalMediaProjection (MediaProjection mediaProjection) |
| Configures MediaProjection outside of the SDK to capture screen video streams. More... | |
| abstract int | setScreenCaptureScenario (Constants.ScreenScenarioType screenScenario) |
| Sets the screen sharing scenario. More... | |
| abstract int | stopScreenCapture () |
| Stops screen capture. More... | |
| abstract int | setVideoScenario (Constants.VideoScenario scenarioType) |
| Sets video application scenarios. More... | |
| abstract int | setVideoQoEPreference (Constants.QoEPreference qoePreference) |
| abstract int | updateScreenCaptureParameters (ScreenCaptureParameters screenCaptureParameters) |
| Updates the screen capturing parameters. More... | |
| abstract int | registerVideoFrameObserver (IVideoFrameObserver observer) |
| Registers a raw video frame observer object. More... | |
| abstract IMediaPlayer | createMediaPlayer () |
| Creates a media player object. More... | |
| abstract AgoraMediaRecorder | createMediaRecorder (RecorderStreamInfo info) |
| Creates a recorder for audio and video. More... | |
| abstract void | destroyMediaRecorder (AgoraMediaRecorder mediaRecorder) |
| Destroys the audio and video recording object. More... | |
| abstract IMediaPlayerCacheManager | getMediaPlayerCacheManager () |
| Gets one IMediaPlayerCacheManager instance. More... | |
| abstract IH265Transcoder | getH265Transcoder () |
| abstract int | enableExternalAudioSourceLocalPlayback (boolean enabled) |
| abstract int | adjustCustomAudioPublishVolume (int trackId, int volume) |
| Adjusts the volume of the custom audio track played remotely. More... | |
| abstract int | adjustCustomAudioPlayoutVolume (int trackId, int volume) |
| Adjusts the volume of the custom audio track played locally. More... | |
| abstract int | startRhythmPlayer (String sound1, String sound2, AgoraRhythmPlayerConfig config) |
| Enables the virtual metronome. More... | |
| abstract int | stopRhythmPlayer () |
| Disables the virtual metronome. More... | |
| abstract int | configRhythmPlayer (AgoraRhythmPlayerConfig config) |
| Configures the virtual metronome. More... | |
| abstract int | setDirectCdnStreamingAudioConfiguration (int profile) |
| Sets the audio profile of the audio streams directly pushed to the CDN by the host. More... | |
| abstract int | setDirectCdnStreamingVideoConfiguration (VideoEncoderConfiguration config) |
| Sets the video profile of the media streams directly pushed to the CDN by the host. More... | |
| abstract long | getCurrentMonotonicTimeInMs () |
| Gets the current Monotonic Time of the SDK. More... | |
| abstract int | startDirectCdnStreaming (IDirectCdnStreamingEventHandler eventHandler, String publishUrl, DirectCdnStreamingMediaOptions options) |
| Starts pushing media streams to the CDN directly. More... | |
| abstract int | stopDirectCdnStreaming () |
| Stops pushing media streams to the CDN directly. More... | |
| abstract int | updateDirectCdnStreamingMediaOptions (DirectCdnStreamingMediaOptions options) |
| abstract int | createCustomVideoTrack () |
| Creates a custom video track. More... | |
| abstract int | createCustomEncodedVideoTrack (EncodedVideoTrackOptions encodedOpt) |
| abstract int | destroyCustomVideoTrack (int video_track_id) |
| Destroys the specified video track. More... | |
| abstract int | destroyCustomEncodedVideoTrack (int video_track_id) |
| abstract int | setCloudProxy (int proxyType) |
| Sets up cloud proxy service. More... | |
| abstract int | setLocalAccessPoint (LocalAccessPointConfiguration config) |
| Configures the connection to the Agora private media server access module. More... | |
| abstract int | enableCustomAudioLocalPlayback (int trackId, boolean enabled) |
| Sets whether to enable the local playback of external audio source. More... | |
| abstract int | setAdvancedAudioOptions (AdvancedAudioOptions options) |
| Sets audio advanced options. More... | |
| abstract int | setAVSyncSource (String channelId, int uid) |
| Sets audio-video synchronization on the publishing side. More... | |
| abstract int | enableVideoImageSource (boolean enabled, ImageTrackOptions options) |
| Sets whether to replace the current video feeds with images when publishing video streams. More... | |
| abstract int | getNetworkType () |
| Gets the type of the local network connection. More... | |
| abstract long | getNtpWallTimeInMs () |
| Gets the current NTP (Network Time Protocol) time. More... | |
| abstract int | startMediaRenderingTracing () |
| Enables tracing the video frame rendering process. More... | |
| abstract int | enableInstantMediaRendering () |
| Enables audio and video frame instant rendering. More... | |
| abstract int | setupAudioAttributes (AudioAttributes attr) |
| abstract boolean | isFeatureAvailableOnDevice (int type) |
| Checks whether the device supports the specified advanced feature. More... | |
| abstract int | sendAudioMetadata (byte[] metadata) |
| Sends audio metadata. More... | |
Static Public Member Functions | |
| static synchronized RtcEngine | create (Context context, String appId, IRtcEngineEventHandler handler) throws Exception |
| Creates and initializes RtcEngine. More... | |
| static synchronized RtcEngine | create (RtcEngineConfig config) throws Exception |
| Creates and initializes RtcEngine. More... | |
| static synchronized void | destroy () |
| Releases the RtcEngine instance. More... | |
| static synchronized void | destroy (@Nullable IRtcEngineReleaseCallback callback) |
| Destroys the RtcEngine instance and releases related resources. More... | |
| static int | getRecommendedEncoderType () |
| static String | getSdkVersion () |
| Gets the SDK version. More... | |
| static String | getMediaEngineVersion () |
| static String | getErrorDescription (int error) |
| Gets the warning or error description. More... | |
Static Protected Attributes | |
| static RtcEngineImpl | mInstance = null |
Main interface class of the Agora Native SDK.
Call the methods of this class to use all the functionalities of the SDK. Agora recommends calling the RtcEngine API methods in the same thread instead of in multiple threads. In previous versions, this class was named AgoraAudio; it was renamed RtcEngine in version 1.0.
|
static |
Creates and initializes RtcEngine.
All called methods provided by the RtcEngine class are executed asynchronously. Agora recommends calling these methods in the same thread.
Call timing: Before calling other APIs, you must call this method to create the RtcEngine object. You can create the RtcEngine instance either by calling this method or by calling create(RtcEngineConfig config). The difference between create(RtcEngineConfig config) and this method is that create(RtcEngineConfig config) supports more configurations when creating the RtcEngine instance, for example, specifying the region for connection and setting the log files. Restrictions: Agora recommends creating only one RtcEngine instance for an app.
| context | The context of Android Activity. |
| appId | The App ID issued by Agora for your project. Only users in apps with the same App ID can join the same channel and communicate with each other. An App ID can only be used to create one RtcEngine instance. To change your App ID, call destroy() to destroy the current RtcEngine instance, and then create a new one. |
| handler | The event handler for RtcEngine. See IRtcEngineEventHandler. |
Returns: The RtcEngine object if the method call succeeds.
|
static |
Creates and initializes RtcEngine.
You can create the RtcEngine instance either by calling this method or by calling create(Context context, String appId, IRtcEngineEventHandler handler) . The difference between create(Context context, String appId, IRtcEngineEventHandler handler) and this method is that this method supports more configurations when creating the RtcEngine instance, for example, specifying the region for connection and setting the log files. Call timing: Before calling other APIs, you must call this method to create the RtcEngine object.
Restrictions: Agora recommends creating only one RtcEngine instance for an app. All called methods provided by the RtcEngine class are executed asynchronously. Agora recommends calling these methods in the same thread.
| config | Configurations for the RtcEngine instance. See RtcEngineConfig. |
Returns: The RtcEngine object if the method call succeeds.
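A minimal sketch of creating the engine via create(RtcEngineConfig config), assuming the io.agora.rtc2 package layout of the 4.x Android SDK; the App ID string is a placeholder.

```java
import android.content.Context;
import io.agora.rtc2.IRtcEngineEventHandler;
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.RtcEngineConfig;

public class EngineFactory {
    // Placeholder: use the App ID issued by Agora for your project.
    private static final String APP_ID = "<your-app-id>";

    public static RtcEngine createEngine(Context context) throws Exception {
        RtcEngineConfig config = new RtcEngineConfig();
        config.mContext = context.getApplicationContext();
        config.mAppId = APP_ID;
        config.mEventHandler = new IRtcEngineEventHandler() {
            @Override
            public void onJoinChannelSuccess(String channel, int uid, int elapsed) {
                // Record the uid the SDK assigned or confirmed;
                // the SDK does not maintain it for you.
            }
        };
        // create(RtcEngineConfig) supports more configurations than
        // create(Context, String, IRtcEngineEventHandler), e.g. the
        // region for connection and log file settings.
        return RtcEngine.create(config);
    }
}
```

Create the engine once per app; to change the App ID, call destroy() first and then create a new instance.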
|
static |
Releases the RtcEngine instance.
This method releases all resources used by the Agora SDK. Use this method for apps in which users occasionally make voice or video calls. When users do not make calls, you can free up resources for other operations. After a successful method call, you can no longer use any method or callback in the SDK. If you want to use the real-time communication functions again, you must call create(RtcEngineConfig config) to create a new RtcEngine instance.
Note: Do not call destroy() in any callback of the SDK. Otherwise, the SDK cannot release the resources until the callbacks return results, which may result in a deadlock.
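Since destroy() must never run inside an SDK callback, a hedged sketch is to release the engine from the app's own thread (everything except RtcEngine.destroy() below is illustrative):

```java
import io.agora.rtc2.RtcEngine;

public class EngineLifecycle {
    // Illustrative helper: release the engine when real-time features
    // are no longer needed, never from inside an SDK callback.
    public static void shutdown() {
        new Thread(() -> {
            // Blocks until the SDK has released its resources; after this,
            // no RtcEngine method or callback may be used until a new
            // instance is created with RtcEngine.create(...).
            RtcEngine.destroy();
        }, "agora-destroy").start();
    }
}
```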
|
static |
Destroys the RtcEngine instance and releases related resources.
When you no longer need real-time communication, call this method to release the RtcEngine instance and its related resources, so that the released resources can be used for other operations. It is recommended for scenarios where users make voice or video interactions.
| callback | To configure the synchronous or asynchronous destruction of the engine. |
|
|
abstract |
Sets the channel profile.
You can call this method to set the channel profile. The SDK adopts different optimization strategies for different channel profiles. For example, in a live streaming scenario, the SDK prioritizes video quality. After initializing the SDK, the default channel profile is the live streaming profile. Call timing: Call this method before joining a channel.
See also: setDefaultAudioRouteToSpeakerphone.
| profile | The channel profile.
|
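A minimal sketch of selecting a profile before joining a channel, assuming the Constants class of the 4.x SDK:

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class ProfileSetup {
    // Illustrative: choose the live streaming profile before joining.
    public static boolean useLiveStreaming(RtcEngine engine) {
        int ret = engine.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING);
        return ret == 0; // 0 means success; a negative value means failure
    }
}
```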
|
abstract |
Sets the client role.
By default, the SDK sets the user role as audience. You can call this method to set the user role as host. The user role determines the users' permissions at the SDK level, including whether they can publish audio and video streams in a channel. Call timing: You can call this method either before or after joining a channel. If you set the user role as host before joining the channel and set the local video property through the setupLocalVideo method, the local video preview is automatically enabled when the user joins the channel. If you call this method to set the user role after joining a channel, the SDK automatically calls the muteLocalAudioStream and muteLocalVideoStream methods to change the state for publishing audio and video streams. Related callbacks: If you call this method to switch the user role after joining the channel, the SDK triggers the following callbacks:
- onClientRoleChanged on the local client.
- onUserJoined or onUserOffline on the remote client.
If you call this method to set the user role after joining a channel but the call fails, the SDK triggers the onClientRoleChangeFailed callback to report the reason for the failure and the current user role.
Note: If you call this method before joining a channel and set the role to BROADCASTER, the onClientRoleChanged callback will not be triggered on the local client; calling this method before joining a channel and setting the role to AUDIENCE will trigger this callback.
| role | The user role:
|
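A hedged sketch of promoting the local user from the default audience role to host (the engine argument is assumed to be an already created RtcEngine):

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class RoleSwitch {
    // Illustrative: switch the local user to host. If called after
    // joining, onClientRoleChanged fires on the local client.
    public static void becomeHost(RtcEngine engine) {
        engine.setClientRole(Constants.CLIENT_ROLE_BROADCASTER);
    }
}
```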
|
abstract |
Sets the user role and the audience latency level in a live streaming scenario.
By default, the SDK sets the user role as audience. You can call this method to set the user role as host. The user role determines the users' permissions at the SDK level, including whether they can publish audio and video streams in a channel. The difference between this method and setClientRole(int role) is that this method supports setting the audienceLatencyLevel. audienceLatencyLevel needs to be used together with role to determine the level of service that users can enjoy within their permissions. For example, an audience member can choose to receive remote streams with low latency or ultra-low latency. Call timing: You can call this method either before or after joining a channel. If you set the user role as host before joining the channel and set the local video property through the setupLocalVideo method, the local video preview is automatically enabled when the user joins the channel. If you call this method to set the user role after joining a channel, the SDK automatically calls the muteLocalAudioStream and muteLocalVideoStream methods to change the state for publishing audio and video streams. Related callbacks: If you call this method to switch the user role after joining the channel, the SDK triggers the following callbacks:
- onClientRoleChanged on the local client.
- onUserJoined or onUserOffline on the remote client.
If you call this method to set the user role after joining a channel but the call fails, the SDK triggers the onClientRoleChangeFailed callback to report the reason for the failure and the current user role.
Note: If you call this method before joining a channel and set the role to BROADCASTER, the onClientRoleChanged callback will not be triggered on the local client; calling this method before joining a channel and setting the role to AUDIENCE will trigger this callback.
| role | The user role.
|
| options | The detailed options of a user, including the user level. See ClientRoleOptions. |
|
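A hedged sketch of joining as audience with a chosen latency level, assuming the ClientRoleOptions class and latency-level constants of the 4.x SDK:

```java
import io.agora.rtc2.ClientRoleOptions;
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class AudienceLatency {
    // Illustrative: become an audience member with low (rather than
    // ultra-low) latency, trading some delay for smoother playback.
    public static void watchWithLowLatency(RtcEngine engine) {
        ClientRoleOptions options = new ClientRoleOptions();
        options.audienceLatencyLevel = Constants.AUDIENCE_LATENCY_LEVEL_LOW_LATENCY;
        engine.setClientRole(Constants.CLIENT_ROLE_AUDIENCE, options);
    }
}
```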
abstract |
Reports customized messages.
Agora supports reporting and analyzing customized messages. This function is in the beta stage with a free trial. Its beta version supports reporting a maximum of 10 message pieces within 6 seconds, with each message piece not exceeding 256 bytes and each string not exceeding 100 bytes. To try out this function, contact support@agora.io and discuss the format of customized messages with us.
|
abstract |
Preloads a channel with token, channelName, and optionalUid.
When audience members need to switch between different channels frequently, calling this method can help shorten the time it takes to join a channel, thus reducing the time it takes for audience members to hear and see the host. If you join a preloaded channel, leave it, and want to rejoin the same channel, you do not need to call this method unless the token for preloading the channel expires. Call timing: To improve the user experience of preloading channels, Agora recommends calling this method as early as possible once the channel name and user information are confirmed, before joining the channel.
Note: Ensure that the audio scenario is not set to AUDIO_SCENARIO_CHORUS; otherwise, this method does not take effect. One RtcEngine instance supports preloading at most 20 channels. When this limit is exceeded, the latest 20 preloaded channels take effect. Failing to preload a channel does not mean that you cannot join the channel, nor does it increase the time needed to join it.
| token | The token generated on your server for authentication. When the token for preloading channels expires, you can update the token based on the number of channels you preload.
|
| channelName | The channel name that you want to preload. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total):
|
| optionalUid | The user ID. This parameter is used to identify the user in the channel for real-time audio and video interaction. You need to set and manage user IDs yourself, and ensure that each user ID in the same channel is unique. This parameter is a 32-bit signed integer. The value range is from -2^31 to 2^31-1. If the user ID is not assigned (or set to 0), the SDK assigns a random user ID and onJoinChannelSuccess returns it in the callback. Your application must record and maintain the returned user ID, because the SDK does not do so. |
|
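A hedged sketch of the preload-then-join flow; fetchToken() is a hypothetical app-server helper and the channel name is a placeholder:

```java
import io.agora.rtc2.RtcEngine;

public class PreloadFlow {
    // Hypothetical helper: in a real app, your token server generates this.
    static String fetchToken(String channel) { return "<token>"; }

    public static void preloadThenJoin(RtcEngine engine) {
        String channel = "demo_channel";   // placeholder channel name
        String token = fetchToken(channel);
        // Preload as early as possible, once channel name and uid are known.
        // uid 0 lets the SDK assign a random user ID.
        engine.preloadChannel(token, channel, 0);
        // Later, when the audience member actually enters the room, the
        // join completes faster because the channel was preloaded.
        engine.joinChannel(token, channel, "", 0);
    }
}
```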
abstract |
Preloads a channel with token, channelName, and userAccount.
When audience members need to switch between different channels frequently, calling this method can help shorten the time it takes to join a channel, thus reducing the time it takes for audience members to hear and see the host. If you join a preloaded channel, leave it, and want to rejoin the same channel, you do not need to call this method unless the token for preloading the channel expires. Call timing: To improve the user experience of preloading channels, Agora recommends calling this method as early as possible once the channel name and user information are confirmed, before joining the channel.
Note: Ensure that the audio scenario is not set to AUDIO_SCENARIO_CHORUS; otherwise, this method does not take effect. One RtcEngine instance supports preloading at most 20 channels. When this limit is exceeded, the latest 20 preloaded channels take effect. Failing to preload a channel does not mean that you cannot join the channel, nor does it increase the time needed to join it.
| token | The token generated on your server for authentication. When the token for preloading channels expires, you can update the token based on the number of channels you preload.
|
| channelName | The channel name that you want to preload. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total):
|
| userAccount | The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as NULL. Supported characters are as follows (89 in total):
|
Returns: 0: Success; < 0: Failure, for example, because the RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method.
|
abstract |
Updates the wildcard token for preloading channels.
You need to maintain the life cycle of the wildcard token by yourself. When the token expires, you need to generate a new wildcard token and then call this method to pass in the new token. Applicable scenarios: In scenarios involving multiple channels, such as switching between different channels, a wildcard token means users do not need to apply for a new token every time they join a new channel, which saves time when switching channels and reduces the pressure on your token server.
| token | The new token. |
|
abstract |
Joins a channel.
By default, the user subscribes to the audio and video streams of all the other users in the channel, which incurs usage and billing. To stop subscribing to a specified stream or all remote streams, call the corresponding mute methods. Call timing: Call this method after create(RtcEngineConfig config). Related callbacks: A successful call of this method triggers the following callbacks:
- The local client: the onJoinChannelSuccess and onConnectionStateChanged callbacks.
- The remote client: the onUserJoined callback, if a user joins the channel in the COMMUNICATION profile, or a host joins a channel in the LIVE_BROADCASTING profile.
When the connection between the local client and Agora's server is interrupted due to poor network conditions, the SDK tries reconnecting to the server. When the local client successfully rejoins the channel, the SDK triggers the onRejoinChannelSuccess callback on the local client.
Note: Ensure that the App ID used for generating the token is the same as the App ID used in the create(RtcEngineConfig config) method; otherwise, you may fail to join the channel with the token.
| token | The token generated on your server for authentication. Note:
|
| channelId | The channel name. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total):
|
| optionalInfo | (Optional) Reserved for future use. |
| uid | The user ID. This parameter is used to identify the user in the channel for real-time audio and video interaction. You need to set and manage user IDs yourself, and ensure that each user ID in the same channel is unique. This parameter is a 32-bit signed integer. The value range is from -2^31 to 2^31-1. If the user ID is not assigned (or set to 0), the SDK assigns a random user ID and onJoinChannelSuccess returns it in the callback. Your application must record and maintain the returned user ID, because the SDK does not do so. |
Returns: 0: Success; < 0: Failure, for example:
- The parameter is invalid. For example, the token is invalid, the uid parameter is not set to an integer, or the value of a member in ChannelMediaOptions is invalid. You need to pass in a valid parameter and join the channel again.
- The SDK fails to initialize the RtcEngine object. You need to reinitialize the RtcEngine object.
- The RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method.
- The internal state of the RtcEngine object is wrong. The typical cause is that after calling startEchoTest to start a call loop test, you call this method to join the channel without calling stopEchoTest to stop the test. You need to call stopEchoTest before calling this method.
- The request to join the channel is rejected, typically because the user is already in the channel. Agora recommends using the onConnectionStateChanged callback to see whether the user is in the channel. Do not call this method to join the channel unless you receive the CONNECTION_STATE_DISCONNECTED (1) state.
- The channel name is invalid. You need to pass in a valid channelId to rejoin the channel.
- The user ID is invalid. You need to pass in a valid uid to rejoin the channel.
|
abstract |
Joins a channel with media options.
Compared to joinChannel(String token, String channelId, String optionalInfo, int uid), this method has the options parameter, which is used to set media options, such as whether to publish audio and video streams within a channel, or whether to automatically subscribe to the audio and video streams of all remote users when joining a channel. By default, the user subscribes to the audio and video streams of all the other users in the channel, which incurs usage and billing. To stop subscribing to other streams, set the options parameter or call the corresponding mute methods. Call timing: Call this method after create(RtcEngineConfig config). Related callbacks: A successful call of this method triggers the following callbacks:
- The local client: the onJoinChannelSuccess and onConnectionStateChanged callbacks.
- The remote client: the onUserJoined callback, if a user joins the channel in the COMMUNICATION profile, or a host joins a channel in the LIVE_BROADCASTING profile.
When the connection between the local client and Agora's server is interrupted due to poor network conditions, the SDK tries reconnecting to the server. When the local client successfully rejoins the channel, the SDK triggers the onRejoinChannelSuccess callback on the local client.
Note: Ensure that the App ID used for generating the token is the same as the App ID used in the create(RtcEngineConfig config) method; otherwise, you may fail to join the channel with the token.
| token | The token generated on your server for authentication. Note:
|
| channelId | The channel name. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total):
|
| uid | The user ID. This parameter is used to identify the user in the channel for real-time audio and video interaction. You need to set and manage user IDs yourself, and ensure that each user ID in the same channel is unique. This parameter is a 32-bit signed integer. The value range is from -2^31 to 2^31-1. If the user ID is not assigned (or set to 0), the SDK assigns a random user ID and onJoinChannelSuccess returns it in the callback. Your application must record and maintain the returned user ID, because the SDK does not do so. |
| options | The channel media options. See ChannelMediaOptions. |
Returns: 0: Success; < 0: Failure, for example:
- The parameter is invalid. For example, the token is invalid, the uid parameter is not set to an integer, or the value of a member in ChannelMediaOptions is invalid. You need to pass in a valid parameter and join the channel again.
- The SDK fails to initialize the RtcEngine object. You need to reinitialize the RtcEngine object.
- The RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method.
- The internal state of the RtcEngine object is wrong. The typical cause is that after calling startEchoTest to start a call loop test, you call this method to join the channel without calling stopEchoTest to stop the test. You need to call stopEchoTest before calling this method.
- The request to join the channel is rejected, typically because the user is already in the channel. Agora recommends using the onConnectionStateChanged callback to see whether the user is in the channel. Do not call this method to join the channel unless you receive the CONNECTION_STATE_DISCONNECTED (1) state.
- The channel name is invalid. You need to pass in a valid channelId to rejoin the channel.
- The user ID is invalid. You need to pass in a valid uid to rejoin the channel.
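A hedged sketch of joining with media options, assuming the ChannelMediaOptions fields of the 4.x SDK; the token and channel name are placeholders:

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class JoinWithOptions {
    // Illustrative: publish microphone and camera as a host, subscribe to
    // remote audio, but skip remote video to avoid its usage costs.
    public static int join(RtcEngine engine, String token) {
        ChannelMediaOptions options = new ChannelMediaOptions();
        options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
        options.publishMicrophoneTrack = true;
        options.publishCameraTrack = true;
        options.autoSubscribeAudio = true;
        options.autoSubscribeVideo = false;
        // uid 0 lets the SDK assign a random user ID.
        return engine.joinChannel(token, "demo_channel", 0, options);
    }
}
```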
|
abstract |
Registers a user account.
Once the registration is successful, the user account can be used to identify the local user, and the user can use it to join a channel. This method is optional. If you want to join a channel using a user account, you can choose one of the following methods:
- Call the registerLocalUserAccount method to register a user account, and then call the joinChannelWithUserAccount(String token, String channelName, String userAccount) or joinChannelWithUserAccount(String token, String channelName, String userAccount, ChannelMediaOptions options) method to join a channel, which can shorten the time it takes to enter the channel.
- Directly call the joinChannelWithUserAccount(String token, String channelName, String userAccount) or joinChannelWithUserAccount(String token, String channelName, String userAccount, ChannelMediaOptions options) method to join a channel.
Related callbacks: After successfully calling this method, the onLocalUserRegistered callback is triggered on the local client to report the local user's UID and user account.
Note: If you want to join a channel with the original String userAccount used during registration, call the joinChannelWithUserAccount(String token, String channelName, String userAccount, ChannelMediaOptions options) method to join the channel, instead of calling joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) with the int UID obtained through this method. Ensure that the userAccount is unique in the channel.
| appId | The App ID of your project on Agora Console. |
| userAccount | The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as NULL. Supported characters are as follows (89 in total):
|
|
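The optional registration flow described above can be sketched as follows. This is a minimal illustration, not a complete implementation: it assumes an already created RtcEngine instance, a token generated on your own server, and the io.agora.rtc2 class names; the channel and account strings are hypothetical placeholders.

```java
import io.agora.rtc2.RtcEngine;

public class UserAccountExample {
    // Sketch only: engine must be created with RtcEngine.create(config) elsewhere,
    // and the token generated on your own server.
    static void joinWithAccount(RtcEngine engine, String appId, String token) {
        // Optional step: registering first shortens the time to enter the channel.
        int ret = engine.registerLocalUserAccount(appId, "alice-01"); // hypothetical account
        if (ret != 0) {
            // A negative value is an error code, e.g. -2 for an invalid argument.
        }
        // Join with the same String user account used at registration.
        engine.joinChannelWithUserAccount(token, "demo_channel", "alice-01");
    }
}
```

The onLocalUserRegistered callback (not shown) reports the UID that the SDK binds to this account.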
abstract |
Joins a channel with a User Account and Token.
If you have not called registerLocalUserAccount to register a user account before calling this method, the SDK automatically creates a user account for you when you join the channel. Calling registerLocalUserAccount to register a user account and then calling this method to join a channel shortens the time it takes to enter the channel. Once a user joins the channel, the user subscribes to the audio and video streams of all the other users in the channel by default, which incurs usage and billing. To stop subscribing to a specified stream or all remote streams, call the corresponding mute methods. Call timing: Call this method after create(RtcEngineConfig config). Related callbacks: After the user successfully joins the channel, the SDK triggers the following callbacks:
- On the local client: the onLocalUserRegistered, onJoinChannelSuccess, and onConnectionStateChanged callbacks.
- On the remote client: the onUserJoined and onUserInfoUpdated callbacks, if a user joins the channel in the COMMUNICATION profile, or if a host joins the channel in the LIVE_BROADCASTING profile.

Note:
- Ensure that the App ID used for generating the token is the same as the App ID used in the create(RtcEngineConfig config) method; otherwise, you may fail to join the channel with the token.
- To ensure smooth communication, use the same parameter type to identify the user. For example, if a user joins the channel with a UID, ensure that all the other users use a UID too. The same applies to the user account. If a user joins the channel with the Agora Web SDK, ensure that the ID of the user is set to the same parameter type.

| token | The token generated on your server for authentication. Note:
|
| channelName | The channel name. This parameter signifies the channel in which users engage in real-time audio and video interaction. Users who use the same App ID and the same channel name enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total):
|
| userAccount | The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as NULL. Supported characters are as follows (89 in total):
|
Returns
0: Success.
< 0: Failure.
- -2: The parameter is invalid. For example, the token is invalid, the uid parameter is not set to an integer, or the value of a member in ChannelMediaOptions is invalid. You need to pass in a valid parameter and join the channel again.
- -3: Fails to initialize the RtcEngine object. You need to reinitialize the RtcEngine object.
- -7: The RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method.
- -8: The internal state of the RtcEngine object is wrong. The typical cause is that after calling startEchoTest to start a call loop test, you call this method to join the channel without calling stopEchoTest to stop the test. You need to call stopEchoTest before calling this method.
- -17: The request to join the channel is rejected. The typical cause is that the user is already in the channel. Agora recommends that you use the onConnectionStateChanged callback to see whether the user is in the channel. Do not call this method to join the channel unless you receive the CONNECTION_STATE_DISCONNECTED (1) state.
- -102: The channel name is invalid. You need to pass in a valid channelId to rejoin the channel.
- -121: The user ID is invalid. You need to pass in a valid uid to rejoin the channel.
|
abstract |
Joins a channel using a user account and token, and sets the media options.
If you have not called registerLocalUserAccount to register a user account before calling this method, the SDK automatically creates a user account for you when you join the channel. Calling registerLocalUserAccount to register a user account and then calling this method to join a channel shortens the time it takes to enter the channel. Compared to joinChannelWithUserAccount(String token, String channelName, String userAccount), this method has the options parameter, which is used to set media options, such as whether to publish audio and video streams within a channel. By default, the user subscribes to the audio and video streams of all the other users in the channel, which incurs usage and billing. To stop subscribing to other streams, set the options parameter or call the corresponding mute methods. Call timing: Call this method after create(RtcEngineConfig config). Related callbacks: After the user successfully joins the channel, the SDK triggers the following callbacks:
- On the local client: the onLocalUserRegistered, onJoinChannelSuccess, and onConnectionStateChanged callbacks.
- On the remote client: the onUserJoined and onUserInfoUpdated callbacks, if a user joins the channel in the COMMUNICATION profile, or if a host joins the channel in the LIVE_BROADCASTING profile.

Note:
- Ensure that the App ID used for generating the token is the same as the App ID used in the create(RtcEngineConfig config) method; otherwise, you may fail to join the channel with the token.
- To ensure smooth communication, use the same parameter type to identify the user. For example, if a user joins the channel with a UID, ensure that all the other users use a UID too. The same applies to the user account. If a user joins the channel with the Agora Web SDK, ensure that the ID of the user is set to the same parameter type.

| token | The token generated on your server for authentication. Note:
|
| channelName | The channel name. This parameter signifies the channel in which users engage in real-time audio and video interaction. Users who use the same App ID and the same channel name enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total):
|
| userAccount | The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as NULL. Supported characters are as follows (89 in total):
|
| options | The channel media options. See ChannelMediaOptions. |
Returns
0: Success.
< 0: Failure.
- -2: The parameter is invalid. For example, the token is invalid, the uid parameter is not set to an integer, or the value of a member in ChannelMediaOptions is invalid. You need to pass in a valid parameter and join the channel again.
- -3: Fails to initialize the RtcEngine object. You need to reinitialize the RtcEngine object.
- -7: The RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method.
- -8: The internal state of the RtcEngine object is wrong. The typical cause is that after calling startEchoTest to start a call loop test, you call this method to join the channel without calling stopEchoTest to stop the test. You need to call stopEchoTest before calling this method.
- -17: The request to join the channel is rejected. The typical cause is that the user is already in the channel. Agora recommends that you use the onConnectionStateChanged callback to see whether the user is in the channel. Do not call this method to join the channel unless you receive the CONNECTION_STATE_DISCONNECTED (1) state.
- -102: The channel name is invalid. You need to pass in a valid channelId to rejoin the channel.
- -121: The user ID is invalid. You need to pass in a valid uid to rejoin the channel.
|
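The media options described above can be set as in the following sketch. It assumes the io.agora.rtc2 package names and field names of ChannelMediaOptions (clientRoleType, publishMicrophoneTrack, publishCameraTrack, autoSubscribeAudio); the token, channel, and account values are placeholders.

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class JoinWithOptionsExample {
    // Sketch only: engine and token must come from your own setup code.
    static void join(RtcEngine engine, String token) {
        ChannelMediaOptions options = new ChannelMediaOptions();
        options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER; // publish as a host
        options.publishMicrophoneTrack = true; // publish locally captured audio
        options.publishCameraTrack = true;     // publish locally captured video
        options.autoSubscribeAudio = false;    // opt out of the default subscription
        engine.joinChannelWithUserAccount(token, "demo_channel", "alice-01", options);
    }
}
```

Disabling autoSubscribeAudio here illustrates the billing note above: by default the SDK subscribes to all remote streams on join.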
abstract |
Gets the user information by passing in the user account.
After a remote user joins the channel, the SDK gets the UID and user account of the remote user, caches them in a mapping table object, and triggers the onUserInfoUpdated callback on the local client. After receiving the callback, you can call this method and pass in the user account to get the user information of the specified user from the UserInfo object.
|
abstract |
Gets the user information by passing in the user ID.
After a remote user joins the channel, the SDK gets the UID and user account of the remote user, caches them in a mapping table object, and triggers the onUserInfoUpdated callback on the local client. After receiving the callback, you can call this method and pass in the UID to get the user account of the specified user from the UserInfo object. Call timing: Call this method after receiving the onUserInfoUpdated callback. Related callbacks: onUserInfoUpdated
| uid | The user ID. |
| userInfo | Input and output parameter. The UserInfo object that identifies the user information.
|
|
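The lookup flow above can be sketched as follows, assuming the io.agora.rtc2.UserInfo class; as the reference notes, call it only after onUserInfoUpdated has fired, because the SDK fills the output parameter from its internal UID/account mapping table.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.UserInfo;

public class UserInfoLookupExample {
    // Resolve a remote uid to its user account via the SDK's cached mapping.
    static String accountForUid(RtcEngine engine, int uid) {
        UserInfo info = new UserInfo();            // output parameter
        int ret = engine.getUserInfoByUid(uid, info);
        return ret == 0 ? info.userAccount : null; // null if no cached mapping yet
    }
}
```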
abstract |
Leaves a channel.
After calling this method, the SDK terminates the audio and video interaction, leaves the current channel, and releases all resources related to the session. After joining the channel, you must call this method to end the call; otherwise, the next call cannot be started. Call timing: Call this method after joining a channel. Related callbacks: A successful call of this method triggers the following callbacks:
- On the local client: the onLeaveChannel callback.
- On the remote client: the onUserOffline callback, after the remote host leaves the channel.

Note:
- If you call destroy() immediately after calling this method, the SDK does not trigger the onLeaveChannel callback.
- If you have called joinChannelEx to join multiple channels, calling this method will leave all the channels you joined.
|
abstract |
Sets channel options and leaves the channel.
After calling this method, the SDK terminates the audio and video interaction, leaves the current channel, and releases all resources related to the session. After joining a channel, you must call this method or leaveChannel() to end the call; otherwise, the next call cannot be started. If you have called joinChannelEx to join multiple channels, calling this method will leave all the channels you joined. Call timing: Call this method after joining a channel. Related callbacks: A successful call of this method triggers the following callbacks:
- On the local client: the onLeaveChannel callback.
- On the remote client: the onUserOffline callback, after the remote host leaves the channel.

Note:
- If you call destroy() immediately after calling this method, the SDK does not trigger the onLeaveChannel callback.
- This method call is asynchronous. When this method returns, it does not necessarily mean that the user has left the channel.

| options | The options for leaving the channel. See LeaveChannelOptions. |
|
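A minimal sketch of leaving with options. The LeaveChannelOptions field names (stopAudioMixing, stopMicrophoneRecording) are assumed from the rtc2 package; verify them against your SDK version.

```java
import io.agora.rtc2.LeaveChannelOptions;
import io.agora.rtc2.RtcEngine;

public class LeaveExample {
    static void leaveQuietly(RtcEngine engine) {
        LeaveChannelOptions options = new LeaveChannelOptions();
        // Assumed field names; check your SDK version.
        options.stopAudioMixing = true;         // also stop any ongoing audio mixing
        options.stopMicrophoneRecording = true; // release the microphone on leave
        engine.leaveChannel(options);           // asynchronous; wait for onLeaveChannel
    }
}
```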
abstract |
Renews the token.
This method updates the token. A token expires after a certain period of time, at which point the SDK can no longer establish a connection with the server. After successfully calling this method, the SDK triggers the onRenewTokenResult callback. Call timing: In any of the following cases, Agora recommends that you generate a new token on your server and then call this method to renew it:
- The SDK triggers the onTokenPrivilegeWillExpire callback, reporting that the token is about to expire.
- The SDK triggers the onRequestToken callback, reporting that the token has expired.
- The SDK triggers the onConnectionStateChanged callback, reporting CONNECTION_CHANGED_TOKEN_EXPIRED (9).

| token | The new token. |
- -7: The RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method.
|
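The recommended renewal flow above can be sketched as an event handler. fetchTokenFromServer is a hypothetical helper standing in for your own backend call; the callback and method names are from the io.agora.rtc2 package.

```java
import io.agora.rtc2.IRtcEngineEventHandler;
import io.agora.rtc2.RtcEngine;

public class TokenRenewalExample {
    // Renew proactively when the SDK warns that the token is about to expire.
    static IRtcEngineEventHandler handler(final RtcEngine engine) {
        return new IRtcEngineEventHandler() {
            @Override
            public void onTokenPrivilegeWillExpire(String token) {
                String freshToken = fetchTokenFromServer(); // your backend call
                engine.renewToken(freshToken);
            }
        };
    }

    static String fetchTokenFromServer() {
        return "new-token-from-your-server"; // placeholder
    }
}
```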
abstract |
Gets the audio device information.
After calling this method, you can find out whether the audio device supports ultra-low-latency capture and playback.
Returns a DeviceInfo object that identifies the audio device information.
|
abstract |
Enables interoperability with the Agora Web SDK (applicable only in the live streaming scenarios).
You can call this method to enable or disable interoperability with the Agora Web SDK. If a channel has Web SDK users, ensure that you call this method, or the video of the Native user will appear as a black screen to the Web user. This method is only applicable in live streaming scenarios; interoperability is enabled by default in communication scenarios.
| enabled | Whether to enable interoperability:
|
|
abstract |
Gets the current connection state of the SDK.
Call timing: This method can be called either before or after joining the channel.
|
abstract |
Enables the audio module.
The audio module is enabled by default. After calling disableAudio to disable the audio module, you can call this method to re-enable it. Call timing: This method can be called either before or after joining the channel. It remains valid after the user leaves the channel.
Agora recommends using the following API methods to control the audio modules separately:
- enableLocalAudio: Whether to enable the microphone to create the local audio stream.
- muteLocalAudioStream: Whether to publish the local audio stream.
- muteRemoteAudioStream: Whether to subscribe to and play the remote audio stream.
- muteAllRemoteAudioStreams: Whether to subscribe to and play all remote audio streams.

A successful call of this method resets enableLocalAudio, muteRemoteAudioStream, and muteAllRemoteAudioStreams. Proceed with caution.
|
abstract |
Disables the audio module.
The audio module is enabled by default, and you can call this method to disable it. Call timing: This method can be called either before or after joining the channel. It remains valid after the user leaves the channel.
Agora recommends using the following API methods to control the audio modules separately:
- enableLocalAudio: Whether to enable the microphone to create the local audio stream.
- muteLocalAudioStream: Whether to publish the local audio stream.
- muteRemoteAudioStream: Whether to subscribe to and play the remote audio stream.
- muteAllRemoteAudioStreams: Whether to subscribe to and play all remote audio streams.
|
abstract |
Disables the audio function in the channel.
Note: This method controls the underlying states of the engine. It remains valid after the user leaves the channel.
|
abstract |
Enables the audio function in the channel.
|
abstract |
Sets the audio profile.
If you need to set the audio scenario, you can either call setAudioScenario, or create(RtcEngineConfig config) and set the mAudioScenario in RtcEngineConfig. Applicable scenarios: This method is suitable for various audio scenarios. You can choose as needed. For example, in scenarios with high audio quality requirements such as music teaching, it is recommended to set profile to MUSIC_HIGH_QUALITY(4). Call timing: You can call this method either before or after joining a channel.
| profile | The audio profile, including the sampling rate, bitrate, encoding mode, and the number of channels.
|
|
abstract |
Sets the audio profile and audio scenario.
Applicable scenarios: This method is suitable for various audio scenarios. You can choose as needed. For example, in scenarios with high audio quality requirements such as music teaching, it is recommended to set profile to MUSIC_HIGH_QUALITY(4) and scenario to AUDIO_SCENARIO_GAME_STREAMING(3). Call timing: You can call this method either before or after joining a channel.
| profile | The audio profile, including the sampling rate, bitrate, encoding mode, and the number of channels.
|
| scenario | The audio scenarios. Under different audio scenarios, the device uses different volume types.
|
|
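The recommended combination for high-quality scenarios such as music teaching can be sketched as follows; the constant names are assumed from io.agora.rtc2.Constants.

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class AudioProfileExample {
    // MUSIC_HIGH_QUALITY(4) with AUDIO_SCENARIO_GAME_STREAMING(3), as the
    // reference recommends for scenarios with high audio quality requirements.
    static void configureForMusic(RtcEngine engine) {
        engine.setAudioProfile(
                Constants.AUDIO_PROFILE_MUSIC_HIGH_QUALITY,
                Constants.AUDIO_SCENARIO_GAME_STREAMING);
    }
}
```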
abstract |
Sets the audio scenario.
Applicable scenarios: This method is suitable for various audio scenarios. You can choose as needed. For example, in scenarios such as music teaching that require high sound quality, it is recommended to set scenario to AUDIO_SCENARIO_GAME_STREAMING(3). Call timing: You can call this method either before or after joining a channel.
| scenario | The audio scenarios. Under different audio scenarios, the device uses different volume types.
|
|
abstract |
Sets high-quality audio preferences.
Call this method and set all the modes before joining a channel. Do NOT call this method again after joining a channel.
Note: Agora does not recommend using this method. If you want to set the audio profile, see setAudioProfile.
| fullband | Full-band codec (48 kHz sampling rate), not compatible with versions before v1.7.4.
|
| stereo | Stereo codec, not compatible with versions before v1.7.4.
|
| fullBitrate | High bitrate. Recommended in voice-only mode.
|
|
abstract |
Adjusts the capturing signal volume.
If you only need to mute the audio signal, Agora recommends that you use muteRecordingSignal instead. Call timing: This method can be called either before or after joining the channel.
| volume | The volume of the user. The value range is [0,400].
|
|
abstract |
Adjusts the playback signal volume of all remote users.
This method is used to adjust the signal volume of all remote users mixed and played locally. If you need to adjust the signal volume of a specified remote user played locally, it is recommended that you call adjustUserPlaybackSignalVolume instead. Call timing: This method can be called either before or after joining the channel.
| volume | The volume of the user. The value range is [0,400].
|
|
abstract |
Enables the reporting of users' volume indication.
This method enables the SDK to regularly report the volume information to the app of the local user who sends a stream and remote users (three users at most) whose instantaneous volumes are the highest. Call timing: This method can be called either before or after joining the channel. Related callbacks: The SDK triggers the onAudioVolumeIndication callback according to the interval you set if this method is successfully called and there are users publishing streams in the channel.
| interval | Sets the time interval between two consecutive volume indications:
|
| smooth | The smoothing factor that sets the sensitivity of the audio volume indicator. The value ranges between 0 and 10. The recommended value is 3. The greater the value, the more sensitive the indicator. |
| reportVad | - true: Enables the voice activity detection of the local user. Once it is enabled, the vad parameter of the onAudioVolumeIndication callback reports the voice activity status of the local user.
|
|
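A sketch of the volume-indication setup described above, using the callback and class names from io.agora.rtc2: enable reporting every 200 ms with the recommended smoothing factor 3 and local voice activity detection on.

```java
import io.agora.rtc2.IRtcEngineEventHandler;
import io.agora.rtc2.RtcEngine;

public class VolumeIndicationExample {
    static void enable(RtcEngine engine) {
        // interval 200 ms, smooth 3 (recommended), reportVad enabled
        engine.enableAudioVolumeIndication(200, 3, true);
    }

    // The matching handler; speakers holds at most the three loudest remote
    // users plus the local user.
    static IRtcEngineEventHandler handler() {
        return new IRtcEngineEventHandler() {
            @Override
            public void onAudioVolumeIndication(AudioVolumeInfo[] speakers, int totalVolume) {
                for (AudioVolumeInfo info : speakers) {
                    // info.uid == 0 denotes the local user in this callback.
                }
            }
        };
    }
}
```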
abstract |
Enables or disables the local audio capture.
The audio function is enabled by default when a user joins a channel. This method disables or re-enables the local audio function to stop or restart local audio capturing. The difference between this method and muteLocalAudioStream is as follows:
- enableLocalAudio: Disables or re-enables local audio capturing and processing. If you disable or re-enable local audio capturing using the enableLocalAudio method, the local user might hear a pause in the remote audio playback.
- muteLocalAudioStream: Sends or stops sending the local audio streams without affecting the audio capture status.

Applicable scenarios: This method does not affect receiving the remote audio streams. enableLocalAudio (false) is suitable for scenarios where the user wants to receive remote audio streams without sending locally captured audio.
Call timing: You can call this method either before or after joining a channel. Calling it before joining a channel only sets the device state, and it takes effect immediately after you join the channel.
Related callbacks: Once the local audio function is disabled or re-enabled, the SDK triggers the onLocalAudioStateChanged callback, which reports LOCAL_AUDIO_STREAM_STATE_STOPPED (0) or LOCAL_AUDIO_STREAM_STATE_RECORDING (1).

| enabled | - true: (Default) Re-enable the local audio function, that is, to start the local audio capturing device (for example, the microphone).
|
|
abstract |
Stops or resumes publishing the local audio stream.
This method is used to control whether to publish the locally captured audio stream. If you call this method to stop publishing locally captured audio streams, the audio capturing device will still work and won't be affected. Call timing: This method can be called either before or after joining the channel. Related callbacks: After successfully calling this method, the local end triggers callback onAudioPublishStateChanged; the remote end triggers onUserMuteAudio and onRemoteAudioStateChanged callbacks.
| muted | Whether to stop publishing the local audio stream:
|
|
abstract |
Stops or resumes subscribing to the audio stream of a specified user.
Call timing: Call this method after joining a channel. Related callbacks: After a successful method call, the SDK triggers the onAudioSubscribeStateChanged callback.
| uid | The user ID of the specified user. |
| muted | Whether to subscribe to the specified remote user's audio stream.
|
|
abstract |
Adjusts the playback signal volume of a specified remote user.
You can call this method to adjust the playback volume of a specified remote user. To adjust the playback volume of different remote users, call the method multiple times, once for each remote user. Call timing: Call this method after joining a channel.
| uid | The user ID of the remote user. |
| volume | The volume of the user. The value range is [0,400].
|
|
abstract |
Stops or resumes subscribing to the audio streams of all remote users.
After successfully calling this method, the local user stops or resumes subscribing to the audio streams of all remote users, including all subsequent users. Call timing: Call this method after joining a channel.
If you call this method together with enableAudio or disableAudio, the latest call prevails. By default, the SDK subscribes to the audio streams of all remote users when joining a channel. To modify this behavior, you can set autoSubscribeAudio to false when calling joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel, which cancels the subscription to the audio streams of all users upon joining the channel.

| muted | Whether to stop subscribing to the audio streams of all remote users:
|
|
abstract |
Enables the video module.
The video module is disabled by default; call this method to enable it. If you need to disable the video module later, call disableVideo. Call timing: This method can be called either before joining a channel or while in the channel:
A successful call of this method triggers the onRemoteVideoStateChanged callback on the remote client.

Agora recommends using the following API methods to control the video modules separately:
- enableLocalVideo: Whether to enable the camera to create the local video stream.
- muteLocalVideoStream: Whether to publish the local video stream.
- muteRemoteVideoStream: Whether to subscribe to and play the remote video stream.
- muteAllRemoteVideoStreams: Whether to subscribe to and play all remote video streams.

A successful call of this method resets enableLocalVideo, muteRemoteVideoStream, and muteAllRemoteVideoStreams. Proceed with caution.
|
abstract |
Disables the video module.
This method is used to disable the video module. Call timing: This method can be called either before or after joining the channel.
After calling this method, you can call enableVideo to switch to video mode again. Related callbacks: A successful call of this method triggers the onUserEnableVideo (false) callback on the remote client.

Agora recommends using the following API methods to control the video modules separately:
- enableLocalVideo: Whether to enable the camera to create the local video stream.
- muteLocalVideoStream: Whether to publish the local video stream.
- muteRemoteVideoStream: Whether to subscribe to and play the remote video stream.
- muteAllRemoteVideoStreams: Whether to subscribe to and play all remote video streams.
|
abstract |
Sets the video encoder configuration.
Sets the encoder configuration for the local video. Each configuration profile corresponds to a set of video parameters, including the resolution, frame rate, and bitrate. Call timing: You can call this method either before or after joining a channel. If the user does not need to reset the video encoding properties after joining the channel, Agora recommends calling this method before enableVideo to reduce the time to render the first video frame.
Note:
- Both this method and the getMirrorApplied method support setting the mirroring effect. Agora recommends that you choose only one method to set it up. Using both methods at the same time causes the mirroring effects to overlap, and the mirror settings fail.
- The config specified in this method is the maximum value under ideal network conditions. If the video engine cannot render the video using the specified config due to unreliable network conditions, the parameters further down the list are considered until a successful configuration is found.

| config | Video profile. See VideoEncoderConfiguration. |
|
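A sketch of setting the encoder configuration with one common parameter set (720p at 15 fps, adaptive orientation); the constants are assumed from io.agora.rtc2.video.VideoEncoderConfiguration.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.video.VideoEncoderConfiguration;

public class EncoderConfigExample {
    // Resolution, frame rate, and bitrate together form one "profile", as the
    // reference describes; STANDARD_BITRATE lets the SDK choose the bitrate.
    static void configure(RtcEngine engine) {
        VideoEncoderConfiguration config = new VideoEncoderConfiguration(
                VideoEncoderConfiguration.VD_1280x720,
                VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_15,
                VideoEncoderConfiguration.STANDARD_BITRATE,
                VideoEncoderConfiguration.ORIENTATION_MODE.ORIENTATION_MODE_ADAPTIVE);
        engine.setVideoEncoderConfiguration(config);
    }
}
```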
abstract |
Queries the video codec capabilities of the SDK.
Returns a CodecCapInfo array indicating the video encoding capability of the device, if the method call succeeds.
|
abstract |
Queries the device score.
Applicable scenarios: In high-definition or ultra-high-definition video scenarios, you can first call this method to query the device's score. If the returned score is low (for example, below 60), you need to lower the video resolution to avoid affecting the video experience. The minimum device score required varies by business scenario. For specific score recommendations, please contact technical support.
|
abstract |
Queries the focal length capability supported by the camera.
If you want to enable the wide-angle or ultra-wide-angle mode for camera capture, it is recommended to start by calling this method to check whether the device supports the required focal length capability. Then, adjust the camera's focal length configuration based on the query result by calling setCameraCapturerConfiguration, ensuring the best camera capture performance.
Returns an array of AgoraFocalLengthInfo objects, which contain the camera's orientation and focal length type.
|
abstract |
Sets the camera capture configuration.
Call timing: Call this method before enabling local camera capture, such as before calling startPreview(Constants.VideoSourceType sourceType) and joinChannel(String token, String channelId, int uid, ChannelMediaOptions options).
Note: It is recommended to call queryCameraFocalLengthCapability first to check the device's focal length capabilities, and then configure based on the query results. Due to limitations on some Android devices, even if you set the focal length type according to the results returned in queryCameraFocalLengthCapability, the settings may not take effect.

| config | The camera capture configuration. See CameraCapturerConfiguration. |
|
abstract |
Initializes the local video view.
This method initializes the video view of a local stream on the local device. It only affects the video seen by the local user and does not impact the publishing of the local video. Call this method to bind the local video stream to a video view (view) and to set the rendering and mirror modes of the video view. The binding remains valid after leaving the channel. To stop rendering or unbind the local video from the view, set view as NULL. Applicable scenarios: After initialization, call this method to set the local video and then join the channel. In real-time interactive scenarios, if you need to simultaneously view multiple preview frames in the local video preview, each at a different observation position along the video link, you can repeatedly call this method to set different views and set a different observation position for each view. For example, by setting the video source to the camera and then configuring two views with position set to VIDEO_MODULE_POSITION_POST_CAPTURER_ORIGIN and VIDEO_MODULE_POSITION_POST_CAPTURER, you can simultaneously preview the raw, unprocessed video frame and the video frame that has undergone preprocessing (image enhancement effects, virtual background, watermark) in the local video preview. Call timing: You can call this method either before or after joining a channel.
Note: To update only the rendering or mirror mode of the local video view during a call, use setLocalRenderMode(int renderMode, int mirrorMode) instead.

| local | The local video view and settings. See VideoCanvas. |
|
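The binding described above can be sketched as follows, assuming the io.agora.rtc2.video.VideoCanvas class and an Android SurfaceView as the render target.

```java
import android.content.Context;
import android.view.SurfaceView;
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.video.VideoCanvas;

public class LocalVideoSetupExample {
    // Bind the local stream to a SurfaceView before joining the channel.
    static SurfaceView bindLocalView(RtcEngine engine, Context context) {
        SurfaceView view = new SurfaceView(context);
        // uid 0 tells the SDK this canvas is for the local user.
        engine.setupLocalVideo(new VideoCanvas(view, VideoCanvas.RENDER_MODE_HIDDEN, 0));
        return view; // add this view to your layout hierarchy
    }
}
```

Passing a VideoCanvas with view set to null later unbinds the stream, as the reference notes.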
abstract |
Initializes the video view of a remote user.
This method initializes the video view of a remote stream on the local device. It affects only the video view that the local user sees. Call this method to bind the remote video stream to a video view and to set the rendering and mirror modes of the video view. You need to specify the ID of the remote user in this method. If the remote user ID is unknown to the application, set it after the app receives the onUserJoined callback. To unbind the remote user from the view, set the view parameter to NULL. Once the remote user leaves the channel, the SDK unbinds the remote user. In the scenarios of custom layout for mixed videos on the mobile end, you can call this method and set a separate view for rendering each sub-video stream of the mixed video stream.
Note:
- To update the rendering or mirror mode of the remote video view during a call, use the setRemoteRenderMode method.
- You can also bind the remote user to the view after receiving the onFirstRemoteVideoDecoded callback.

| remote | The remote video view and settings. See VideoCanvas. |
|
abstract |
Updates the display mode of the video view of a remote user.
After initializing the video view of a remote user, you can call this method to update its rendering and mirror modes. This method affects only the video view that the local user sees.
| uid | ID of the remote user. |
| renderMode | Sets the remote display mode:
|
|
abstract |
Sets the local video display mode.
Call this method to set the local video display mode. This method can be called multiple times during a call to change the display mode.
| renderMode | The local video display mode.
|
|
abstract |
Updates the display mode of the video view of a remote user.
After initializing the video view of a remote user, you can call this method to update its rendering and mirror modes. This method affects only the video view that the local user sees.
Note: Ensure that you have called the setupRemoteVideo method to initialize the remote video view before calling this method.

| uid | The user ID of the remote user. |
| renderMode | The rendering mode of the remote user view.
|
| mirrorMode | The mirror mode of the remote user view.
|
|
abstract |
Updates the display mode of the local video view.
After initializing the local video view, you can call this method to update its rendering and mirror modes. It affects only the video view that the local user sees and does not impact the publishing of the local video. Call timing: Ensure that you have called the setupLocalVideo method to initialize the local video view before calling this method.
| renderMode | The local video display mode.
|
| mirrorMode | For the local user:
|
|
abstract |
Enables the local video preview.
You can call this method to enable local video preview. Call timing: This method must be called after enableVideo and setupLocalVideo.
To disable the local video preview, call stopPreview().
|
abstract |
Enables the local video preview and specifies the video source for the preview.
This method is used to start local video preview and specify the video source that appears in the preview screen. Call timing: This method must be called after enableVideo and setupLocalVideo.
To disable the local video preview, call stopPreview().

| sourceType | The type of the video source. See VideoSourceType. |
|
abstract |
Stops the local video preview.
Applicable scenarios: After calling startPreview() to start the preview, if you want to stop the local video preview, call this method. Call timing: Call this method before joining a channel or after leaving a channel.
|
abstract |
Stops the local video preview.
Applicable scenarios: After calling startPreview(Constants.VideoSourceType sourceType) to start the preview, if you want to stop the local video preview, call this method. Call timing: Call this method before joining a channel or after leaving a channel.
| sourceType | The type of the video source. See VideoSourceType. |
|
abstract |
Enables/Disables the local video capture.
This method disables or re-enables the local video capture, and does not affect receiving the remote video stream. After calling enableVideo, the local video capture is enabled by default. If you call enableLocalVideo (false) to disable local video capture within the channel, it also stops publishing the video stream within the channel. If you want to restart video capture, you can call enableLocalVideo (true) and then call updateChannelMediaOptions to set the options parameter to publish the locally captured video stream in the channel. After the local video capturer is successfully disabled or re-enabled, the SDK triggers the onRemoteVideoStateChanged callback on the remote client.
| enabled | Whether to enable the local video capture.
|
|
abstract |
Starts camera capture.
You can call this method to start capturing video from one or more cameras by specifying sourceType.
| sourceType | The type of the video source. See VideoSourceType. Note:
|
| config | The configuration of the video capture. See CameraCapturerConfiguration. |
|
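Multi-camera capture can be sketched as follows. The enum and constructor names (Constants.VideoSourceType, CameraCapturerConfiguration.CAMERA_DIRECTION) are assumed from the rtc2 package; verify them against your SDK version.

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.video.CameraCapturerConfiguration;

public class CameraCaptureExample {
    // Start capture from both cameras, each bound to its own video source type.
    static void startBothCameras(RtcEngine engine) {
        engine.startCameraCapture(
                Constants.VideoSourceType.VIDEO_SOURCE_CAMERA_PRIMARY,
                new CameraCapturerConfiguration(
                        CameraCapturerConfiguration.CAMERA_DIRECTION.CAMERA_REAR));
        engine.startCameraCapture(
                Constants.VideoSourceType.VIDEO_SOURCE_CAMERA_SECONDARY,
                new CameraCapturerConfiguration(
                        CameraCapturerConfiguration.CAMERA_DIRECTION.CAMERA_FRONT));
    }
}
```

Call stopCameraCapture with the same sourceType to stop each capture individually.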
abstract |
Stops camera capture.
After calling startCameraCapture to start capturing video through one or more cameras, you can call this method and set the sourceType parameter to stop the capture from the specified cameras.
| sourceType | The type of the video source. See VideoSourceType. |
|
abstract |
Starts the local video mixing.
After calling this method, you can merge multiple video streams into one video stream locally. For example, you can merge the video streams captured by the camera, screen sharing, media player, remote video, video files, images, etc. into one video stream, and then publish the mixed video stream to the channel. Applicable scenarios: You can enable the local video mixing function in scenarios such as remote conferences, live streaming, and online education, which allows users to view and manage multiple videos more conveniently, and supports portrait-in-picture effect and other functions. The following is a typical use case for implementing the portrait-in-picture effect:1. Call enableVirtualBackground(boolean enabled, VirtualBackgroundSource backgroundSource, SegmentationProperty segproperty), and set the custom background image to BACKGROUND_NONE, that is, separate the portrait and the background in the video captured by the camera.
2. Call startScreenCapture to start capturing the screen sharing video stream.
3. Call this method and use the video streams captured by startCameraCapture or startScreenCapture as the input streams of the local video mixing.
4. Set publishTranscodedVideoTrack in ChannelMediaOptions to true when calling joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) or updateChannelMediaOptions to publish the mixed video stream to the channel.
Related callbacks: When you fail to call this method, the SDK triggers the onLocalVideoTranscoderError callback to report the reason.| config | Configuration of the local video mixing, see LocalTranscoderConfiguration.Attention:
|
|
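The camera-plus-screen mixing described above might be sketched like this. The nested TranscodingVideoStream type and the field names (sourceType, x, y, width, height, zOrder, transcodingVideoStreams) are assumptions about LocalTranscoderConfiguration; verify them against the class reference before use. `engine` is assumed to be an initialized RtcEngine with camera and screen capture already started:

```java
LocalTranscoderConfiguration config = new LocalTranscoderConfiguration();
config.transcodingVideoStreams = new java.util.ArrayList<>();

// Screen sharing as the full-size background layer.
LocalTranscoderConfiguration.TranscodingVideoStream screen =
        new LocalTranscoderConfiguration.TranscodingVideoStream();
screen.sourceType = Constants.VideoSourceType.VIDEO_SOURCE_SCREEN_PRIMARY;
screen.x = 0; screen.y = 0; screen.width = 720; screen.height = 1280;
screen.zOrder = 0;
config.transcodingVideoStreams.add(screen);

// Camera as a small picture-in-picture layer on top.
LocalTranscoderConfiguration.TranscodingVideoStream camera =
        new LocalTranscoderConfiguration.TranscodingVideoStream();
camera.sourceType = Constants.VideoSourceType.VIDEO_SOURCE_CAMERA_PRIMARY;
camera.x = 16; camera.y = 16; camera.width = 180; camera.height = 320;
camera.zOrder = 1;
config.transcodingVideoStreams.add(camera);

engine.startLocalVideoTranscoder(config);
```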
abstract |
Stops the local video mixing.
After calling startLocalVideoTranscoder, call this method if you want to stop the local video mixing.
|
abstract |
Updates the local video mixing configuration.
After calling startLocalVideoTranscoder, call this method if you want to update the local video mixing configuration.
Attention: The video streams used as input for the local video mixing must come from startCameraCapture or startScreenCapture.| config | Configuration of the local video mixing, see LocalTranscoderConfiguration. |
|
abstract |
Starts local audio mixing.
This method supports merging multiple audio streams into one audio stream locally. For example, merging the audio streams captured from the local microphone, and that from the media player, the sound card, and the remote users into one audio stream, and then publish the merged audio stream to the channel.
To publish the mixed audio stream, set publishMixedAudioTrack in ChannelMediaOptions to true, and then publish the mixed audio stream to the channel.| config | The configurations for mixing the local audio. See LocalAudioMixerConfiguration. |
|
abstract |
Updates the configurations for mixing audio streams locally.
After calling startLocalAudioMixer, call this method if you want to update the local audio mixing configuration. Call timing: Call this method after startLocalAudioMixer.
| config | The configurations for mixing the local audio. See LocalAudioMixerConfiguration. |
|
abstract |
Stops the local audio mixing.
After calling startLocalAudioMixer, call this method if you want to stop the local audio mixing. Call timing: Call this method after startLocalAudioMixer.
|
abstract |
Stops or resumes publishing the local video stream.
This method is used to control whether to publish the locally captured video stream. If you call this method to stop publishing locally captured video streams, the video capturing device will still work and won't be affected. Compared to enableLocalVideo (false), which can also cancel the publishing of local video stream by turning off the local video stream capture, this method responds faster. Call timing: This method can be called either before or after joining the channel. Related callbacks: After successfully calling this method, the local end triggers callback onVideoPublishStateChanged; the remote end triggers onUserMuteVideo and onRemoteVideoStateChanged callbacks.
| muted | Whether to stop publishing the local video stream.
|
|
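A sketch of the pause/resume publishing flow described above; `engine` is assumed to be an initialized RtcEngine that has joined a channel:

```java
// Pause publishing the locally captured video; the camera keeps running,
// so resuming is faster than enableLocalVideo(false)/(true).
engine.muteLocalVideoStream(true);

// ...later, resume publishing; remote users get onUserMuteVideo(uid, false).
engine.muteLocalVideoStream(false);
```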
abstract |
Stops or resumes subscribing to the video stream of a specified user.
Call timing: Call this method after joining a channel. Related callbacks: After a successful method call, the SDK triggers the onVideoSubscribeStateChanged callback.
| uid | The user ID of the specified user. |
| muted | Whether to subscribe to the specified remote user's video stream.
|
|
abstract |
Stops or resumes subscribing to the video streams of all remote users.
After successfully calling this method, the local user stops or resumes subscribing to the video streams of all remote users, including all subsequent users. Call timing: Call this method after joining a channel.
Attention: If you call this method together with enableVideo or disableVideo, the most recent call prevails. By default, the SDK subscribes to the video streams of all remote users when joining a channel. To modify this behavior, you can set autoSubscribeVideo to false when calling joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel, which cancels the subscription to the video streams of all users upon joining the channel.| muted | Whether to stop subscribing to the video streams of all remote users.
|
|
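The join-without-subscribing pattern mentioned above can be sketched as follows. `engine`, `token`, `channelId`, and `uid` are assumed placeholders for an initialized RtcEngine and your own join parameters:

```java
// Join without subscribing to any remote video streams.
ChannelMediaOptions options = new ChannelMediaOptions();
options.autoSubscribeVideo = false;
engine.joinChannel(token, channelId, uid, options);

// ...later, resume subscribing to all remote users, including future joiners.
engine.muteAllRemoteVideoStreams(false);
```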
abstract |
Sets the image enhancement options.
Enables or disables image enhancement, and sets the options. Call timing: Call this method after calling enableVideo or startPreview(Constants.VideoSourceType sourceType).
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable the image enhancement function:
|
| options | The image enhancement options. See BeautyOptions. |
|
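A sketch of enabling image enhancement after enableVideo; `engine` is an assumed initialized RtcEngine, and the BeautyOptions field names and LIGHTENING_CONTRAST_NORMAL constant follow the BeautyOptions class referenced above:

```java
// Moderate image enhancement settings.
BeautyOptions beauty = new BeautyOptions();
beauty.lighteningContrastLevel = BeautyOptions.LIGHTENING_CONTRAST_NORMAL;
beauty.lighteningLevel = 0.5f;   // skin brightening
beauty.smoothnessLevel = 0.5f;   // skin smoothing
beauty.rednessLevel = 0.1f;      // slight redness

engine.setBeautyEffectOptions(true, beauty);

// To turn the effect off later:
engine.setBeautyEffectOptions(false, beauty);
```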
abstract |
Sets the image enhancement options and specifies the media source.
Enables or disables image enhancement, and sets the options. Both this method and setBeautyEffectOptions(boolean enabled, BeautyOptions options) set image enhancement options, but this method allows you to specify the media source to which the image enhancement is applied. Call timing: Call this method after calling enableVideo or startPreview(Constants.VideoSourceType sourceType).
| enabled | Whether to enable the image enhancement function:
|
| options | The image enhancement options. See BeautyOptions. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Sets the face shape beauty options.
@technical preview
Calling this method allows you to adjust specific parts of the face, achieving minor cosmetic effects such as face slimming, eye enlargement, and nose slimming. Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable the face shape effect:
|
| options | Face shaping style options, see FaceShapeBeautyOptions. |
|
abstract |
Sets the face shape options and specifies the media source.
@technical preview
Calling this method allows you to modify various parts of the face, achieving slimming, eye enlargement, nose slimming, and other minor cosmetic effects at once using preset parameters, and supports fine-tuning the overall modification intensity. Both this method and setFaceShapeBeautyOptions(boolean enabled, FaceShapeBeautyOptions options) can be used to set beauty effect options; the difference is that this method supports specifying the media source to which the beauty effects are applied. Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable the face shape effect:
|
| options | Face shaping style options, see FaceShapeBeautyOptions. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Gets the beauty effect options.
@technical preview
Calling this method can retrieve the current settings of the beauty effect. Applicable scenarios: When the user opens the beauty style and style intensity menu in the app, you can call this method to get the current beauty effect options, then refresh the menu in the user interface according to the results, and update the UI. Call timing: Call this method after calling enableVideo.
FaceShapeBeautyOptions instance, if the method call succeeds.
|
abstract |
Gets the beauty effect options.
@technical preview
Calling this method can retrieve the current settings of the beauty effect. Applicable scenarios: When the user opens the beauty style and style intensity menu in the app, you can call this method to get the current beauty effect options, then refresh the menu in the user interface according to the results, and update the UI. Call timing: Call this method after calling enableVideo.
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
FaceShapeBeautyOptions instance, if the method call succeeds.
|
abstract |
Sets the options for beauty enhancement facial areas.
@technical preview
If the preset beauty effects implemented in the setFaceShapeBeautyOptions(boolean enabled, FaceShapeBeautyOptions options, Constants.MediaSourceType sourceType) method do not meet expectations, you can use this method to set beauty area options, individually fine-tune each part of the face, and achieve a more refined beauty effect. Call timing: Call this method after calling setFaceShapeBeautyOptions(boolean enabled, FaceShapeBeautyOptions options) or setFaceShapeBeautyOptions(boolean enabled, FaceShapeBeautyOptions options, Constants.MediaSourceType sourceType).
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| options | Facial enhancement areas, see FaceShapeAreaOptions. |
|
abstract |
Sets the image enhancement options for facial areas and specifies the media source.
@technical preview
If the preset beauty effects implemented in the setFaceShapeBeautyOptions(boolean enabled, FaceShapeBeautyOptions options, Constants.MediaSourceType sourceType) method do not meet expectations, you can use this method to set beauty area options, individually fine-tune each part of the face, and achieve a more refined beauty effect. Both this method and setFaceShapeAreaOptions(FaceShapeAreaOptions options) can be used to set beauty effect options; the difference is that this method supports specifying the media source to which the beauty effects are applied. Call timing: Call this method after calling setFaceShapeBeautyOptions(boolean enabled, FaceShapeBeautyOptions options, Constants.MediaSourceType sourceType) or setFaceShapeBeautyOptions(boolean enabled, FaceShapeBeautyOptions options).
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| options | Facial enhancement areas, see FaceShapeAreaOptions. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Gets the facial beauty area options.
@technical preview
Calling this method can retrieve the current settings of the beauty effect. Applicable scenarios: When the user opens the facial beauty area and shaping intensity menu in the app, you can call this method to get the current beauty effect options, then refresh the menu in the user interface according to the results, and update the UI. Call timing: Call this method after calling enableVideo.
| shapeArea | Facial enhancement areas.
|
FaceShapeAreaOptions instance, if the method call succeeds.
|
abstract |
Gets the facial beauty area options.
@technical preview
Calling this method can retrieve the current settings of the beauty effect. Applicable scenarios: When the user opens the facial beauty area and shaping intensity menu in the app, you can call this method to get the current beauty effect options, then refresh the menu in the user interface according to the results, and update the UI. Call timing: Call this method after calling enableVideo.
| shapeArea | Facial enhancement areas.
|
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
FaceShapeAreaOptions instance, if the method call succeeds.
|
abstract |
Sets the filter effect options.
Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable the filter effect:
|
| options | The filter effect options. See FilterEffectOptions. |
|
abstract |
Sets the filter effect options and specifies the media source.
Both this method and setBeautyEffectOptions(boolean enabled, BeautyOptions options, Constants.MediaSourceType sourceType) set filter effect options. The difference is that this method allows you to specify the media source to which the filter effect option is applied. Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable the filter effect:
|
| options | The filter effect options. See FilterEffectOptions. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Sets low-light enhancement.
You can call this method to enable the low-light enhancement feature and set the options of the low-light enhancement effect. Applicable scenarios: The low-light enhancement feature can adaptively adjust the brightness value of the video captured in situations with low or uneven lighting, such as backlit, cloudy, or dark scenes. It restores or highlights the image details and improves the overall visual effect of the video. Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally. You can use this method together with setVideoDenoiserOptions(boolean enabled, VideoDenoiserOptions options) to also achieve video noise reduction.| enabled | Whether to enable low-light enhancement:
|
| options | The low-light enhancement options. See LowLightEnhanceOptions. |
|
abstract |
Sets low-light enhancement options and specifies the media source.
Both this method and setLowlightEnhanceOptions(boolean enabled, LowLightEnhanceOptions options) set low-light enhancement options, but this method allows you to specify the media source to which the low-light enhancement options are applied. Applicable scenarios: The low-light enhancement feature can adaptively adjust the brightness of video captured in low or uneven lighting, such as backlit, cloudy, or dark scenes, improving the overall visual effect of the video. Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable low-light enhancement:
|
| options | The low-light enhancement options. See LowLightEnhanceOptions. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Sets video noise reduction.
You can call this method to enable the video noise reduction feature and set the options of the video noise reduction effect. Applicable scenarios: Dark environments and low-end video capture devices can cause video images to contain significant noise, which affects video quality. In real-time interactive scenarios, video noise also consumes bitstream resources and reduces encoding efficiency during encoding. Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally. You can also call the setBeautyEffectOptions(boolean enabled, BeautyOptions options) method to enable beauty and skin smoothing for a better video noise reduction effect. The recommended BeautyOptions settings for an intense noise reduction effect are: lighteningContrastLevel: LIGHTENING_CONTRAST_NORMAL; lighteningLevel: 0.0; smoothnessLevel: 0.5; rednessLevel: 0.0; sharpnessLevel: 0.1.| enabled | Whether to enable video noise reduction:
|
| options | The video noise reduction options. See VideoDenoiserOptions. |
|
abstract |
Sets video noise reduction and specifies the media source.
You can call this method to enable the video noise reduction feature and set the options of the video noise reduction effect. Both this method and setVideoDenoiserOptions(boolean enabled, VideoDenoiserOptions options) set video noise reduction, but this method allows you to specify the media source to which the noise reduction is applied. Applicable scenarios: Dark environments and low-end video capture devices can cause video images to contain significant noise, which affects video quality. In real-time interactive scenarios, video noise also consumes bitstream resources and reduces encoding efficiency during encoding. Call timing: Call this method after calling enableVideo.
Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally. You can also call the setBeautyEffectOptions(boolean enabled, BeautyOptions options) method to enable beauty and skin smoothing for a better video noise reduction effect. The recommended BeautyOptions settings for an intense noise reduction effect are: lighteningContrastLevel: LIGHTENING_CONTRAST_NORMAL; lighteningLevel: 0.0; smoothnessLevel: 0.5; rednessLevel: 0.0; sharpnessLevel: 0.1.| enabled | Whether to enable video noise reduction:
|
| options | The video noise reduction options. See VideoDenoiserOptions. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Sets color enhancement.
The video images captured by the camera can have color distortion. The color enhancement feature intelligently adjusts video characteristics such as saturation and contrast to enhance the video color richness and color reproduction, making the video more vivid. You can call this method to enable the color enhancement feature and set the options of the color enhancement effect.
Call timing: Call this method after calling enableVideo. Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable color enhancement:
|
| options | The color enhancement options. See ColorEnhanceOptions. |
|
abstract |
Sets color enhance options and specifies the media source.
The video images captured by the camera can have color distortion. The color enhancement feature intelligently adjusts video characteristics such as saturation and contrast to enhance the video color richness and color reproduction, making the video more vivid. Both this method and setColorEnhanceOptions(boolean enabled, ColorEnhanceOptions options) set color enhancement options, but this method allows you to specify the media source to which the color enhance options are applied.
Call timing: Call this method after calling enableVideo. Attention: This feature relies on the dynamic library libagora_clear_vision_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable color enhancement:
|
| options | The color enhancement options. See ColorEnhanceOptions. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Creates a video effect object.
Creates an IVideoEffectObject video effect object and returns its pointer.
Attention: Call the enableVideo method before calling this method. This method applies to Android 4.4 and later.| bundlePath | The path to the video effect resource package. |
| sourceType | The media source type, such as PRIMARY_CAMERA_SOURCE. See MediaSourceType. |
IVideoEffectObject object, if the method call succeeds.null, if the method call fails.
|
abstract |
Destroys a video effect object.
Attention: Call the enableVideo method before using this method. This method is applicable to Android 4.4 and later.| videoEffectObject | The video effect object to be destroyed. See IVideoEffectObject. |
|
abstract |
Enables/Disables the virtual background.
The virtual background feature enables the local user to replace their original background with a static image, dynamic video, blurred background, or portrait-background segmentation to achieve picture-in-picture effect. Once the virtual background feature is enabled, all users in the channel can see the custom background. Call this method after calling enableVideo or startPreview(Constants.VideoSourceType sourceType).
Attention: This feature relies on the dynamic library libagora_segmentation_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable virtual background:
|
| backgroundSource | The custom background. See VirtualBackgroundSource. To adapt the resolution of the custom background image to that of the video captured by the SDK, the SDK scales and crops the custom background image while ensuring that the content of the custom background image is not distorted. |
| segproperty | Processing properties for background images. See SegmentationProperty. |
|
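A sketch of enabling a blurred virtual background. The VirtualBackgroundSource field names and the BACKGROUND_BLUR / BLUR_DEGREE_HIGH constants are assumptions about that class; `engine` is an assumed initialized RtcEngine with video enabled:

```java
// Replace the local user's background with a strong blur.
VirtualBackgroundSource bg = new VirtualBackgroundSource();
bg.backgroundSourceType = VirtualBackgroundSource.BACKGROUND_BLUR;
bg.blurDegree = VirtualBackgroundSource.BLUR_DEGREE_HIGH;

// Default segmentation properties (portrait/background separation).
SegmentationProperty seg = new SegmentationProperty();

engine.enableVirtualBackground(true, bg, seg);
```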
abstract |
Enables virtual background and specifies the media source, or disables virtual background.
The virtual background feature enables the local user to replace their original background with a static image, dynamic video, blurred background, or portrait-background segmentation to achieve picture-in-picture effect. Once the virtual background feature is enabled, all users in the channel can see the custom background. Both this method and enableVirtualBackground(boolean enabled, VirtualBackgroundSource backgroundSource, SegmentationProperty segproperty) enable/disable virtual background, but this method allows you to specify the media source to which the virtual background is applied. Call this method after calling enableVideo or startPreview(Constants.VideoSourceType sourceType).
Attention: This feature relies on the dynamic library libagora_segmentation_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| enabled | Whether to enable virtual background:
|
| backgroundSource | The custom background. See VirtualBackgroundSource. To adapt the resolution of the custom background image to that of the video captured by the SDK, the SDK scales and crops the custom background image while ensuring that the content of the custom background image is not distorted. |
| segproperty | Processing properties for background images. See SegmentationProperty. |
| sourceType | The type of the media source to which the filter effect is applied. See MediaSourceType.Attention: In this method, this parameter supports only the following two settings:
|
|
abstract |
Sets the default audio playback route.
Most mobile phones have two audio routes: an earpiece at the top, and a speakerphone at the bottom. The earpiece plays at a lower volume, and the speakerphone at a higher volume. When setting the default audio route, you determine whether audio playback comes through the earpiece or speakerphone when no external audio device is connected. In different scenarios, the default audio routing of the system is also different. See the following:
Attention: To switch the audio route after joining a channel, call setEnableSpeakerphone. Related callbacks: After successfully calling this method, the SDK triggers the onAudioRouteChanged callback to report the current audio route.| defaultToSpeaker | Whether to set the speakerphone as the default audio route:
|
|
abstract |
Enables/Disables the audio route to the speakerphone.
Applicable scenarios: If the default audio route of the SDK or the setting in setDefaultAudioRouteToSpeakerphone cannot meet your requirements, you can call this method to switch the current audio route. Call timing: Call this method after joining a channel. Related callbacks: After successfully calling this method, the SDK triggers the onAudioRouteChanged callback to report the current audio route.
| enabled | Sets whether to enable the speakerphone or earpiece:
|
|
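A sketch of switching the audio route at runtime, using the methods documented above; `engine` is an assumed initialized RtcEngine that has joined a channel:

```java
// Route playback to the speakerphone.
engine.setEnableSpeakerphone(true);

// Check the current state (see isSpeakerphoneEnabled below).
boolean onSpeaker = engine.isSpeakerphoneEnabled();

// Switch back to the earpiece (or a connected headset, if any).
engine.setEnableSpeakerphone(false);
```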
abstract |
Selects the audio playback route in communication audio mode.
This method is used to switch the audio route from Bluetooth headphones to earpiece, wired headphones or speakers in communication audio mode ( MODE_IN_COMMUNICATION ). Call timing: This method can be called either before or after joining the channel. Related callbacks: After successfully calling this method, the SDK triggers the onAudioRouteChanged callback to report the current audio route.
Attention: Using this method and the setEnableSpeakerphone method at the same time may cause conflicts. Agora recommends that you use the setRouteInCommunicationMode method alone.| route | The audio playback route you want to use:
|
|
abstract |
Checks whether the speakerphone is enabled.
Call timing: You can call this method either before or after joining a channel.
true: The speakerphone is enabled, and the audio plays from the speakerphone.false: The speakerphone is not enabled, and the audio plays from devices other than the speakerphone. For example, the headset or earpiece.
|
abstract |
Enables in-ear monitoring.
This method enables or disables in-ear monitoring. Call timing: This method can be called either before or after joining the channel.
| enabled | Enables or disables in-ear monitoring.
|
|
abstract |
Enables in-ear monitoring.
This method enables or disables in-ear monitoring. Call timing: This method can be called either before or after joining the channel.
| enabled | Enables or disables in-ear monitoring.
|
| includeAudioFilters | The audio filter types of in-ear monitoring:
|
|
abstract |
Sets the volume of the in-ear monitor.
Call timing: This method can be called either before or after joining the channel.
| volume | The volume of the user. The value range is [0,400].
|
|
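A sketch combining the two in-ear monitoring calls above; `engine` is an assumed initialized RtcEngine:

```java
// Turn on in-ear monitoring so the singer hears their own voice in the headset.
engine.enableInEarMonitoring(true);

// Lower the monitor volume; the documented range is [0, 400].
engine.setInEarMonitoringVolume(60);
```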
abstract |
Changes the voice pitch of the local speaker.
Call timing: This method can be called either before or after joining the channel.
| pitch | The local voice pitch. The value range is [0.5,2.0]. The lower the value, the lower the pitch. The default value is 1.0 (no change to the pitch). |
|
abstract |
Sets the formant ratio to change the timbre of human voice.
The formant ratio affects the timbre of the voice. The smaller the value, the deeper the voice; the larger the value, the sharper the voice. After you set the formant ratio, all users in the channel can hear the changed voice. If you want to change the timbre and pitch of the voice at the same time, Agora recommends using this method together with setLocalVoicePitch. Applicable scenarios: You can call this method to set the formant ratio of local audio to change the timbre of the human voice. Call timing: This method can be called either before or after joining the channel.
| formantRatio | The formant ratio. The value range is [-1.0, 1.0]. The default value is 0.0, which means do not change the timbre of the voice.Note: Agora recommends setting this value within the range of [-0.4, 0.6]. Otherwise, the voice may be seriously distorted. |
|
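A sketch of adjusting pitch and formant together, as recommended above; `engine` is an assumed initialized RtcEngine:

```java
// Raise the pitch slightly; valid range is [0.5, 2.0], 1.0 = unchanged.
engine.setLocalVoicePitch(1.2);

// Sharpen the timbre a little; recommended range is [-0.4, 0.6], 0.0 = unchanged.
engine.setLocalVoiceFormant(0.3);
```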
abstract |
Sets the local voice equalization effect.
Call timing: This method can be called either before or after joining the channel.
| bandFrequency | The band frequency. The value ranges between 0 and 9; representing the respective 10-band center frequencies of the voice effects, including 31, 62, 125, 250, 500, 1k, 2k, 4k, 8k, and 16k Hz. See AUDIO_EQUALIZATION_BAND_FREQUENCY. |
| bandGain | The gain of each band in dB. The value ranges between -15 and 15. The default value is 0. |
|
abstract |
Sets the local voice reverberation.
The SDK provides an easier-to-use method, setAudioEffectPreset, to directly implement preset reverb effects such as pop, R&B, and KTV.
| reverbKey | The reverberation key. Agora provides five reverberation keys, see AUDIO_REVERB_TYPE. |
| value | The value of the reverberation key. |
|
abstract |
Sets the preset headphone equalization effect.
This method is mainly used in spatial audio effect scenarios. You can select the preset headphone equalizer to listen to the audio to achieve the expected audio experience.
| preset | The preset headphone equalization effect:
|
|
abstract |
Sets the low- and high-frequency parameters of the headphone equalizer.
In a spatial audio effect scenario, if the preset headphone equalization effect is not achieved after calling the setHeadphoneEQPreset method, you can further adjust the headphone equalization effect by calling this method.
| lowGain | The low-frequency parameters of the headphone equalizer. The value range is [-10,10]. The larger the value, the deeper the sound. |
| highGain | The high-frequency parameters of the headphone equalizer. The value range is [-10,10]. The larger the value, the sharper the sound. |
|
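A sketch of fine-tuning the headphone equalizer when a preset falls short, per the description above; `engine` is an assumed initialized RtcEngine:

```java
// Deepen the lows slightly and soften the highs.
// Both gains are in the documented range [-10, 10].
engine.setHeadphoneEQParameters(3, -2);  // lowGain = 3, highGain = -2
```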
abstract |
Sets an SDK preset audio effect.
Call this method to set an SDK preset audio effect for the local user who sends an audio stream. This audio effect does not change the gender characteristics of the original voice. After setting an audio effect, all users in the channel can hear the effect. Call timing: This method can be called either before or after joining the channel. To achieve better vocal effects, it is recommended that you call the following APIs before calling this method:
Call setAudioScenario to set the audio scenario to the high-quality audio scenario, namely AUDIO_SCENARIO_GAME_STREAMING (3).
Call setAudioProfile(int profile) to set the profile parameter to MUSIC_HIGH_QUALITY (4) or MUSIC_HIGH_QUALITY_STEREO (5).
Attention: Do not set the profile parameter in setAudioProfile(int profile) to SPEECH_STANDARD (1); otherwise, the method does not take effect.
After calling setAudioEffectPreset with enumerators other than ROOM_ACOUSTICS_3D_VOICE or PITCH_CORRECTION, do not call setAudioEffectParameters; otherwise, setAudioEffectPreset is overridden.
After calling setAudioEffectPreset, Agora does not recommend calling the following methods; otherwise, the effect set by setAudioEffectPreset will be overwritten: setVoiceBeautifierPreset, setLocalVoicePitch, setLocalVoiceEqualization, setLocalVoiceReverb, setVoiceBeautifierParameters, setVoiceConversionPreset.
This feature relies on the dynamic library libagora_audio_beauty_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| preset | Preset audio effects.
|
|
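The recommended call order above can be sketched as follows; `engine` is an assumed initialized RtcEngine, and the constant names follow the Constants class referenced in this document:

```java
// 1. High-quality audio scenario, as recommended before applying presets.
engine.setAudioScenario(Constants.AUDIO_SCENARIO_GAME_STREAMING);

// 2. High-quality music profile (not SPEECH_STANDARD, which disables the effect).
engine.setAudioProfile(Constants.AUDIO_PROFILE_MUSIC_HIGH_QUALITY);

// 3. Apply a preset audio effect, e.g. KTV room acoustics.
engine.setAudioEffectPreset(Constants.ROOM_ACOUSTICS_KTV);
```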
abstract |
Sets a preset voice beautifier effect.
Call this method to set a preset voice beautifier effect for the local user who sends an audio stream. After setting a voice beautifier effect, all users in the channel can hear the effect. You can set different voice beautifier effects for different scenarios. Call timing: This method can be called either before or after joining the channel. To achieve better vocal effects, it is recommended that you call the following APIs before calling this method:
Call setAudioScenario to set the audio scenario to the high-quality audio scenario, namely AUDIO_SCENARIO_GAME_STREAMING (3).
Call setAudioProfile(int profile) to set the profile parameter to MUSIC_HIGH_QUALITY (4) or MUSIC_HIGH_QUALITY_STEREO (5).
Attention: Do not set the profile parameter in setAudioProfile(int profile) to SPEECH_STANDARD (1); otherwise, the method does not take effect.
After calling setVoiceBeautifierPreset, Agora does not recommend calling the following methods; otherwise, the effect set by setVoiceBeautifierPreset will be overwritten: setAudioEffectPreset, setAudioEffectParameters, setLocalVoicePitch, setLocalVoiceEqualization, setLocalVoiceReverb, setVoiceBeautifierParameters, setVoiceConversionPreset.
This feature relies on the dynamic library libagora_audio_beauty_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| preset | The preset voice beautifier effect options.
|
|
abstract |
Sets a preset voice changing effect.
Call this method to set a preset voice changing effect for the local user who publishes an audio stream in a channel. After setting the voice changing effect, all users in the channel can hear the effect. You can set different voice changing effects for the user depending on different scenarios. Call timing: This method can be called either before or after joining the channel. To achieve better vocal effects, it is recommended that you call the following APIs before calling this method:
- Call setAudioScenario to set the audio scenario to the high-quality audio scenario, namely AUDIO_SCENARIO_GAME_STREAMING (3).
- Call setAudioProfile(int profile) to set the profile parameter to MUSIC_HIGH_QUALITY (4) or MUSIC_HIGH_QUALITY_STEREO (5).
Attention: Do not set the profile parameter in setAudioProfile(int profile) to SPEECH_STANDARD (1); otherwise, this method does not take effect.
After calling setVoiceConversionPreset, Agora recommends that you do not call the following methods; otherwise, the effect set by setVoiceConversionPreset is overwritten:
- setAudioEffectPreset
- setAudioEffectParameters
- setVoiceBeautifierPreset
- setVoiceBeautifierParameters
- setLocalVoicePitch
- setLocalVoiceFormant
- setLocalVoiceEqualization
- setLocalVoiceReverb
This method relies on the voice beautifier dynamic library libagora_audio_beauty_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.
| preset | The options for SDK preset voice conversion effects.
|
|
abstract |
Sets parameters for SDK preset audio effects.
Call this method to set the following parameters for the local user who sends an audio stream:
- Call setAudioScenario to set the audio scenario to the high-quality audio scenario, namely AUDIO_SCENARIO_GAME_STREAMING (3).
- Call setAudioProfile(int profile) to set the profile parameter to MUSIC_HIGH_QUALITY (4) or MUSIC_HIGH_QUALITY_STEREO (5).
Attention: Do not set the profile parameter in setAudioProfile(int profile) to SPEECH_STANDARD (1); otherwise, this method does not take effect.
After calling setAudioEffectParameters, Agora recommends that you do not call the following methods; otherwise, the effect set by setAudioEffectParameters is overwritten:
- setAudioEffectPreset
- setVoiceBeautifierPreset
- setLocalVoicePitch
- setLocalVoiceEqualization
- setLocalVoiceReverb
- setVoiceBeautifierParameters
- setVoiceConversionPreset
This method relies on the voice beautifier dynamic library libagora_audio_beauty_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.
| preset | The options for SDK preset audio effects:
|
| param1 | - If you set preset to ROOM_ACOUSTICS_3D_VOICE, param1 sets the cycle period of the 3D voice effect. The value range is [1,60] and the unit is seconds. The default value is 10, indicating that the voice moves around you every 10 seconds.
|
| param2 | - If you set preset to ROOM_ACOUSTICS_3D_VOICE, you need to set param2 to 0.
|
|
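The cycle-period constraint above ([1,60] seconds, default 10) can be sketched as a small validation helper. ThreeDVoiceParams and clampCycle are hypothetical names for illustration only; they are not part of the Agora SDK.

```java
// Hypothetical helper (not part of the Agora SDK): keeps the param1 value
// for ROOM_ACOUSTICS_3D_VOICE inside the documented range before it is
// passed to setAudioEffectParameters.
final class ThreeDVoiceParams {
    // Valid cycle period range documented above, in seconds.
    static final int MIN_CYCLE = 1;
    static final int MAX_CYCLE = 60;
    static final int DEFAULT_CYCLE = 10;

    // Clamps an arbitrary cycle value into the documented [1, 60] range.
    static int clampCycle(int cycleSeconds) {
        return Math.max(MIN_CYCLE, Math.min(MAX_CYCLE, cycleSeconds));
    }
}
```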
abstract |
Sets parameters for the preset voice beautifier effects.
Call this method to set a gender characteristic and a reverberation effect for the singing beautifier effect. This method sets parameters for the local user who sends an audio stream. After setting the audio parameters, all users in the channel can hear the effect. To achieve better vocal effects, it is recommended that you call the following APIs before calling this method:
setAudioScenario to set the audio scenario to high-quality audio scenario, namely AUDIO_SCENARIO_GAME_STREAMING (3).setAudioProfile(int profile) to set the profile parameter to MUSIC_HIGH_QUALITY (4) or MUSIC_HIGH_QUALITY_STEREO (5).profile parameter in setAudioProfile(int profile) to SPEECH_STANDARD (1), or the method does not take effect.setVoiceBeautifierParameters, Agora does not recommend calling the following methods, otherwise the effect set by setVoiceBeautifierParameters will be overwritten:setAudioEffectPresetsetAudioEffectParameterssetVoiceBeautifierPresetsetLocalVoicePitchsetLocalVoiceEqualizationsetLocalVoiceReverbsetVoiceConversionPresetlibagora_audio_beauty_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.| preset | The option for the preset audio effect:
|
| param1 | The gender characteristics options for the singing voice:
|
| param2 | The reverberation effect options for the singing voice:
|
|
abstract |
Sets parameters for SDK preset voice conversion.
| preset | The options for SDK preset audio effects. See #VOICE_CONVERSION_PRESET. |
| param1 | Reserved. |
| param2 | Reserved. |
|
abstract |
Enables or disables stereo panning for remote users.
Ensure that you call this method before joining a channel to enable stereo panning for remote users so that the local user can track the position of a remote user by calling setRemoteVoicePosition.
| enabled | Whether to enable stereo panning for remote users:
|
|
abstract |
Sets the 2D position (the position on the horizontal plane) of the remote user's voice.
This method sets the 2D position and volume of a remote user, so that the local user can easily hear and identify the remote user's position. When the local user calls this method to set the voice position of a remote user, the voice difference between the left and right channels allows the local user to track the real-time position of the remote user, creating a sense of space. This method applies to massive multiplayer online games, such as Battle Royale games.
Attention: Ensure that you call the enableSoundPositionIndication method before joining a channel; otherwise, this method does not take effect.
| uid | The user ID of the remote user. |
| pan | The voice position of the remote user. The value ranges from -1.0 to 1.0:
|
| gain | The volume of the remote user. The value ranges from 0.0 to 100.0. The default value is 100.0 (the original volume of the remote user). The smaller the value, the lower the volume. |
|
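The SDK applies the pan value internally, and the exact panning curve it uses is not documented here. As an illustration only of how a pan in [-1.0, 1.0] produces a left/right channel difference, the sketch below uses a common equal-power panning law; PanExample and equalPowerWeights are hypothetical names, not Agora APIs.

```java
// Illustration only: maps a pan value in [-1.0, 1.0] to {left, right}
// channel weights using a standard equal-power law. This is NOT the SDK's
// internal implementation; it just makes the channel difference concrete.
final class PanExample {
    // pan = -1.0: voice fully on the left; 0.0: in front; 1.0: fully right.
    static double[] equalPowerWeights(double pan) {
        double angle = (pan + 1.0) * Math.PI / 4.0; // maps [-1, 1] to [0, pi/2]
        return new double[] { Math.cos(angle), Math.sin(angle) }; // {left, right}
    }
}
```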
abstract |
Enables or disables the voice AI tuner.
The voice AI tuner supports enhancing sound quality and adjusting tone style. Applicable scenarios: Social entertainment scenes including online KTV, online podcast and live streaming in showrooms, where high sound quality is required. Call timing: This method can be called either before or after joining the channel.
| enabled | Whether to enable the voice AI tuner:
|
| type | Voice AI tuner sound types, see VOICE_AI_TUNER_TYPE. |
|
abstract |
Starts playing the music file.
This method supports playing URI files starting with content://. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. If the local music file does not exist, the SDK does not support the file format, or the SDK cannot access the music file URL, the SDK reports AUDIO_MIXING_REASON_CAN_NOT_OPEN. Call timing: You can call this method either before or after joining a channel. Related callbacks: A successful method call triggers the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback. When the audio mixing file playback finishes, the SDK triggers the onAudioMixingStateChanged (AUDIO_MIXING_STATE_STOPPED) callback on the local client.
Attention:
- Call playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos) instead to play such files.
- If you call this method on an emulator, ensure that the music file is in the /sdcard/ directory and the format is MP3.
| filePath | The file path. The SDK supports URI addresses starting with content://, paths starting with /assets/, URLs, and absolute paths of local files. The absolute path needs to be accurate to the file name and extension. Supported audio formats include MP3, AAC, M4A, MP4, WAV, and 3GP. See Supported Audio Formats. Attention: If you have preloaded an audio effect into memory by calling preloadEffect, ensure that the value of this parameter is the same as that of filePath in preloadEffect. |
| loopback | Whether to only play music files on the local client:
|
| cycle | The number of times the music file plays.
|
|
abstract |
Enables or disables the spatial audio effect.
After enabling the spatial audio effect, you can call setRemoteUserSpatialAudioParams to set the spatial audio effect parameters of the remote user.
Attention: This method relies on the spatial audio dynamic library libagora_spatial_audio_extension.so. If the dynamic library is deleted, the function cannot be enabled normally.
| enabled | Whether to enable the spatial audio effect:
|
|
abstract |
Sets the spatial audio effect parameters of the remote user.
Call this method after enableSpatialAudio. After successfully setting the spatial audio effect parameters of the remote user, the local user can hear the remote user with a sense of space.
| uid | The user ID. This parameter must be the same as the user ID passed in when the user joined the channel. |
| params | The spatial audio parameters. See SpatialAudioParams. |
|
abstract |
Options for subscribing to remote video streams.
When a remote user has enabled dual-stream mode, you can call this method to choose the option for subscribing to the video streams sent by the remote user. The default subscription behavior of the SDK for remote video streams depends on the type of registered video observer:
- If an IVideoFrameObserver observer is registered, the default is to subscribe to both raw data and encoded data.
- If an IVideoEncodedFrameObserver observer is registered, the default is to subscribe only to the encoded data.
- If both observers are registered, as when only an IVideoFrameObserver observer is registered, the default is to subscribe to both raw data and encoded data.
If you want to modify the default behavior, or set different subscription options for different uids, you can call this method to set it.
| uid | The user ID of the remote user. |
| options | The video subscription options. See VideoSubscriptionOptions. |
|
abstract |
Sets whether to enable the AI noise suppression function and set the noise suppression mode.
You can call this method to enable AI noise suppression function. Once enabled, the SDK automatically detects and reduces stationary and non-stationary noise from your audio on the premise of ensuring the quality of human voice. Stationary noise refers to noise signal with constant average statistical properties and negligibly small fluctuations of level within the period of observation. Common sources of stationary noises are:
Attention: This method relies on the AI noise suppression dynamic library libagora_ai_noise_suppression_extension.so. If the dynamic library is deleted, the function cannot be enabled.
| enabled | Whether to enable the AI noise suppression function:
|
| mode | The AI noise suppression modes:
|
|
abstract |
Starts playing the music file.
This method supports playing URI files starting with content://. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. If the local music file does not exist, the SDK does not support the file format, or the SDK cannot access the music file URL, the SDK reports AUDIO_MIXING_REASON_CAN_NOT_OPEN. Call timing: You can call this method either before or after joining a channel. Related callbacks: A successful method call triggers the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback. When the audio mixing file playback finishes, the SDK triggers the onAudioMixingStateChanged (AUDIO_MIXING_STATE_STOPPED) callback on the local client.
Attention:
- Call playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos) instead to play such files.
- If you call this method on an emulator, ensure that the music file is in the /sdcard/ directory and the format is MP3.
| filePath | File path:
|
| loopback | Whether to only play music files on the local client:
|
| cycle | The number of times the music file plays.
|
| startPos | The playback position (ms) of the music file. |
|
abstract |
Stops playing the music file.
After calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) to play a music file, you can call this method to stop the playback. If you only need to pause the playback, call pauseAudioMixing. Call timing: Call this method after joining a channel.
|
abstract |
Pauses playing and mixing the music file.
After calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) to play a music file, you can call this method to pause the playback. If you need to stop the playback, call stopAudioMixing. Call timing: Call this method after joining a channel.
|
abstract |
Resumes playing and mixing the music file.
After calling pauseAudioMixing to pause the playback, you can call this method to resume the playback. Call timing: Call this method after joining a channel.
|
abstract |
Adjusts the volume during audio mixing.
This method adjusts the audio mixing volume on both the local client and remote clients. Call timing: Call this method after startAudioMixing(String filePath, boolean loopback, int cycle, int startPos).
| volume | Audio mixing volume. The value ranges between 0 and 100. The default value is 100, which means the original volume. |
|
abstract |
Adjusts the volume of audio mixing for local playback.
Call timing: You need to call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
| volume | The volume of audio mixing for local playback. The value ranges between 0 and 100 (default). 100 represents the original volume. |
|
abstract |
Adjusts the volume of audio mixing for publishing.
This method adjusts the volume of audio mixing for publishing (sending to other users). Call timing: Call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
| volume | The volume of audio mixing for publishing. The value ranges between 0 and 100 (default). 100 represents the original volume. |
|
abstract |
Retrieves the audio mixing volume for local playback.
You can call this method to get the local playback volume of the mixed audio file, which helps in troubleshooting volume‑related issues. Call timing: Call this method after startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
|
abstract |
Retrieves the audio mixing volume for publishing.
This method helps troubleshoot audio volume‑related issues.
Call timing: Call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
|
abstract |
Retrieves the duration (ms) of the music file.
Retrieves the total duration (ms) of the audio. Call timing: Call this method after startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
|
abstract |
Retrieves the playback position (ms) of the music file.
Retrieves the playback position (ms) of the audio.
Call timing: Call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback. If you call getAudioMixingCurrentPosition multiple times, ensure that the time interval between calls is more than 500 ms.
|
abstract |
Sets the audio mixing position.
Call this method to set the playback position of the music file to a different starting position (the default plays from the beginning). Call timing: Call this method after startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
| pos | Integer. The playback position (ms). |
|
abstract |
Sets the channel mode of the current audio file.
In a stereo music file, the left and right channels can store different audio data. According to your needs, you can set the channel mode to original mode, left channel mode, right channel mode, or mixed channel mode. Applicable scenarios: For example, in the KTV scenario, the left channel of the music file stores the musical accompaniment, and the right channel stores the original singer's vocals. You can set according to actual needs:
Call timing: Call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
| mode | The channel mode. See AudioMixingDualMonoMode. |
|
abstract |
Sets the pitch of the local music file.
When a local music file is mixed with a local human voice, call this method to set the pitch of the local music file only. Call timing: You need to call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
| pitch | Sets the pitch of the local music file by the chromatic scale. The default value is 0, which means keeping the original pitch. The value ranges from -12 to 12, and the pitch value between consecutive values is a chromatic value. The greater the absolute value of this parameter, the higher or lower the pitch of the local music file. |
|
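The pitch parameter moves the music file by semitones on the chromatic scale. The helper below, a hypothetical illustration not found in the Agora SDK, converts a pitch value in [-12, 12] into the corresponding equal-temperament frequency ratio, so +12 doubles the pitch and -12 halves it.

```java
// Hypothetical helper: equal-temperament frequency ratio for a chromatic
// pitch shift, matching the [-12, 12] range accepted by setAudioMixingPitch.
final class PitchMath {
    static double frequencyRatio(int semitones) {
        if (semitones < -12 || semitones > 12) {
            throw new IllegalArgumentException("pitch must be in [-12, 12]");
        }
        // One octave is 12 semitones, so each semitone is a factor of 2^(1/12).
        return Math.pow(2.0, semitones / 12.0);
    }
}
```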
abstract |
Sets the playback speed of the current audio file.
Ensure that you call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged callback reporting the state as AUDIO_MIXING_STATE_PLAYING.
| speed | The playback speed. Agora recommends that you set this to a value between 50 and 400, defined as follows:
|
|
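Since 100 represents the original speed, the resulting playback time scales as originalDuration × 100 / speed. The sketch below is a hypothetical helper (SpeedMath is not an Agora API) that applies this relationship, with the recommended [50, 400] range enforced.

```java
// Hypothetical helper: estimates how long an audio file will take to play at
// a given speed value (100 = original speed, per the documentation above).
final class SpeedMath {
    static long playbackDurationMs(long originalDurationMs, int speed) {
        if (speed < 50 || speed > 400) {
            throw new IllegalArgumentException("recommended speed range is [50, 400]");
        }
        // Doubling the speed halves the playback time, and so on.
        return originalDurationMs * 100 / speed;
    }
}
```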
abstract |
Selects the audio track used during playback.
After getting the track index of the audio file, you can call this method to specify any track to play. For example, if different tracks of a multi-track file store songs in different languages, you can call this method to set the playback language.
Attention: For the supported audio file formats, see https://docs.agora.io/en/help/general-product-inquiry/audio_format#extended-audio-file-formats.
Call timing: Call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
| audioIndex | The audio track you want to specify. The value should be greater than 0 and less than the value returned by getAudioTrackCount. |
|
abstract |
Gets the index of audio tracks of the current music file.
Call timing: Call this method after calling startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) and receiving the onAudioMixingStateChanged (AUDIO_MIXING_STATE_PLAYING) callback.
|
abstract |
Gets the IAudioEffectManager class to manage the audio effect files.
Returns: The IAudioEffectManager instance.
|
abstract |
Starts client-side recording.
The SDK supports recording on the client side during a call. This method records the audio of all users in the channel and generates a recording file that contains all users' voices. The recording file only supports the following formats:
- .wav: Large file size with high audio fidelity.
- .aac: Smaller file size with some loss of audio fidelity.
Make sure that the specified directory exists and is writable. You must call this API after joinChannel(String token, String channelId, int uid, ChannelMediaOptions options). If you call leaveChannel(LeaveChannelOptions options) while recording is in progress, the recording stops automatically.
| filePath | The absolute path where the recording file is saved locally, including the file name and extension. For example: /sdcard/emulated/0/audio.aac. Note: Make sure that the specified path exists and is writable. |
| quality | Recording quality.
|
See Error Codes for details and troubleshooting.
|
abstract |
Starts client-side recording and applies recording configurations.
The SDK supports recording on the client side during a voice call or voice-only interaction. After calling this method, you can record the audio of users in the channel and obtain a recording file. The recording file supports the following formats only:
- WAV: Large file size with high fidelity.
- AAC: Small file size with low fidelity.
When the recording quality is AUDIO_RECORDING_QUALITY_MEDIUM, a 10-minute recording results in a file of approximately 2 MB. Recording automatically stops when the user leaves the channel. Call timing: You must call this method after joining the channel.
| config | Recording configuration. See AudioRecordingConfiguration for details. |
See Error Codes for details and troubleshooting tips.
|
abstract |
Stops client-side recording.
See Error Codes for details and troubleshooting suggestions.
|
abstract |
Starts an audio and video call loop test.
To test whether the user's local sending and receiving streams are normal, you can call this method to perform an audio and video call loop test, which tests whether the audio and video devices and the user's upstream and downstream networks are working properly. After starting the test, the user needs to make a sound or face the camera. The audio or video is output after about two seconds. If the audio playback is normal, the audio device and the user's upstream and downstream networks are working properly; if the video playback is normal, the video device and the user's upstream and downstream networks are working properly. Call timing: You can call this method either before or after joining a channel.
Attention: After calling this method, you must call stopEchoTest to end the test; otherwise, the user cannot perform the next audio and video call loop test and cannot join the channel.
| config | The configuration of the audio and video call loop test. See EchoTestConfiguration. |
|
abstract |
Stops the audio call test.
After calling startEchoTest, you must call this method to end the test; otherwise, the user cannot perform the next audio and video call loop test and cannot join the channel.
|
abstract |
Starts the last mile network probe test.
This method starts the last-mile network probe test before joining a channel to get the uplink and downlink last mile network statistics, including the bandwidth, packet loss, jitter, and round-trip time (RTT). Call timing: Do not call other methods before receiving the onLastmileQuality and onLastmileProbeResult callbacks. Otherwise, the callbacks may be interrupted. Related callbacks: After successfully calling this method, the SDK sequentially triggers the following 2 callbacks:
- onLastmileQuality: The SDK triggers this callback within two seconds depending on the network conditions. This callback rates the network conditions and is more closely linked to the user experience.
- onLastmileProbeResult: The SDK triggers this callback within 30 seconds depending on the network conditions. This callback returns the real-time statistics of the network conditions and is more objective.
| config | The configurations of the last-mile network probe test. See LastmileProbeConfig. |
|
abstract |
Stops the last mile network probe test.
|
abstract |
Sets the external audio source.
Call this method before calling joinChannel(String token, String channelId, String optionalInfo, int uid) and startPreview().
| enabled | Whether to enable the external audio source:
- true: Enable the external audio source.
- false: (Default) Disable the external audio source.
|
| sampleRate | The sample rate (Hz) of the external audio source, which can be set as 8000, 16000, 32000, 44100, or 48000. |
| channels | The number of audio channels of the external audio source:
|
|
abstract |
Sets the external audio sink.
After enabling the external audio sink, you can call pullPlaybackAudioFrame(byte[] data, int lengthInByte) to pull remote audio frames. The app can process the remote audio and play it with the audio effects that you want. Applicable scenarios: This method applies to scenarios where you want to use external audio data for playback. Call timing: Call this method before joining a channel.
Attention: After you enable the external audio sink, the app no longer obtains audio data from the onPlaybackAudioFrame callback.
| enabled | Whether to enable or disable the external audio sink:
|
| sampleRate | The sample rate (Hz) of the external audio sink, which can be set as 16000, 32000, 44100, or 48000. |
| channels | The number of audio channels of the external audio sink:
|
|
abstract |
Sets the EGL context for rendering remote video streams.
This method can replace the default remote EGL context within the SDK, making it easier to manage the EGL context. When the engine is destroyed, the SDK will automatically release the EGL context. Applicable scenarios: This method is suitable for using a custom video rendering method instead of the default SDK rendering method to render remote video frames in Texture format. Call timing: Call this method before joining a channel.
| eglContext | The EGL context for rendering remote video streams. |
|
abstract |
Pulls the remote audio data.
After a successful call of this method, the app pulls the decoded and mixed audio data for playback. Call timing: Call this method after joining a channel. Before calling this method, call setExternalAudioSink (enabled: true) to notify the app to enable and set the external audio rendering.
Attention: Both this method and the onPlaybackAudioFrame callback can be used to get audio data after remote mixing. After calling setExternalAudioSink to enable external audio rendering, the app will no longer be able to obtain data from the onPlaybackAudioFrame callback. Therefore, you should choose between this method and the onPlaybackAudioFrame callback based on your actual business requirements. The specific distinctions between them are as follows:
- If you use the onPlaybackAudioFrame callback, the SDK sends the audio data to the app through the callback. Any delay in processing the audio frames may result in audio jitter.
- This method is only used for retrieving audio data after remote mixing. If you need to get audio data from different audio processing stages such as capture and playback, you can register the corresponding callbacks by calling registerAudioFrameObserver.
| data | The remote audio data to be pulled. The data type is byte[]. |
| lengthInByte | The data length (byte). The value of this parameter is related to the audio duration, and the values of the sampleRate and channels parameters that you set in setExternalAudioSink. lengthInByte = sampleRate/1000 × 2 × channels × audio duration (ms). |
|
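The lengthInByte formula above can be written as a small helper. PullBufferSize is a hypothetical name for illustration; the factor 2 in the formula is the number of bytes per 16-bit PCM sample.

```java
// Sketch of the documented buffer-size formula for pullPlaybackAudioFrame:
// lengthInByte = sampleRate/1000 x 2 x channels x audio duration (ms).
final class PullBufferSize {
    static int lengthInByte(int sampleRate, int channels, int durationMs) {
        // 2 bytes per sample corresponds to 16-bit PCM.
        return sampleRate / 1000 * 2 * channels * durationMs;
    }
}
```

For example, pulling 10 ms of 48 kHz stereo audio requires a 1920-byte buffer.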
abstract |
Pulls the remote audio data.
Before calling this method, call the setExternalAudioSink(enabled: true) method to notify the app to enable and set the external audio sink. After a successful method call, the app pulls the decoded and mixed audio data for playback.
The difference between this method and the onPlaybackAudioFrame callback is as follows:
- onPlaybackAudioFrame: The SDK sends the audio data to the app through this callback. Any delay in processing the audio frames may result in audio jitter.
- pullPlaybackAudioFrame(byte[] data, int lengthInByte): The app pulls the remote audio data. After setting the audio data parameters, the SDK adjusts the frame buffer and avoids problems caused by jitter in the external audio playback.
| data | The remote audio data to be pulled. The data type is ByteBuffer. |
| lengthInByte | The length (in bytes) of the remote audio data. The value of this parameter is related to the audio duration, and the values of the sampleRate and channels parameters that you set in setExternalAudioSink. lengthInByte = sampleRate/1000 × 2 × channels × audio duration (ms). |
|
abstract |
Starts the audio capturing device test.
This method tests whether the audio capturing device works properly. After calling this method, the SDK triggers the onAudioVolumeIndication callback at the time interval set in this method, which reports uid = 0 and the volume information of the capturing device. The difference between this method and the startEchoTest method is that the former checks if the local audio capturing device is working properly, while the latter can check the audio and video devices and network conditions.
Attention: You must call stopRecordingDeviceTest to stop the test before joining a channel.
| indicationInterval | The interval (ms) for triggering the onAudioVolumeIndication callback. This value should be set to greater than 10; otherwise, you will not receive the onAudioVolumeIndication callback and the SDK returns the error code -2. Agora recommends that you set this value to 100. |
|
abstract |
Stops the audio capturing device test.
This method stops the audio capturing device test. You must call this method to stop the test after calling the startRecordingDeviceTest method.
|
abstract |
Starts the audio playback device test.
This method tests whether the audio device for local playback works properly. Once a user starts the test, the SDK plays an audio file specified by the user. If the user can hear the audio, the playback device works properly. After calling this method, the SDK triggers the onAudioVolumeIndication callback every 100 ms, reporting uid = 1 and the volume information of the playback device. The difference between this method and the startEchoTest method is that the former checks if the local audio playback device is working properly, while the latter can check the audio and video devices and network conditions.
Attention: You must call stopPlaybackDeviceTest to stop the test before joining a channel.
| audioFileName | The path of the audio file. The data format is string in UTF-8.
|
|
abstract |
Stops the audio playback device test.
This method stops the audio playback device test. You must call this method to stop the test after calling the startPlaybackDeviceTest method.
|
abstract |
Creates a custom audio track.
To publish a custom audio source, follow these steps:
1. Call this method to create a custom audio track and get the audio track ID.
2. Call joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel. In ChannelMediaOptions, set publishCustomAudioTrackId to the audio track ID that you want to publish, and set publishCustomAudioTrack to true.
3. Call pushExternalAudioFrame and specify trackId as the audio track ID set in step 2. You can then publish the corresponding custom audio source in the channel.
| trackType | The type of the custom audio track. See AudioTrackType. Attention: If AUDIO_TRACK_DIRECT is specified for this parameter, you must set publishMicrophoneTrack to false in ChannelMediaOptions when calling joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel; otherwise, joining the channel fails and returns the error code -2. |
| config | The configuration of the custom audio track. See AudioTrackConfig. |
|
abstract |
Destroys the specified audio track.
| trackId | The custom audio track ID returned in createCustomAudioTrack. |
|
abstract |
Sets the external audio source parameters.
Call timing: Call this method before joining a channel.
| enabled | Whether to enable the external audio source:
|
| sampleRate | The sample rate (Hz) of the external audio source which can be set as 8000, 16000, 32000, 44100, or 48000. |
| channels | The number of channels of the external audio source, which can be set as 1 (Mono) or 2 (Stereo). |
| localPlayback | Whether to play the external audio source:
|
| publish | Whether to publish audio to the remote users:
|
|
abstract |
Pushes the external audio data to the app.
| data | The audio buffer data. |
| timestamp | The timestamp of the audio data. |
|
abstract |
Pushes the external audio data to the app.
| data | The audio buffer data. |
| timestamp | The timestamp of the audio data. |
| trackId | The audio track ID. |
|
abstract |
Pushes the external audio frame to the SDK.
Call this method to push external audio frames through the audio track. Call timing: Before calling this method to push external audio data, perform the following steps:
1. Call createCustomAudioTrack to create a custom audio track and get the audio track ID.
2. Call joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel. In ChannelMediaOptions, set publishCustomAudioTrackId to the audio track ID that you want to publish, and set publishCustomAudioTrack to true.
| data | The external audio data. |
| timestamp | The timestamp (ms) of the external audio frame. This parameter is required. You can use it to restore the order of the captured audio frames, or synchronize audio and video frames in video-related scenarios (including scenarios where external video sources are used). |
| sampleRate | The sample rate (Hz) of the external audio source which can be set as 8000, 16000, 32000, 44100, or 48000. |
| channels | The number of channels of the external audio source, which can be set as 1 (Mono) or 2 (Stereo). |
| bytesPerSample | The number of bytes per sample. For PCM, this parameter is generally set to 16 bits (2 bytes). |
| trackId | The audio track ID. Set this parameter to the custom audio track ID returned in createCustomAudioTrack. |
|
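The buffer size you push per call follows directly from the sampleRate, channels, and bytesPerSample parameters above. The sketch below is plain Java with no SDK calls; the 10 ms frame length is an illustrative assumption, not an SDK requirement.

```java
// Computes the byte size of one PCM frame from the parameters described above.
public class PcmFrameSize {
    static int frameBytes(int sampleRate, int channels, int bytesPerSample, int frameMs) {
        // samples per frame = sampleRate * frameMs / 1000; each sample holds
        // `channels` interleaved values of `bytesPerSample` bytes each.
        int samplesPerFrame = (int) ((long) sampleRate * frameMs / 1000);
        return samplesPerFrame * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        // 48 kHz stereo 16-bit PCM, pushed in 10 ms frames:
        System.out.println(frameBytes(48000, 2, 2, 10)); // 1920 bytes
    }
}
```

For example, 16 kHz mono 16-bit audio in 10 ms frames needs 320-byte buffers per push.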
abstract |
Pushes the external audio data to the SDK.
| data | The audio buffer data. |
| timestamp | The timestamp of the audio data. |
| sampleRate | Sets the sample rate, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz. |
| channel | Sets the number of audio channels
|
| bytesPerSample | The number of bytes per sample. |
| trackId | The audio track ID. |
|
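The timestamp parameter above is used to restore frame order and synchronize audio with video. A minimal sketch (plain Java, no SDK calls) of generating monotonically increasing millisecond timestamps for fixed-size PCM frames; the frame sizes are example values:

```java
// Derives the per-frame timestamp increment (ms) for fixed-size PCM frames.
public class AudioTimestamps {
    static long timestampIncrementMs(int samplesPerFrame, int sampleRate) {
        // One frame covers samplesPerFrame / sampleRate seconds.
        return 1000L * samplesPerFrame / sampleRate;
    }

    public static void main(String[] args) {
        long base = 0; // capture start, in ms
        long inc = timestampIncrementMs(480, 48000); // 480 samples at 48 kHz = 10 ms
        for (int i = 0; i < 3; i++) {
            System.out.println(base + i * inc); // 0, 10, 20
        }
    }
}
```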
abstract |
Configures the external video source.
After calling this method to enable an external video source, you can call pushExternalVideoFrameById(AgoraVideoFrame frame, int videoTrackId) to push external video data to the SDK. Call timing: Call this method before joining a channel.
| enable | Whether to use the external video source:
|
| useTexture | Whether to use the external video frame in the Texture format.
|
| sourceType | Whether the external video frame is encoded. See ExternalVideoSourceType. |
|
abstract |
Sets the external video source.
Once the external video source is enabled, the SDK prepares to accept the external video frame.
| enable | Determines whether to enable the external video source.
|
| useTexture | Determines whether to use textured video data.
|
| sourceType | Determines whether the external video source is encoded.
|
| encodedOpt | Determines the encoded video track options, including the codec type, CC (congestion control) mode, and target bitrate. |
|
abstract |
Pushes the external raw video frame to the SDK.
After calling the setExternalVideoSource method and setting the enabled parameter to true, and the encodedFrame parameter to false, you can use this method to push the raw external video frame to the SDK. You can push the video frame either by calling this method or by calling pushExternalVideoFrame(AgoraVideoFrame frame). The difference is that this method supports video data in the texture format.
| frame | Video frame to be pushed. See VideoFrame. |
true: Success. false: Failure.
|
abstract |
Pushes the external raw video frame to the SDK through video tracks.
To publish a custom video source, see the following steps:
1. Call createCustomVideoTrack to create a video track and get the video track ID.
2. Call joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel. In ChannelMediaOptions, set customVideoTrackId to the video track ID that you want to publish, and set publishCustomVideoTrack to true.
3. Call this method and set videoTrackId as the video track ID set in step 2. You can then publish the corresponding custom video source in the channel.
You can push the video frame either by calling this method or by calling pushExternalVideoFrameById(AgoraVideoFrame frame, int videoTrackId). The difference is that this method supports video data in the texture format. If you only need to push one custom video source, you can instead call the setExternalVideoSource method and the SDK will automatically create a video track with the videoTrackId set to 0.
DANGER: After calling this method, even if you stop pushing external video frames to the SDK, the custom video stream will still be counted as the video duration usage and incur charges. Agora recommends that you take appropriate measures based on the actual situation to avoid such video billing:
Call destroyCustomVideoTrack to destroy the custom video track.
Call muteLocalVideoStream to cancel sending the video stream, or call updateChannelMediaOptions to set publishCustomVideoTrack to false.
| frame | Video frame to be pushed. See VideoFrame. |
| videoTrackId | The video track ID returned by calling the createCustomVideoTrack method. Note: If you only need to push one custom video source, set videoTrackId to 0. |
|
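The call order the steps above describe can be sketched without the SDK. In this plain-Java sketch the real RtcEngine calls (createCustomVideoTrack, joinChannel with ChannelMediaOptions, pushExternalVideoFrameById, destroyCustomVideoTrack) are represented only by string labels, so the sequence itself can be checked:

```java
import java.util.ArrayList;
import java.util.List;

// Models the documented publish flow for a custom video track as an ordered
// list of step labels; no Agora SDK code runs here.
public class CustomVideoTrackFlow {
    static List<String> publishFlow() {
        List<String> steps = new ArrayList<>();
        steps.add("createCustomVideoTrack");     // step 1: obtain the videoTrackId
        steps.add("joinChannel(options)");       // step 2: set customVideoTrackId and
                                                 //         publishCustomVideoTrack = true
        steps.add("pushExternalVideoFrameById"); // step 3: push frames with that track ID
        steps.add("destroyCustomVideoTrack");    // cleanup: stop duration/billing for the track
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(publishFlow());
    }
}
```

The cleanup step matters because, per the note above, the custom video stream keeps accruing duration usage until the track is destroyed or unpublished.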
abstract |
Pushes the external raw video frame to the SDK through video tracks.
To publish a custom video source, see the following steps:
1. Call createCustomVideoTrack to create a video track and get the video track ID.
2. Call joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel. In ChannelMediaOptions, set customVideoTrackId to the video track ID that you want to publish, and set publishCustomVideoTrack to true.
3. Call this method and set videoTrackId as the video track ID set in step 2. You can then publish the corresponding custom video source in the channel.
You can push the video frame either by calling this method or by calling pushExternalVideoFrameById(VideoFrame frame, int videoTrackId). The difference is that this method does not support video data in Texture format. If you only need to push one custom video source, you can instead call the setExternalVideoSource method and the SDK will automatically create a video track with the videoTrackId set to 0.
DANGER: After calling this method, even if you stop pushing external video frames to the SDK, the custom video stream will still be counted as the video duration usage and incur charges. Agora recommends that you take appropriate measures based on the actual situation to avoid such video billing:
Call destroyCustomVideoTrack to destroy the custom video track.
Call muteLocalVideoStream to cancel sending the video stream, or call updateChannelMediaOptions to set publishCustomVideoTrack to false.
| frame | The external raw video frame to be pushed. See AgoraVideoFrame. |
| videoTrackId | The video track ID returned by calling the createCustomVideoTrack method. Note: If you only need to push one custom video source, set videoTrackId to 0. |
|
abstract |
Pushes the encoded external video frame to the Agora SDK.
| data | The encoded external video data, which must be the direct buffer. |
| frameInfo | The encoded external video frame info. See EncodedVideoFrameInfo. |
0: Success, which means that the encoded external video frame is pushed successfully.
< 0: Failure, which means that the encoded external video frame fails to be pushed.
|
abstract |
Pushes the encoded external video frame to the SDK with the specified connection.
| data | The encoded external video data, which must be the direct buffer. |
| frameInfo | The encoded external video frame info. See EncodedVideoFrameInfo. |
| videoTrackId | The ID of the video track. |
0: Success, which means that the encoded external video frame is pushed successfully.
< 0: Failure, which means that the encoded external video frame fails to be pushed.
|
abstract |
Pushes the external raw video frame to the SDK.
After calling the setExternalVideoSource method and setting the enabled parameter to true, and the encodedFrame parameter to false, you can use this method to push the raw external video frame to the SDK. You can push the video frame either by calling this method or by calling pushExternalVideoFrame(VideoFrame frame). The difference is that this method does not support video data in Texture format.
| frame | The external raw video frame to be pushed. See AgoraVideoFrame. |
true: Success. false: Failure.
|
abstract |
Checks whether the device supports Texture encoding for video.
true: The device supports Texture encoding. false: The device does not support Texture encoding.
|
abstract |
Registers an audio frame observer object.
Call this method to register an audio frame observer object (register a callback). When you need the SDK to trigger the onMixedAudioFrame, onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing or onEarMonitoringAudioFrame callback, you need to use this method to register the callbacks. Call timing: Call this method before joining a channel.
| observer | The observer instance. See IAudioFrameObserver. Set the value as null to release the instance. Agora recommends calling this method after receiving onLeaveChannel to release the audio observer object. |
|
abstract |
Registers an encoded audio observer.
You can also call startAudioRecording [2/2] to set the recording type and quality of audio files, but Agora does not recommend using this method and startAudioRecording [2/2] at the same time. Only the method called later will take effect.
| config | Observer settings for the encoded audio. See AudioEncodedFrameObserverConfig. |
| observer | The encoded audio observer. See IAudioEncodedFrameObserver. |
|
abstract |
Sets the format of the captured raw audio data.
The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters set in this method: Sample interval (sec) = samplesPerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 (s). The SDK triggers the onRecordAudioFrame callback according to the sampling interval. Call timing: Call this method before joining a channel.
| sampleRate | The sample rate returned in the callback, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz. |
| channel | The number of audio channels. You can set the value as 1 or 2.
|
| mode | The use mode of the audio frame:
|
| samplesPerCall | The number of data samples, such as 1024 for the Media Push. |
|
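The sampling-interval constraint above can be checked before calling the SDK. A small plain-Java sketch of the quoted formula, interval (s) = samplesPerCall / (sampleRate × channel), with the required ≥ 0.01 s bound:

```java
// Validates candidate samplesPerCall / sampleRate / channel combinations
// against the documented minimum sampling interval of 0.01 s.
public class SampleInterval {
    static double intervalSec(int samplesPerCall, int sampleRate, int channel) {
        return (double) samplesPerCall / (sampleRate * channel);
    }

    public static void main(String[] args) {
        // 1024 samples at 48 kHz mono -> ~0.0213 s, which satisfies >= 0.01 s.
        double t = intervalSec(1024, 48000, 1);
        System.out.println(t >= 0.01); // true
    }
}
```

The same formula applies to the playback, mixed, and ear-monitoring variants of this method.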
abstract |
Sets the format of the raw audio playback data.
The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters set in this method: Sample interval (sec) = samplesPerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 (s). The SDK triggers the onPlaybackAudioFrame callback according to the sampling interval. Call timing: Call this method before joining a channel.
| sampleRate | The sample rate returned in the callback, which can be set as 8000, 16000, 24000, 32000, 44100, or 48000 Hz. |
| channel | The number of audio channels. You can set the value as 1 or 2.
|
| mode | The use mode of the audio frame:
|
| samplesPerCall | The number of data samples, such as 1024 for the Media Push. |
|
abstract |
Sets the format of the raw audio data after mixing for audio capture and playback.
The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters set in this method: Sample interval (sec) = samplesPerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 (s). The SDK triggers the onMixedAudioFrame callback according to the sampling interval. Call timing: Call this method before joining a channel.
| sampleRate | The sample rate returned in the callback, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz. |
| channel | The number of audio channels. You can set the value as 1 or 2.
|
| samplesPerCall | The number of data samples, such as 1024 for the Media Push. |
|
abstract |
Sets the format of the in-ear monitoring raw audio data.
This method is used to set the in-ear monitoring audio data format reported by the onEarMonitoringAudioFrame callback.
Before calling this method, you need to call enableInEarMonitoring(boolean enabled, int includeAudioFilters), and set includeAudioFilters to EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS or EAR_MONITORING_FILTER_NOISE_SUPPRESSION. The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters set in this method: Sample interval (sec) = samplesPerCall / (sampleRate × channel). Ensure that the sample interval is ≥ 0.01 (s). The SDK triggers the onEarMonitoringAudioFrame callback according to the sampling interval.
| sampleRate | The sample rate of the audio data reported in the onEarMonitoringAudioFrame callback, which can be set as 8,000, 16,000, 32,000, 44,100, or 48,000 Hz. |
| channel | The number of audio channels reported in the onEarMonitoringAudioFrame callback.
|
| mode | The use mode of the audio frame:
|
| samplesPerCall | The number of data samples reported in the onEarMonitoringAudioFrame callback, such as 1,024 for the Media Push. |
|
abstract |
Adds a watermark image to the local video.
This method adds a PNG watermark image to the local video stream in a live streaming session. Once the watermark image is added, all the users in the channel (CDN audience included) and the video capturing device can see and capture it. If you only want to add a watermark to the CDN live streaming, see startRtmpStreamWithTranscoding.
| watermark | The watermark image to be added to the local live streaming: AgoraImage. |
|
abstract |
Adds a watermark image to the local video.
This method adds a PNG watermark image to the local video in the live streaming. Once the watermark image is added, all the audience in the channel (CDN audience included), and the capturing device can see and capture it. The Agora SDK supports adding only one watermark image onto a live video stream. The newly added watermark image replaces the previous one. The watermark coordinates are dependent on the settings in the setVideoEncoderConfiguration method:
If the orientation mode of the encoding video ( ORIENTATION_MODE ) is fixed landscape mode or the adaptive landscape mode, the watermark uses the landscape orientation.
If the orientation mode of the encoding video ( ORIENTATION_MODE ) is fixed portrait mode or the adaptive portrait mode, the watermark uses the portrait orientation.
When setting the watermark position, ensure that the watermark region stays within the dimensions set in the setVideoEncoderConfiguration method; otherwise, the watermark image will be cropped. You can control the visibility of the watermark during preview by setting the visibleInPreview parameter when calling this method. However, whether it ultimately takes effect also depends on the position parameter you set when calling setupLocalVideo (the position of the video frame in the video link). Refer to the table below for details.
| Observation position | visibleInPreview value | Watermark visibility |
| --- | --- | --- |
| (Default) VIDEO_MODULE_POSITION_POST_CAPTURER | true | Yes |
| (Default) VIDEO_MODULE_POSITION_POST_CAPTURER | false | No |
| VIDEO_MODULE_POSITION_PRE_ENCODER | true | Yes |
| VIDEO_MODULE_POSITION_PRE_ENCODER | false | Yes |
Ensure that you have called enableVideo before calling this method. If you only want to add a watermark to the CDN live streaming, use the startRtmpStreamWithTranscoding method.
| watermarkUrl | The local file path of the watermark image to be added. This method supports adding a watermark image from the local absolute or relative file path. |
| options | The options of the watermark image to be added. See WatermarkOptions. |
|
abstract |
Adds a watermark image to the local video.
You can use this method to overlay a watermark image on the local video stream, and configure the watermark's position, size, and visibility in the preview using WatermarkConfig.
| config | Watermark configuration. See WatermarkConfig. |
|
abstract |
Removes the watermark image from the local video.
This method removes a previously added watermark image from the local video stream using the specified unique ID.
| id | The ID of the watermark image to be removed. |
|
abstract |
Removes the watermark image from the video stream.
|
abstract |
Sets the priority of a remote user's media stream.
Use this method with the setRemoteSubscribeFallbackOption method. If the fallback function is enabled for a subscribed stream, the SDK ensures the high-priority user gets the best possible stream quality.
| uid | The ID of the remote user. |
| userPriority | The priority of the remote user:
|
|
abstract |
Sets the fallback option for the subscribed video stream based on the network conditions.
An unstable network affects the audio and video quality in a video call or interactive live video streaming. If option is set as STREAM_FALLBACK_OPTION_VIDEO_STREAM_LOW or STREAM_FALLBACK_OPTION_AUDIO_ONLY, the SDK automatically switches the video from a high-quality stream to a low-quality stream or disables the video when the downlink network conditions cannot support both audio and video to guarantee the quality of the audio. Meanwhile, the SDK continuously monitors network quality and resumes subscribing to audio and video streams when the network quality improves. When the subscribed video stream falls back to an audio-only stream, or recovers from an audio-only stream to an audio-video stream, the SDK triggers the onRemoteSubscribeFallbackToAudioOnly callback.
| option | Fallback options for the subscribed stream. See StreamFallbackOptions. |
|
abstract |
Sets the fallback option for the subscribed video stream based on the network conditions.
An unstable network affects the audio and video quality in a video call or interactive live video streaming. If option is set as STREAM_FALLBACK_OPTION_VIDEO_STREAM_LOW or STREAM_FALLBACK_OPTION_AUDIO_ONLY, the SDK automatically switches the video from a high-quality stream to a low-quality stream or disables the video when the downlink network conditions cannot support both audio and video to guarantee the quality of the audio. Meanwhile, the SDK continuously monitors network quality and resumes subscribing to audio and video streams when the network quality improves. When the subscribed video stream falls back to an audio-only stream, or recovers from an audio-only stream to an audio-video stream, the SDK triggers the onRemoteSubscribeFallbackToAudioOnly callback.
| option | Fallback options for the subscribed stream.
|
|
abstract |
Sets the high priority user list and related fallback option for the remotely subscribed video stream based on the network conditions in NASA2.
| uidList | The ID list of high-priority users. |
| option | The remote subscribe fallback option for high-priority users. |
|
abstract |
Enables or disables dual-stream mode on the sender side.
Dual streams are a pairing of a high-quality video stream and a low-quality video stream:
After you enable dual-stream mode, you can call setRemoteVideoStreamType(int uid, int streamType) to choose to receive either the high-quality video stream or the low-quality video stream on the subscriber side. To enable dual-stream mode in a multi-channel scenario, call the enableDualStreamModeEx method.
| enabled | Whether to enable dual-stream mode:
|
|
abstract |
Sets the dual-stream mode on the sender side and the low-quality video stream.
You can call this method to enable or disable the dual-stream mode on the publisher side. Dual streams are a pairing of a high-quality video stream and a low-quality video stream:
After you enable dual-stream mode, you can call setRemoteVideoStreamType(int uid, int streamType) to choose to receive either the high-quality video stream or the low-quality video stream on the subscriber side. To enable dual-stream mode in a multi-channel scenario, call the enableDualStreamModeEx method.
| enabled | Whether to enable dual-stream mode:
|
| streamConfig | The configuration of the low-quality video stream. See SimulcastStreamConfig.Note: When setting mode to DISABLE_SIMULCAST_STREAM, setting streamConfig will not take effect. |
|
abstract |
Sets the dual-stream mode on the sender side.
The SDK defaults to enabling low-quality video stream adaptive mode ( AUTO_SIMULCAST_STREAM ) on the sender side, which means the sender does not actively send low-quality video stream. The receiving end with the role of the host can initiate a low-quality video stream request by calling setRemoteVideoStreamType(int uid, int streamType), and upon receiving the request, the sending end automatically starts sending low-quality stream.
If you want to modify this behavior, you can call this method and set mode to DISABLE_SIMULCAST_STREAM (never send low-quality video streams) or ENABLE_SIMULCAST_STREAM (always send low-quality video streams). You can also call this method again with mode set to AUTO_SIMULCAST_STREAM. The relationship between this method and enableDualStreamMode(boolean enabled) is as follows:
When you set mode to DISABLE_SIMULCAST_STREAM, it has the same effect as calling enableDualStreamMode(boolean enabled) and setting enabled to false.
When you set mode to ENABLE_SIMULCAST_STREAM, it has the same effect as calling enableDualStreamMode(boolean enabled) and setting enabled to true.
| mode | The mode in which the video stream is sent. See SimulcastStreamMode. |
|
abstract |
Sets dual-stream mode configuration on the sender side.
The SDK defaults to enabling low-quality video stream adaptive mode ( AUTO_SIMULCAST_STREAM ) on the sender side, which means the sender does not actively send low-quality video stream. The receiving end with the role of the host can initiate a low-quality video stream request by calling setRemoteVideoStreamType(int uid, int streamType), and upon receiving the request, the sending end automatically starts sending low-quality stream.
If you want to modify this behavior, you can call this method and set mode to DISABLE_SIMULCAST_STREAM (never send low-quality video streams) or ENABLE_SIMULCAST_STREAM (always send low-quality video streams). You can also call this method again with mode set to AUTO_SIMULCAST_STREAM. The difference between this method and setDualStreamMode(Constants.SimulcastStreamMode mode) is that this method can also configure the low-quality video stream, and the SDK sends the stream according to the configuration in streamConfig. The relationship between this method and enableDualStreamMode(boolean enabled, SimulcastStreamConfig streamConfig) is as follows:
When you set mode to DISABLE_SIMULCAST_STREAM, it has the same effect as calling enableDualStreamMode(boolean enabled, SimulcastStreamConfig streamConfig) and setting enabled to false.
When you set mode to ENABLE_SIMULCAST_STREAM, it has the same effect as calling enableDualStreamMode(boolean enabled, SimulcastStreamConfig streamConfig) and setting enabled to true.
| mode | The mode in which the video stream is sent. See SimulcastStreamMode. |
| streamConfig | The configuration of the low-quality video stream. See SimulcastStreamConfig.Note: When setting mode to DISABLE_SIMULCAST_STREAM, setting streamConfig will not take effect. |
|
abstract |
Sets the simulcast video stream configuration.
You can call this method to set video streams with different resolutions for the same video source. The subscribers can call setRemoteVideoStreamType(int uid, int streamType) to select which stream layer to receive. The broadcaster can publish up to four layers of video streams: one main stream (highest resolution) and three additional streams of different quality levels.
| simulcastConfig | Configuration for different video stream layers. See SimulcastConfig. |
|
abstract |
Sets the maximum frame rate for rendering local video.
Applicable scenarios: In scenarios where the requirements for video rendering frame rate are not high (such as screen sharing or online education), you can call this method to set the maximum frame rate for local video rendering. The SDK will attempt to keep the actual frame rate of local rendering close to this value, which helps to reduce CPU consumption and improve system performance. Call timing: You can call this method either before or after joining a channel.
| sourceType | The type of the video source. See VideoSourceType. |
| targetFps | The target frame rate (fps) for rendering the local video. Supported values are: 1, 7, 10, 15, 24, 30, 60. CAUTION: Set this parameter to a value lower than the actual video frame rate; otherwise, the settings do not take effect. |
|
abstract |
Sets the maximum frame rate for rendering remote video.
Applicable scenarios: In scenarios where the video rendering frame rate is not critical (e.g., screen sharing, online education) or when the remote users are using mid-to-low-end devices, you can call this method to set the maximum frame rate for video rendering on the remote client. The SDK will attempt to render the actual frame rate as close as possible to this value, which helps to reduce CPU consumption and improve system performance. Call timing: You can call this method either before or after joining a channel.
| targetFps | The target frame rate (fps) for rendering the remote video. Supported values are: 1, 7, 10, 15, 24, 30, 60. CAUTION: Set this parameter to a value lower than the actual video frame rate; otherwise, the settings do not take effect. |
|
abstract |
Sets the video stream type to subscribe to.
Depending on the default behavior of the sender and the specific settings when calling setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig), the scenarios for the receiver calling this method are as follows:
The SDK enables low-quality video stream adaptive mode ( AUTO_SIMULCAST_STREAM ) on the sender side by default, meaning only the high-quality video stream is transmitted. Only the receiver with the role of the host can call this method to initiate a low-quality video stream request. Once the sender receives the request, it starts automatically sending the low-quality video stream. At this point, all users in the channel can call this method to switch to low-quality video stream subscription mode.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to DISABLE_SIMULCAST_STREAM (never send low-quality video stream), then calling this method will have no effect.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to ENABLE_SIMULCAST_STREAM (always send low-quality video stream), both the host and audience receivers can call this method to switch to low-quality video stream subscription mode.
The SDK will dynamically adjust the size of the corresponding video stream based on the size of the video window to save bandwidth and computing resources. The default aspect ratio of the low-quality video stream is the same as that of the high-quality video stream. According to the current aspect ratio of the high-quality video stream, the system will automatically allocate the resolution, frame rate, and bitrate of the low-quality video stream. If you call both this method and setRemoteDefaultVideoStreamType(int streamType), the setting of this method takes effect.
| uid | The user ID. |
| streamType | The video stream type, see VideoStreamType. |
|
abstract |
Sets the video stream type to subscribe to.
Depending on the default behavior of the sender and the specific settings when calling setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig), the scenarios for the receiver calling this method are as follows:
The SDK enables low-quality video stream adaptive mode ( AUTO_SIMULCAST_STREAM ) on the sender side by default, meaning only the high-quality video stream is transmitted. Only the receiver with the role of the host can call this method to initiate a low-quality video stream request. Once the sender receives the request, it starts automatically sending the low-quality video stream. At this point, all users in the channel can call this method to switch to low-quality video stream subscription mode.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to DISABLE_SIMULCAST_STREAM (never send low-quality video stream), then calling this method will have no effect.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to ENABLE_SIMULCAST_STREAM (always send low-quality video stream), both the host and audience receivers can call this method to switch to low-quality video stream subscription mode.
The SDK will dynamically adjust the size of the corresponding video stream based on the size of the video window to save bandwidth and computing resources. The default aspect ratio of the low-quality video stream is the same as that of the high-quality video stream. According to the current aspect ratio of the high-quality video stream, the system will automatically allocate the resolution, frame rate, and bitrate of the low-quality video stream. If you call both this method and setRemoteDefaultVideoStreamType(int streamType), the setting of this method takes effect.
| uid | The user ID. |
| streamType | The video stream type:
|
|
abstract |
Sets the default video stream type to subscribe to.
Depending on the default behavior of the sender and the specific settings when calling setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig), the scenarios for the receiver calling this method are as follows:
The SDK enables low-quality video stream adaptive mode ( AUTO_SIMULCAST_STREAM ) on the sender side by default, meaning only the high-quality video stream is transmitted. Only the receiver with the role of the host can call this method to initiate a low-quality video stream request. Once the sender receives the request, it starts automatically sending the low-quality video stream. At this point, all users in the channel can call this method to switch to low-quality video stream subscription mode.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to DISABLE_SIMULCAST_STREAM (never send low-quality video stream), then calling this method will have no effect.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to ENABLE_SIMULCAST_STREAM (always send low-quality video stream), both the host and audience receivers can call this method to switch to low-quality video stream subscription mode.
The SDK will dynamically adjust the size of the corresponding video stream based on the size of the video window to save bandwidth and computing resources. The default aspect ratio of the low-quality video stream is the same as that of the high-quality video stream. According to the current aspect ratio of the high-quality video stream, the system will automatically allocate the resolution, frame rate, and bitrate of the low-quality video stream. Call timing: Call this method before joining a channel. The SDK does not support changing the default subscribed video stream type after joining a channel. If you call both this method and setRemoteVideoStreamType(int uid, int streamType), the setting of setRemoteVideoStreamType(int uid, int streamType) takes effect.
| streamType | The video stream type, see VideoStreamType. |
|
abstract |
Sets the default video stream type to subscribe to.
Depending on the default behavior of the sender and the specific settings when calling setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig), the scenarios for the receiver calling this method are as follows:
The SDK enables low-quality video stream adaptive mode ( AUTO_SIMULCAST_STREAM ) on the sender side by default, meaning only the high-quality video stream is transmitted. Only the receiver with the role of the host can call this method to initiate a low-quality video stream request. Once the sender receives the request, it starts automatically sending the low-quality video stream. At this point, all users in the channel can call this method to switch to low-quality video stream subscription mode.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to DISABLE_SIMULCAST_STREAM (never send low-quality video stream), then calling this method will have no effect.
If the sender calls setDualStreamMode(Constants.SimulcastStreamMode mode, SimulcastStreamConfig streamConfig) and sets mode to ENABLE_SIMULCAST_STREAM (always send low-quality video stream), both the host and audience receivers can call this method to switch to low-quality video stream subscription mode.
The SDK will dynamically adjust the size of the corresponding video stream based on the size of the video window to save bandwidth and computing resources. The default aspect ratio of the low-quality video stream is the same as that of the high-quality video stream. According to the current aspect ratio of the high-quality video stream, the system will automatically allocate the resolution, frame rate, and bitrate of the low-quality video stream. Call timing: Call this method before joining a channel. The SDK does not support changing the default subscribed video stream type after joining a channel. If you call both this method and setRemoteVideoStreamType(int uid, int streamType), the setting of setRemoteVideoStreamType(int uid, int streamType) takes effect.
| streamType | The default video-stream type:
|
|
abstract |
Sets the blocklist of subscriptions for audio streams.
You can call this method to specify the audio streams of a user that you do not want to subscribe to.
The blocklist is not affected by the settings of muteRemoteAudioStream, muteAllRemoteAudioStreams, and autoSubscribeAudio in ChannelMediaOptions.
| uidList | The user ID list of users that you do not want to subscribe to. If you want to specify the audio streams of a user that you do not want to subscribe to, add the user ID in this list. If you want to remove a user from the blocklist, you need to call the setSubscribeAudioBlocklist method to update the user ID list; this means you only add the uid of users that you do not want to subscribe to in the new user ID list. |
|
abstract |
Sets the allowlist of subscriptions for audio streams.
You can call this method to specify the audio streams of a user that you want to subscribe to.
The allowlist is not affected by the settings of muteRemoteAudioStream, muteAllRemoteAudioStreams, and autoSubscribeAudio in ChannelMediaOptions.
| uidList | The user ID list of users that you want to subscribe to. If you want to specify the audio streams of a user for subscription, add the user ID in this list. If you want to remove a user from the allowlist, you need to call the setSubscribeAudioAllowlist method to update the user ID list; this means you only add the uid of users that you want to subscribe to in the new user ID list. |
|
abstract |
Sets the blocklist of subscriptions for video streams.
You can call this method to specify the video streams of a user that you do not want to subscribe to.
The blocklist is not affected by the settings of muteRemoteVideoStream, muteAllRemoteVideoStreams, and autoSubscribeVideo in ChannelMediaOptions.
| uidList | The user ID list of users that you do not want to subscribe to. If you want to specify the video streams of a user that you do not want to subscribe to, add the user ID of that user in this list. If you want to remove a user from the blocklist, you need to call the setSubscribeVideoBlocklist method to update the user ID list; this means you only add the uid of users that you do not want to subscribe to in the new user ID list. |
|
abstract |
Sets the allowlist of subscriptions for video streams.
You can call this method to specify the video streams of a user that you want to subscribe to.
See also: muteRemoteVideoStream, muteAllRemoteVideoStreams, and autoSubscribeVideo in ChannelMediaOptions.
| uidList | The user ID list of users that you want to subscribe to. If you want to subscribe to the video streams of a user, add that user's ID to this list. If you want to remove a user from the allowlist, call setSubscribeVideoAllowlist again with an updated list; because each call replaces the previous list, include only the user IDs of users you still want to subscribe to. |
|
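Because each call to these four methods replaces the previously set list rather than patching it, it helps to keep the complete set of blocked or allowed user IDs on the app side. The sketch below is a hypothetical app-side helper (not part of the SDK) that maintains such a set and produces the complete array you would pass to setSubscribeAudioBlocklist or its allowlist/video counterparts.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical helper: the SDK methods take the complete list on every
// call, so we track the full set of uids locally.
class SubscribeList {
    private final Set<Integer> uids = new LinkedHashSet<>();

    // Add a uid and return the complete array to pass to the SDK.
    int[] add(int uid) { uids.add(uid); return toArray(); }

    // Remove a uid; pass the returned array to the SDK again so the
    // removal takes effect.
    int[] remove(int uid) { uids.remove(uid); return toArray(); }

    private int[] toArray() {
        int[] out = new int[uids.size()];
        int i = 0;
        for (int u : uids) out[i++] = u;
        return out;
    }
}
```

For example, passing the result of `list.remove(42)` to setSubscribeAudioBlocklist would unblock user 42, because the new call overwrites the previously set list.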
abstract |
Enables or disables the built-in encryption.
After the user leaves the channel, the SDK automatically disables the built-in encryption. To enable the built-in encryption, call this method before the user joins the channel again. Applicable scenarios: Scenarios with higher security requirements. Call timing: Call this method before joining a channel.
| enabled | Whether to enable built-in encryption:
|
| config | Built-in encryption configurations. See EncryptionConfig. |
Note: Ensure that you have created the RtcEngine instance before calling this method.
|
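As a sketch of preparing key material for EncryptionConfig, the snippet below generates a random salt such as the one stronger encryption modes take alongside the encryption key. The 32-byte length and the server-side distribution pattern are assumptions based on common usage, not taken from this reference; check EncryptionConfig for the exact requirements.

```java
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical helper: generate a random salt for built-in encryption.
// The 32-byte length is an assumption; verify it against EncryptionConfig.
class EncryptionSalt {
    static byte[] newSalt() {
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Salts are usually distributed to clients from a server, often
    // Base64-encoded for transport.
    static String encoded(byte[] salt) {
        return Base64.getEncoder().encodeToString(salt);
    }
}
```

In a real app, every user joining the channel must be configured with the same key and salt for the encrypted streams to be decodable.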
abstract |
Starts pushing media streams to a CDN without transcoding.
Agora recommends that you use the server-side Media Push function. For details, see Use RESTful API. You can call this method to push an audio or video stream to the specified CDN address. This method can push media streams to only one CDN address at a time, so if you need to push streams to multiple addresses, call this method multiple times. After you call this method, the SDK triggers the onRtmpStreamingStateChanged callback on the local client to report the state of the streaming.
Note: If you want to retry pushing streams after a failed push, call stopRtmpStream first, then call this method again; otherwise, the SDK returns the same error code as the last failed push.
| url | The address of Media Push. The format is RTMP or RTMPS. The character length cannot exceed 1024 bytes. Special characters such as Chinese characters are not supported. |
|
abstract |
Starts Media Push and sets the transcoding configuration.
Agora recommends that you use the server-side Media Push function. For details, see Use RESTful API. You can call this method to push a live audio-and-video stream to the specified CDN address and set the transcoding configuration. This method can push media streams to only one CDN address at a time, so if you need to push streams to multiple addresses, call this method multiple times. Under one Agora project, the maximum number of concurrent tasks to push media streams is 200 by default. If you need a higher quota, contact technical support. After you call this method, the SDK triggers the onRtmpStreamingStateChanged callback on the local client to report the state of the streaming.
Note: If you want to retry pushing streams after a failed push, call stopRtmpStream first, then call this method again; otherwise, the SDK returns the same error code as the last failed push.
| url | The address of Media Push. The format is RTMP or RTMPS. The character length cannot exceed 1024 bytes. Special characters such as Chinese characters are not supported. |
| transcoding | The transcoding configuration for Media Push. See LiveTranscoding. |
|
abstract |
Updates the transcoding configuration.
Agora recommends that you use the server-side Media Push function. For details, see Use RESTful API. After you start pushing media streams to CDN with transcoding, you can dynamically update the transcoding configuration according to the scenario. The SDK triggers the onTranscodingUpdated callback after the transcoding configuration is updated.
| transcoding | The transcoding configuration for Media Push. See LiveTranscoding. |
|
abstract |
Stops pushing media streams to a CDN.
Agora recommends that you use the server-side Media Push function. For details, see Use RESTful API. You can call this method to stop the live stream on the specified CDN address. This method can stop pushing media streams to only one CDN address at a time, so if you need to stop pushing streams to multiple addresses, call this method multiple times. After you call this method, the SDK triggers the onRtmpStreamingStateChanged callback on the local client to report the state of the streaming.
| url | The address of Media Push. The format is RTMP or RTMPS. The character length cannot exceed 1024 bytes. Special characters such as Chinese characters are not supported. |
|
abstract |
Creates a data stream.
You can call this method to create a data stream and improve the reliability and ordering of data transmission. Call timing: You can call this method either before or after joining a channel. Related callbacks: After setting reliable to true, if the recipient does not receive the data within five seconds, the SDK triggers the onStreamMessageError callback and returns an error code.
Note: Each user can create up to five data streams during the lifecycle of RtcEngine. The data stream is destroyed when you leave the channel, and must be recreated if needed. If you need a more comprehensive solution for low-latency, high-concurrency, and scalable real-time messaging and status synchronization, it is recommended to use Signaling.
| reliable | Sets whether the recipients are guaranteed to receive the data stream within five seconds:
|
| ordered | Sets whether the recipients receive the data stream in the sent order:
|
|
abstract |
Creates a data stream.
Compared to createDataStream(boolean reliable, boolean ordered), this method does not guarantee the reliability of data transmission. If a data packet is not received five seconds after it was sent, the SDK directly discards the data. Call timing: You can call this method either before or after joining a channel.
Note: Each user can create up to five data streams during the lifecycle of RtcEngine. The data stream is destroyed when you leave the channel, and must be recreated if needed. If you need a more comprehensive solution for low-latency, high-concurrency, and scalable real-time messaging and status synchronization, it is recommended to use Signaling.
| config | The configurations for the data stream. See DataStreamConfig. |
|
abstract |
Sends data stream messages.
After calling createDataStream(DataStreamConfig config), you can call this method to send data stream messages to all users in the channel. The SDK has the following restrictions on this method:
A successful call of this method triggers the onStreamMessage callback on the remote client, from which the remote user gets the stream message. A failed method call triggers the onStreamMessageError callback on the remote client. If you need a more comprehensive messaging solution, consider Signaling. Call this method after calling createDataStream(DataStreamConfig config) and joining the channel.
| streamId | The data stream ID. You can get the data stream ID by calling createDataStream(DataStreamConfig config). |
| message | The message to be sent. |
|
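Data stream messages are size-limited per packet, so larger payloads must be split by the app before calling sendStreamMessage. The sketch below is a hypothetical chunker; the 1024-byte limit used in the example is an assumption for illustration — check the current per-packet size limit for sendStreamMessage before relying on it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: split a payload into chunks no larger than
// maxBytes, suitable for sending one chunk per sendStreamMessage call.
class StreamChunker {
    static List<byte[]> split(byte[] payload, int maxBytes) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += maxBytes) {
            int end = Math.min(off + maxBytes, payload.length);
            chunks.add(Arrays.copyOfRange(payload, off, end));
        }
        return chunks;
    }
}
```

Each chunk would then be sent as one message on the stream, keeping within whatever per-second packet limits the SDK enforces.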
abstract |
Sends a reliable message to a remote uid in the channel.
@technical preview
| uid | The remote user ID. |
| type | The Reliable Data Transmission tunnel message type. |
| message | The data to be sent. |
|
abstract |
Sends a media control message.
@technical preview
| uid | The remote user ID. If the uid is set to 0, the message is broadcast to the entire channel. |
| message | The data to be sent, with a maximum size of 1024 bytes. |
|
abstract |
Sets the video quality preferences.
| preferFrameRateOverImageQuality | The video preference to be set:
|
|
abstract |
Sets the local video mirror mode.
| mode | The local video mirror mode:
|
|
static |
Gets the recommended encoder type.
|
abstract |
Switches between front and rear cameras.
You can call this method to dynamically switch cameras based on the actual camera availability during the app's runtime, without having to restart the video stream or reconfigure the video source. This method and switchCamera(String cameraId) are both used to switch cameras. The difference is that switchCamera(String cameraId) switches to a specific camera by specifying the camera ID, while this method switches the direction of the camera (front or rear). Call timing: This method must be called after the camera is successfully enabled, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
Note: This method applies only when the video source type is set to VIDEO_SOURCE_CAMERA_PRIMARY (0) when calling startCameraCapture.
|
abstract |
Switches cameras by camera ID.
You can call this method to dynamically switch cameras based on the actual camera availability during the app's runtime, without having to restart the video stream or reconfigure the video source. This method and switchCamera() both are used to switch cameras. The difference is that switchCamera() switches the camera direction (front or rear), while this method switches to a specific camera by specifying the camera ID. Call timing: This method must be called after the camera is successfully enabled, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
Note: This method applies only when the video source type is set to VIDEO_SOURCE_CAMERA_PRIMARY (0) when calling startCameraCapture.
| cameraId | The camera ID. You can get the camera ID through the Android native system API, see Camera.open() and CameraManager.getCameraIdList() for details. |
|
abstract |
Checks whether the device supports camera zoom.
Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
true: The device supports camera zoom.false: The device does not support camera zoom.
|
abstract |
Checks whether the device supports camera flash.
Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1). If you want to check whether the rear camera supports flash, call switchCamera() before this method.
true: The device supports camera flash. false: The device does not support camera flash.
|
abstract |
Checks whether the device supports the manual focus function.
Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
true: The device supports the manual focus function. false: The device does not support the manual focus function.
|
abstract |
Checks whether the device supports manual exposure.
Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
true: The device supports manual exposure. false: The device does not support manual exposure.
|
abstract |
Checks whether the device supports the face auto-focus function.
Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
true: The device supports the face auto-focus function. false: The device does not support the face auto-focus function.
|
abstract |
Checks whether the device camera supports face detection.
true: The device camera supports face detection.false: The device camera does not support face detection.
|
abstract |
Queries whether the current camera supports adjusting exposure value.
Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1). Before calling setCameraExposureFactor, Agora recommends that you call this method to query whether the current camera supports adjusting the exposure value. See also: setCameraCapturerConfiguration.
true: The device supports adjusting the exposure value. false: The device does not support adjusting the exposure value.
|
abstract |
Sets the camera zoom factor.
Call timing: Call this method after calling enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
| factor | The camera zoom factor. You can get the maximum zoom factor supported by the device by calling the getCameraMaxZoomFactor method. |
Returns the set camera zoom factor, if the method call succeeds.
|
abstract |
Gets the maximum zoom ratio supported by the camera.
Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
|
abstract |
Sets the camera manual focus position.
Call timing: Call this method after calling enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1). After a successful call of this method, the SDK triggers the onCameraFocusAreaChanged callback.
| positionX | The horizontal coordinate of the touchpoint in the view. |
| positionY | The vertical coordinate of the touchpoint in the view. |
|
abstract |
Sets the camera exposure position.
Call timing: Call this method after calling enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1). After a successful call of this method, the SDK triggers the onCameraExposureAreaChanged callback.
| positionXinView | The horizontal coordinate of the touchpoint in the view. |
| positionYinView | The vertical coordinate of the touchpoint in the view. |
|
abstract |
Enables or disables face detection for the local user.
Call timing: This method needs to be called after the camera is started (for example, by calling startPreview(Constants.VideoSourceType sourceType) or enableVideo ). Related callbacks: Once face detection is enabled, the SDK triggers the onFacePositionChanged callback to report the face information of the local user, which includes the following:
| enabled | Whether to enable face detection for the local user:
|
|
abstract |
Enables the camera flash.
Call timing: Call this method after calling enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
| isOn | Whether to turn on the camera flash:
|
|
abstract |
Enables the camera auto-face focus function.
The SDK disables face autofocus by default. To enable face autofocus, call this method. Call timing: This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1).
| enabled | Whether to enable face autofocus:
|
|
abstract |
Sets the camera exposure value.
Insufficient or excessive lighting in the shooting environment can affect the image quality of video capture. To achieve optimal video quality, you can use this method to adjust the camera's exposure value.
Call timing: Call this method after calling enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LOCAL_VIDEO_STREAM_STATE_CAPTURING (1). Before calling this method, Agora recommends calling isCameraExposureSupported to check whether the current camera supports adjusting the exposure value. See also: setCameraCapturerConfiguration.
| factor | The camera exposure value. The default value is 0, which means using the default exposure of the camera. The larger the value, the greater the exposure. When the video image is overexposed, you can reduce the exposure value; when the video image is underexposed and the dark details are lost, you can increase the exposure value. If the exposure value you specified is beyond the range supported by the device, the SDK will automatically adjust it to the actual supported range of the device. The value range is [-20, 20]. |
|
abstract |
Retrieves the call ID.
When a user joins a channel on a client, a callId is generated to identify the call from the client. You can call this method to get callId, and pass it in when calling methods such as rate and complain. Call timing: Call this method after joining a channel.
| callId | Output parameter, the current call ID. |
|
abstract |
Allows a user to rate a call after the call ends.
| callId | The current call ID. You can get the call ID by calling getCallId. |
| rating | The value is between 1 (the lowest score) and 5 (the highest score). |
| description | (Optional) A description of the call. The string length should be less than 800 bytes. |
|
abstract |
Allows a user to complain about the call quality after a call ends.
This method allows users to complain about the quality of the call. Call this method after the user leaves the channel.
| callId | The current call ID. You can get the call ID by calling getCallId. |
| description | (Optional) A description of the call. The string length should be less than 800 bytes. |
RtcEngine is initialized.
|
static |
Gets the SDK version.
|
static |
Returns the media engine version.
|
abstract |
Sets the log file.
Specifies an SDK output log file. The log file records all log data for the SDK’s operation. Call timing: This method needs to be called immediately after create(RtcEngineConfig config), otherwise the output log may be incomplete.
| filePath | The complete path of the log files. These log files are encoded in UTF-8. |
|
abstract |
Sets the log output level of the SDK.
This method sets the output log level of the SDK. You can use one or a combination of the log filter levels. The log level follows the sequence of LOG_FILTER_OFF, LOG_FILTER_CRITICAL, LOG_FILTER_ERROR, LOG_FILTER_WARN, LOG_FILTER_INFO, and LOG_FILTER_DEBUG. Choose a level to see the logs preceding that level. If, for example, you set the log level to LOG_FILTER_WARN, you see the logs within levels LOG_FILTER_CRITICAL, LOG_FILTER_ERROR and LOG_FILTER_WARN.
| filter | The output log level of the SDK.
|
|
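The filter levels are cumulative: choosing a level shows that level plus everything more severe. A small sketch of this ordering, using illustrative names rather than the real LOG_FILTER_* constants (which are SDK-defined values, not the list indices used here):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of cumulative log filtering. The severity order mirrors the
// documentation; the index values are illustrative, not SDK constants.
class LogFilterDemo {
    static final List<String> ORDER = Arrays.asList(
            "OFF", "CRITICAL", "ERROR", "WARN", "INFO", "DEBUG");

    // Levels visible when the filter is set to `filter`: every level at
    // or above that severity (OFF shows nothing).
    static List<String> visible(String filter) {
        int idx = ORDER.indexOf(filter);
        return ORDER.subList(1, idx + 1);
    }
}
```

For example, setting the filter to WARN makes CRITICAL, ERROR, and WARN logs visible, matching the behavior described above.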
abstract |
Sets the output log level of the SDK.
Choose a level to see the logs preceding that level.
| level | The log level. See LogLevel. |
|
abstract |
Sets the log file size.
By default, the SDK generates five SDK log files and five API call log files with the following rules:
The SDK log files are: agorasdk.log, agorasdk.1.log, agorasdk.2.log, agorasdk.3.log, and agorasdk.4.log. The API call log files are: agoraapi.log, agoraapi.1.log, agoraapi.2.log, agoraapi.3.log, and agoraapi.4.log. The SDK writes the latest logs to agorasdk.log or agoraapi.log. When agorasdk.log is full, the SDK processes the log files in the following order: 1. Delete the agorasdk.4.log file (if any). 2. Rename agorasdk.3.log to agorasdk.4.log. 3. Rename agorasdk.2.log to agorasdk.3.log. 4. Rename agorasdk.1.log to agorasdk.2.log. 5. Create a new agorasdk.log file. The overwrite rules for the agoraapi.log file are the same as for agorasdk.log. This method sets the size of the agorasdk.log file only and does not affect the agoraapi.log file.
| fileSizeInKBytes | The size (KB) of an agorasdk.log file. The value range is [128,20480]. The default value is 2,048 KB. If you set fileSizeInKBytes smaller than 128 KB, the SDK automatically adjusts it to 128 KB; if you set fileSizeInKBytes greater than 20,480 KB, the SDK automatically adjusts it to 20,480 KB. |
|
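The rotation rules above can be sketched as a small pure-Java simulation (a hypothetical illustration, not SDK code): when the active file fills up, the oldest numbered file is deleted, each remaining numbered file shifts up by one, and a fresh active file is created.

```java
import java.util.ArrayList;
import java.util.List;

// Simulates the documented rotation for agorasdk.log (the same rules
// apply to agoraapi.log). Returns the operations in order.
class LogRotationDemo {
    static List<String> rotate(String base, int maxFiles) {
        List<String> ops = new ArrayList<>();
        // 1. Delete the oldest numbered file, e.g. agorasdk.4.log.
        ops.add("delete " + base + "." + (maxFiles - 1) + ".log");
        // 2-4. Shift each numbered file up by one.
        for (int i = maxFiles - 2; i >= 1; i--) {
            ops.add("rename " + base + "." + i + ".log -> "
                    + base + "." + (i + 1) + ".log");
        }
        // 5. Start a fresh active log file.
        ops.add("create " + base + ".log");
        return ops;
    }
}
```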
abstract |
Uploads the current log file to the server immediately. Use this method only when an error occurs; the call blocks until the log file upload succeeds or times out.
|
abstract |
Writes the log to the SDK. @technical preview
You can use one of the levels defined in LogLevel.
| level | Sets the log level. See LogLevel. |
|
abstract |
Gets the C++ handle of the Native SDK.
This method retrieves the C++ handle of the SDK, which is used for registering the audio and video frame observer.
| void io.agora.rtc2.RtcEngine.addHandler | ( | IRtcEngineEventHandler | handler | ) |
Adds event handlers.
The SDK uses the IRtcEngineEventHandler class to send callbacks to the app. The app inherits from this class to receive these callbacks. All methods in this class have default (empty) implementations, so apps only need to override the callbacks relevant to their scenarios. In the callbacks, avoid time-consuming tasks or calling APIs that can block the thread, such as the sendStreamMessage method. Otherwise, the SDK may not work properly.
| handler | Callback events to be added. See IRtcEngineEventHandler. |
| void io.agora.rtc2.RtcEngine.removeHandler | ( | IRtcEngineEventHandler | handler | ) |
Removes the specified IRtcEngineEventHandler instance.
This method removes the specified callback handler. For callback events that you want to listen for only once, call this method to remove the relevant callback handler after you have received them.
| handler | The callback handler to be deleted. See IRtcEngineEventHandler. |
|
abstract |
Enables the Wi-Fi mode.
| enable | Whether to enable/disable the Wi-Fi mode:
|
|
abstract |
Returns the native handler of the media player.
|
static |
Gets the warning or error description.
| error | The error code reported by the SDK. |
|
abstract |
Queries the HDR capability of the video module.
@technical preview
|
abstract |
Queries the highest frame rate supported by the device during screen sharing.
Applicable scenarios: To ensure optimal screen sharing performance, particularly when enabling high frame rates such as 60 fps, Agora recommends that you query the device's maximum supported frame rate using this method beforehand. If the device does not support the desired frame rate, reduce the frame rate of the screen sharing stream appropriately so that the sharing quality meets your expectations.
|
abstract |
Monitors external headset device events.
| monitor | Whether to enable monitoring external headset device events. True/False. |
|
abstract |
Monitors Bluetooth headset device events.
| monitor | Whether to enable monitoring Bluetooth headset device events. True/False. |
|
abstract |
Sets the default audio route to the headset.
| enabled | Sets whether or not the default audio route is to the headset:
|
|
abstract |
Provides technical preview functionalities or special customizations by configuring the SDK with JSON options.
| parameters | The parameters to set, in a JSON string. |
|
abstract |
Queries internal states.
| parameters | The parameters to query, in a JSON string. |
|
abstract |
Gets the Agora SDK’s parameters for customization purposes. This method is not disclosed yet. Contact support@agora.io for more information.
|
abstract |
Registers the metadata observer.
You need to implement the IMetadataObserver class and specify the metadata type in this method. This method enables you to add synchronized metadata in the video stream for more diversified live interactive streaming, such as sending shopping links, digital coupons, and online quizzes. A successful call of this method triggers the getMaxMetadataSize callback.
joinChannel(String token, String channelId, int uid, ChannelMediaOptions options).| observer | The metadata observer. See IMetadataObserver. |
| type | The metadata type. The SDK currently only supports VIDEO_METADATA. |
|
abstract |
Unregisters the specified metadata observer.
| observer | The metadata observer. See IMetadataObserver. |
| type | The metadata type. The SDK currently only supports VIDEO_METADATA. |
|
abstract |
Starts relaying media streams across channels or updates channels for media relay.
The first successful call to this method starts relaying media streams from the source channel to the destination channels. To relay the media stream to other channels, or exit one of the current media relays, you can call this method again to update the destination channels. This feature supports relaying media streams to a maximum of six destination channels. After a successful method call, the SDK triggers the onChannelMediaRelayStateChanged callback, and this callback returns the state of the media stream relay. Common states are as follows:
When the onChannelMediaRelayStateChanged callback returns RELAY_STATE_RUNNING (2) and RELAY_OK (0), the SDK has started relaying media streams from the source channel to the destination channel. When the onChannelMediaRelayStateChanged callback returns RELAY_STATE_FAILURE (3), an exception has occurred during the media stream relay; in this case, contact technical support.
| channelMediaRelayConfiguration | The configuration of the media stream relay. See ChannelMediaRelayConfiguration. |
|
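Because the relay supports at most six destination channels, it can be worth validating the destination count on the app side before building the ChannelMediaRelayConfiguration. A hypothetical pre-check (the limit of six is taken from the description above):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical pre-check before startOrUpdateChannelMediaRelay:
// the relay supports at most six destination channels.
class RelayCheck {
    static final int MAX_DEST_CHANNELS = 6;

    static boolean canRelayTo(List<String> destinationChannelNames) {
        return !destinationChannelNames.isEmpty()
                && destinationChannelNames.size() <= MAX_DEST_CHANNELS;
    }
}
```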
abstract |
Stops the media stream relay. Once the relay stops, the host quits all the target channels.
After a successful method call, the SDK triggers the onChannelMediaRelayStateChanged callback. If the callback reports RELAY_STATE_IDLE (0) and RELAY_OK (0), the host successfully stops the relay.
If the method call fails, the SDK triggers the onChannelMediaRelayStateChanged callback with the RELAY_ERROR_SERVER_NO_RESPONSE (2) or RELAY_ERROR_SERVER_CONNECTION_LOST (8) status code. You can call the leaveChannel(LeaveChannelOptions options) method to leave the channel, and the media stream relay automatically stops.
|
abstract |
Pauses the media stream relay to all target channels.
After the cross-channel media stream relay starts, you can call this method to pause relaying media streams to all target channels; after the pause, if you want to resume the relay, call resumeAllChannelMediaRelay.
Note: Call this method after startOrUpdateChannelMediaRelay.
|
abstract |
Resumes the media stream relay to all target channels.
After calling the pauseAllChannelMediaRelay method, you can call this method to resume relaying media streams to all destination channels.
Note: Call this method after pauseAllChannelMediaRelay.
|
abstract |
Updates the channel media options after joining the channel.
| options | The channel media options. See ChannelMediaOptions. |
-2: The parameter in ChannelMediaOptions is invalid. For example, the token or the user ID is invalid. You need to fill in a valid parameter. -7: The RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method. -8: The internal state of the RtcEngine object is wrong. The possible reason is that the user is not in the channel. Agora recommends that you use the onConnectionStateChanged callback to see whether the user is in the channel. If you receive the CONNECTION_STATE_DISCONNECTED (1) or CONNECTION_STATE_FAILED (5) state, the user is not in the channel. You need to call joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join a channel before calling this method.
|
abstract |
Mutes or unmutes the recording signal.
If you have already called adjustRecordingSignalVolume to adjust the recording signal volume, when you call this method and set it to true, the SDK behaves as follows: 1. Records the adjusted volume. 2. Mutes the recording signal. When you call this method again and set it to false, the recording signal volume is restored to the volume recorded by the SDK before muting. Call timing: This method can be called either before or after joining the channel.
| muted | Whether to mute the recording signal: true: Mute the recording signal. false: (Default) Do not mute the recording signal. |
|
|
abstract |
Sets the format of the raw audio playback data before mixing.
The SDK triggers the onPlaybackAudioFrameBeforeMixing callback according to the sampling interval. Call timing: Call this method before joining a channel.
| sampleRate | The sample rate returned in the callback, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz. |
| channel | The number of audio channels. You can set the value as 1 or 2.
|
|
abstract |
Sets the format of audio data in the onPlaybackAudioFrameBeforeMixing callback.
Used to set the sample rate, number of channels, and number of samples per callback for the audio data returned in the onPlaybackAudioFrameBeforeMixing callback.
| sampleRate | Set the sample rate returned in the onPlaybackAudioFrameBeforeMixing callback. It can be set as the following values:
|
| channel | Set the number of channels for the audio data returned in the onPlaybackAudioFrameBeforeMixing callback. It can be set to:
|
| samplesPerCall | Sets the number of samples per callback for the audio data returned in the onPlaybackAudioFrameBeforeMixing callback. In the RTMP streaming scenario, it is recommended to set it to 1024. |
|
abstract |
Turns on audio spectrum monitoring.
If you want to obtain the audio spectrum data of local or remote users, you can register the audio spectrum observer and enable audio spectrum monitoring.
| intervalInMS | The interval (in milliseconds) at which the SDK triggers the onLocalAudioSpectrum and onRemoteAudioSpectrum callbacks. The default value is 100. Do not set this parameter to a value less than 10, otherwise calling this method would fail. |
|
abstract |
Disables audio spectrum monitoring.
After calling enableAudioSpectrumMonitor, if you want to disable audio spectrum monitoring, you can call this method.
|
abstract |
Registers an audio spectrum observer.
After successfully registering the audio spectrum observer and calling enableAudioSpectrumMonitor to enable the audio spectrum monitoring, the SDK reports the callback that you implement in the IAudioSpectrumObserver class according to the time interval you set.
| observer | The audio spectrum observer. See IAudioSpectrumObserver. |
|
abstract |
Unregisters the audio spectrum observer.
After calling registerAudioSpectrumObserver, if you want to disable audio spectrum monitoring, you can call this method.
| observer | The audio spectrum observer. See IAudioSpectrumObserver. |
|
abstract |
Retrieves the volume of the audio effects.
The volume is an integer ranging from 0 to 100. The default value is 100, which means the original volume.
Call timing: Call this method after playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos).
|
abstract |
Sets the volume of the audio effects.
Call timing: Call this method after playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos).
| volume | The playback volume. The value range is [0, 100]. The default value is 100, which represents the original volume. |
|
abstract |
Preloads a specified audio effect file into the memory.
Ensure the size of all preloaded files does not exceed the limit. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. Call timing: Agora recommends that you call this method before joining a channel.
| soundId | The audio effect ID. The ID of each audio effect file is unique. |
| filePath | File path:
|
|
abstract |
Preloads a specified audio effect.
This method preloads only one specified audio effect into the memory each time it is called. To preload multiple audio effects, call this method multiple times.
After preloading, you can call playEffect to play the preloaded audio effect or call playAllEffects to play all the preloaded audio effects.
| soundId | The ID of the audio effect. |
| filePath | The absolute path of the local audio effect file or the URL of the online audio effect file. Supported audio formats: mp3, mp4, m4a, aac, 3gp, mkv, and wav. |
| startPos | The start position of the audio effect file. |
|
abstract |
Plays the specified local or online audio effect file.
This method supports playing URI files starting with content://. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. If the local music file does not exist, the SDK does not support the file format, or the SDK cannot access the music file URL, the SDK reports AUDIO_MIXING_REASON_CAN_NOT_OPEN. To play multiple audio effect files at the same time, call this method multiple times with different soundId and filePath. To achieve the optimal user experience, Agora recommends that you do not play more than three audio files at the same time. Call timing: You can call this method either before or after joining a channel. Related callbacks: After the playback of an audio effect file completes, the SDK triggers the onAudioEffectFinished callback.
preloadEffect to preload the file into memory, and then call this method to play the audio effect. Otherwise, you might encounter playback failures or no sound during playback due to loading timeouts or failures.| soundId | The audio effect ID. The ID of each audio effect file is unique.Attention: If you have preloaded an audio effect into memory by calling preloadEffect, ensure that the value of this parameter is the same as that of soundId in preloadEffect. |
| filePath | The file path. The SDK supports URI addresses starting with content://, paths starting with /assets/, URLs and absolute paths of local files. The absolute path needs to be accurate to the file name and extension. Supported audio formats include MP3, AAC, M4A, MP4, WAV, and 3GP. See Supported Audio Formats. Attention: If you have preloaded an audio effect into memory by calling preloadEffect, ensure that the value of this parameter is the same as that of filePath in preloadEffect. |
| loopCount | The number of times the audio effect loops.
|
| pitch | The pitch of the audio effect. The value range is 0.5 to 2.0. The default value is 1.0, which means the original pitch. The lower the value, the lower the pitch. |
| pan | The spatial position of the audio effect. The value ranges between -1.0 and 1.0:
|
| gain | The volume of the audio effect. The value range is 0.0 to 100.0. The default value is 100.0, which means the original volume. The smaller the value, the lower the volume. |
| publish | Whether to publish the audio effect to the remote users.
|
|
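The preload-then-play flow described above can be sketched as follows. This is a minimal, hedged example: `engine` is assumed to be an initialized RtcEngine instance, and the asset path is a placeholder.

```java
import io.agora.rtc2.RtcEngine;

class EffectDemo {
    // Sketch: preload a local audio effect, then play it once at the original
    // pitch and volume while also publishing it to remote users.
    static void playDing(RtcEngine engine) {
        int soundId = 1;                       // unique ID for this effect
        String filePath = "/assets/ding.mp3";  // hypothetical bundled asset

        engine.preloadEffect(soundId, filePath);  // load into memory first
        engine.playEffect(
                soundId,   // must match the soundId passed to preloadEffect
                filePath,  // must match the filePath passed to preloadEffect
                0,         // loopCount: play once, no looping
                1.0,       // pitch: original
                0.0,       // pan: centered
                100.0,     // gain: original volume
                true);     // publish: remote users also hear the effect
    }
}
```

When the effect finishes, the SDK reports it through the onAudioEffectFinished callback, as noted above.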
abstract |
Plays the specified local or online audio effect file.
This method supports playing URI files starting with content://. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. If the local music file does not exist, the SDK does not support the file format, or the SDK cannot access the music file URL, the SDK reports AUDIO_MIXING_REASON_CAN_NOT_OPEN. To play multiple audio effect files at the same time, call this method multiple times with different soundId and filePath. To achieve the optimal user experience, Agora recommends that you not play more than three audio files at the same time. Call timing: You can call this method either before or after joining a channel. Related callbacks: After the playback of an audio effect file completes, the SDK triggers the onAudioEffectFinished callback.
preloadEffect to preload the file into memory, and then call this method to play the audio effect. Otherwise, you might encounter playback failures or no sound during playback due to loading timeouts or failures.| soundId | The audio effect ID. The ID of each audio effect file is unique.Attention: If you have preloaded an audio effect into memory by calling preloadEffect, ensure that the value of this parameter is the same as that of soundId in preloadEffect. |
| filePath | The file path. The SDK supports URI addresses starting with content://, paths starting with /assets/, URLs and absolute paths of local files. The absolute path needs to be accurate to the file name and extension. Supported audio formats include MP3, AAC, M4A, MP4, WAV, and 3GP. See Supported Audio Formats. Attention: If you have preloaded an audio effect into memory by calling preloadEffect, ensure that the value of this parameter is the same as that of filePath in preloadEffect. |
| loopCount | The number of times the audio effect loops.
|
| pitch | The pitch of the audio effect. The value range is 0.5 to 2.0. The default value is 1.0, which means the original pitch. The lower the value, the lower the pitch. |
| pan | The spatial position of the audio effect. The value ranges between -1.0 and 1.0:
|
| gain | The volume of the audio effect. The value range is 0.0 to 100.0. The default value is 100.0, which means the original volume. The smaller the value, the lower the volume. |
| publish | Whether to publish the audio effect to the remote users:
|
| startPos | The playback position (ms) of the audio effect file. |
|
abstract |
Plays all audio effects.
After calling preloadEffect multiple times to preload multiple audio effects into the memory, you can call this method to play all the specified audio effects for all users in the channel.
| loopCount | The number of times the audio effect loops:
|
| pitch | The pitch of the audio effect. The value ranges between 0.5 and 2.0. The default value is 1.0 (original pitch). The lower the value, the lower the pitch. |
| pan | The spatial position of the audio effect. The value ranges between -1.0 and 1.0:
|
| gain | The volume of the audio effect. The value range is [0, 100]. The default value is 100 (original volume). The lower the value, the lower the volume of the audio effect. |
| publish | Whether to publish the specified audio effect to the remote users:
|
|
abstract |
Gets the volume of a specified audio effect file.
| soundId | The ID of the audio effect file. |
|
abstract |
Gets the volume of a specified audio effect file.
Call timing: Call this method after playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos).
| soundId | The ID of the audio effect. The unique ID of each audio effect file. |
| volume | The playback volume. The value range is [0, 100]. The default value is 100, which represents the original volume. |
|
abstract |
Pauses a specified audio effect file.
| soundId | The audio effect ID. The ID of each audio effect file is unique. |
|
abstract |
Pauses all audio effects.
|
abstract |
Resumes playing a specified audio effect.
| soundId | The audio effect ID. The ID of each audio effect file is unique. |
|
abstract |
Resumes playing all audio effect files.
After you call pauseAllEffects to pause the playback, you can call this method to resume the playback. Call timing: Call this method after pauseAllEffects.
|
abstract |
Stops playing a specified audio effect.
When you no longer need to play the audio effect, you can call this method to stop the playback. If you only need to pause the playback, call pauseEffect. Call timing: Call this method after calling playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish) or playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos).
| soundId | The ID of the audio effect. Each audio effect has a unique ID. |
|
abstract |
Stops playing all audio effects.
When you no longer need to play the audio effects, you can call this method to stop the playback. If you only need to pause the playback, call pauseAllEffects. Call timing: Call this method after calling playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish) or playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos).
|
abstract |
Releases a specified preloaded audio effect from the memory.
After loading the audio effect file into memory using preloadEffect, if you need to release the audio effect file, call this method. Call timing: You can call this method either before or after joining a channel.
| soundId | The ID of the audio effect. Each audio effect has a unique ID. |
|
abstract |
Releases all preloaded audio effects from the memory.
|
abstract |
Retrieves the duration of the audio effect file.
| filePath | File path:
|
|
abstract |
Sets the playback position of an audio effect file.
After a successful setting, the local audio effect file starts playing at the specified position.
playEffect.| soundId | The audio effect ID. The ID of each audio effect file is unique. |
| pos | The playback position (ms) of the audio effect file. |
|
abstract |
Retrieves the playback position of the audio effect file.
playEffect(int soundId, String filePath, int loopCount, double pitch, double pan, double gain, boolean publish, int startPos).| soundId | The audio effect ID. The ID of each audio effect file is unique. |
|
abstract |
Registers a receiver object for the encoded video image.
If you only want to observe encoded video frames (such as H.264 format) without decoding and rendering the video, Agora recommends that you implement one IVideoEncodedFrameObserver class through this method.
| receiver | The video frame observer object. See IVideoEncodedFrameObserver. |
|
abstract |
Registers or unregisters a facial information observer.
You can call this method to register the onFaceInfo callback to receive the facial information processed by the Agora speech driven extension. When calling this method to register a facial information observer, you can register callbacks in the IFaceInfoObserver class as needed. After successfully registering the facial information observer, the SDK triggers the callback you have registered when it captures the facial information converted by the speech driven extension. Applicable scenarios: Facial information processed by the Agora speech driven extension is BS (Blend Shape) data that complies with ARKit standards. You can further process the BS data using third-party 3D rendering engines, such as driving an avatar to make mouth movements corresponding to speech.
enableExtension.| receiver | Facial information observer, see IFaceInfoObserver. If you need to unregister a facial information observer, pass in NULL. |
|
abstract |
Takes a snapshot of a video stream.
This method takes a snapshot of a video stream from the specified user, generates a JPG image, and saves it to the specified path. Call timing: Call this method after joining a channel. Related callbacks: After a successful call of this method, the SDK triggers the onSnapshotTaken callback to report whether the snapshot is successfully taken, as well as the details for that snapshot.
ChannelMediaOptions.| uid | The user ID. Set uid as 0 if you want to take a snapshot of the local user's video. |
| filePath | The local path (including filename extensions) of the snapshot. For example:
|
|
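A minimal sketch of the snapshot call described above, assuming `engine` is an RtcEngine that has already joined a channel; the storage path is illustrative.

```java
import io.agora.rtc2.RtcEngine;
import android.content.Context;
import android.util.Log;

class SnapshotDemo {
    // Sketch: snapshot the local user's video to app-specific storage.
    // The JPG is written asynchronously; the result is reported via the
    // onSnapshotTaken callback.
    static void snapshotLocal(RtcEngine engine, Context ctx) {
        String path = ctx.getExternalFilesDir(null) + "/snapshot.jpg";
        int ret = engine.takeSnapshot(0 /* uid 0 = local user */, path);
        if (ret != 0) {
            Log.w("SnapshotDemo", "takeSnapshot failed: " + ret);
        }
    }
}
```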
abstract |
Takes a screenshot of the video at the specified observation point.
This method takes a snapshot of a video stream from the specified user, generates a JPG image, and saves it to the specified path. Call timing: Call this method after joining a channel. Related callbacks: After a successful call of this method, the SDK triggers the onSnapshotTaken callback to report whether the snapshot is successfully taken, as well as the details for that snapshot.
ChannelMediaOptions.| uid | The user ID. Set uid as 0 if you want to take a snapshot of the local user's video. |
| config | The configuration of the snapshot. See SnapshotConfig. |
|
abstract |
Enables or disables video screenshot and upload.
When video screenshot and upload function is enabled, the SDK takes screenshots and uploads videos sent by local users based on the type and frequency of the module you set in ContentInspectConfig. After video screenshot and upload, the Agora server sends the callback notification to your app server in HTTPS requests and sends all screenshots to the third-party cloud storage service. Call timing: This method can be called either before or after joining the channel.
CONTENT_INSPECT_TYPE_SUPERVISE ), the video screenshot and upload dynamic library agora_content_inspect_extension.so is required. Deleting this library disables the screenshot and upload feature.| enabled | Whether to enable video screenshot and upload:
|
| config | Screenshot and upload configuration. See ContentInspectConfig. |
|
abstract |
Registers an extension.
For extensions external to the SDK (such as those from Extensions Marketplace and SDK Extensions), you need to load them before calling this method. Extensions internal to the SDK (those included in the full SDK package) are automatically loaded and registered after the initialization of RtcEngine. Call timing: - Agora recommends you call this method after the initialization of RtcEngine and before joining a channel.
enableVideo or enableLocalVideo.addExtension to load the extension first.| provider | The name of the extension provider. |
| extension | The name of the extension. |
| sourceType | Source type of the extension. See MediaSourceType. |
|
abstract |
Enable/Disable an extension. By calling this function, you can dynamically enable/disable the extension without changing the pipeline. For example, enabling/disabling Extension_A means the data will be adapted/bypassed by Extension_A.
NOTE: For compatibility reasons, if you haven't called registerExtension, enableExtension automatically registers the specified extension. We suggest you call registerExtension explicitly.
| provider | The name of the extension provider, e.g. agora.io. |
| extension | The name of the extension, e.g. agora.beauty. |
| enable | Whether to enable the extension:
|
|
abstract |
Enables or disables extensions.
Call timing: Agora recommends that you call this method after joining a channel. Related callbacks: When this method is successfully called within the channel, it triggers onStartedWithContext or onStoppedWithContext.
| provider | The name of the extension provider. |
| extension | The name of the extension. |
| enable | Whether to enable the extension:
|
| sourceType | Source type of the extension. See MediaSourceType. |
|
abstract |
Sets the properties of the extension.
After enabling the extension, you can call this method to set the properties of the extension. Call timing: Call this method after calling enableExtension. Related callbacks: After calling this method, it may trigger the onEventWithContext callback; the specific triggering logic is related to the extension itself.
| provider | The name of the extension provider. |
| extension | The name of the extension. |
| key | The key of the extension. |
| value | The value of the extension key. |
|
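The register-enable-configure flow for extensions can be sketched as follows. This is a hedged example: the provider and extension names reuse the placeholders from this reference (agora.io, agora.beauty), and the property key/value are hypothetical, since keys are defined by each extension.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.Constants;

class ExtensionDemo {
    // Sketch: register an extension, enable it, then set a property.
    // Call registerExtension after RtcEngine initialization and before
    // joining a channel, as recommended above.
    static void setUpExtension(RtcEngine engine) {
        engine.registerExtension("agora.io", "agora.beauty",
                Constants.MediaSourceType.PRIMARY_CAMERA_SOURCE);
        engine.enableExtension("agora.io", "agora.beauty", true);
        // Configure after enabling; the key and JSON value here are
        // placeholders defined by the extension itself.
        engine.setExtensionProperty("agora.io", "agora.beauty",
                "smoothness", "{\"level\":0.8}");
    }
}
```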
abstract |
Sets the properties of an extension.
| provider | The name of the extension provider, e.g. agora.io. |
| extension | The name of the extension, e.g. agora.beauty. |
| key | The key of the extension. |
| value | The JSON formatted value of the extension key. |
| sourceType | The source type of the extension, e.g. PRIMARY_CAMERA_SOURCE. See MediaSourceType. Returns true if the operation succeeds; otherwise false. |
|
abstract |
Gets detailed information on the extensions.
Call timing: This method can be called either before or after joining the channel.
| provider | The name of the extension provider. |
| extension | The name of the extension. |
| key | The key of the extension. |
|
abstract |
Gets detailed information on the extensions.
Call timing: This method can be called either before or after joining the channel.
| provider | The name of the extension provider. |
| extension | The name of the extension. |
| key | The key of the extension. |
| sourceType | Source type of the extension. See MediaSourceType. |
|
abstract |
Sets the properties of the extension provider.
You can call this method to set the attributes of the extension provider and initialize the relevant parameters according to the type of the provider. Call timing: Call this method before enableExtension and after registerExtension.
| provider | The name of the extension provider. |
| key | The key of the extension. |
| value | The value of the extension key. |
|
abstract |
Enable/Disable an extension. By calling this function, you can dynamically enable/disable the extension without changing the pipeline. For example, enabling/disabling Extension_A means the data will be adapted/bypassed by Extension_A.
NOTE: For compatibility reasons, if you haven't called registerExtension, enableExtension automatically registers the specified extension. We suggest you call registerExtension explicitly.
| provider | The name of the extension provider, e.g. agora.io. |
| extension | The name of the extension, e.g. agora.beauty. |
| extensionInfo | The information for extension. See ExtensionInfo. |
| enable | Whether to enable the extension:
|
|
abstract |
Sets the properties of an extension.
| provider | The name of the extension provider, e.g. agora.io. |
| extension | The name of the extension, e.g. agora.beauty. |
| extensionInfo | The information for extension. See ExtensionInfo. |
| key | The key of the extension. |
| value | The JSON formatted value of the extension key. |
|
abstract |
Gets the properties of an extension.
| provider | The name of the extension provider, e.g. agora.io. |
| extension | The name of the extension, e.g. agora.beauty. |
| extensionInfo | The information for extension. See ExtensionInfo. |
| key | The key of the extension. |
|
abstract |
Starts screen capture.
Applicable scenarios: In the screen sharing scenario, you need to call this method to start capturing the screen video stream. Call timing: You can call this method either before or after joining the channel, with the following differences:
- If you call this method before joining a channel: call joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel and set publishScreenCaptureVideo to true to start screen sharing.
- If you call this method after joining a channel: call updateChannelMediaOptions and set publishScreenCaptureVideo to true to start screen sharing.
- If the user denies the screen capture permission, the SDK triggers the onPermissionError (2) callback.
- Add android.permission.FOREGROUND_SERVICE to the /app/Manifests/AndroidManifest.xml file.
- Agora recommends setting the audio scenario to AUDIO_SCENARIO_GAME_STREAMING by using the setAudioScenario method before joining the channel.
- Set the resolution of the screen sharing stream through dimensions in VideoCaptureParameters.
| screenCaptureParameters | The screen sharing encoding parameters. See ScreenCaptureParameters. |
|
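A hedged sketch of the "capture first, then join and publish" path described above. It assumes an initialized RtcEngine named `engine`; the ScreenCaptureParameters field names (captureVideo, captureAudio) and the token/channel values are assumptions for illustration.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.ScreenCaptureParameters;

class ScreenShareDemo {
    // Sketch: start screen capture before joining, then publish the screen
    // track via ChannelMediaOptions when joining the channel.
    static void shareScreen(RtcEngine engine, String token) {
        ScreenCaptureParameters params = new ScreenCaptureParameters();
        params.captureVideo = true;   // capture the screen video stream
        params.captureAudio = true;   // also capture device playback audio
        engine.startScreenCapture(params);

        ChannelMediaOptions options = new ChannelMediaOptions();
        options.publishScreenCaptureVideo = true;  // publish the screen track
        options.publishCameraTrack = false;        // do not publish the camera
        engine.joinChannel(token, "demo_channel", 0, options);
    }
}
```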
abstract |
Configures MediaProjection outside of the SDK to capture screen video streams.
@technical preview
After successfully calling this method, the external MediaProjection you set will replace the MediaProjection requested by the SDK to capture the screen video stream. When the screen sharing is stopped or RtcEngine is destroyed, the SDK will automatically release the MediaProjection. Applicable scenarios: If you are able to apply for MediaProjection, you can directly use your MediaProjection instead of the one applied for by the SDK. The following lists two applicable scenarios:
startScreenCapture.MediaProjection permission.| mediaProjection | A MediaProjection object used to capture screen video streams. |
|
abstract |
Sets the screen sharing scenario.
When you start screen sharing or window sharing, you can call this method to set the screen sharing scenario. The SDK adjusts the video quality and experience of the sharing according to the scenario.
| screenScenario | The screen sharing scenario. See ScreenScenarioType. |
|
abstract |
Stops screen capture.
Applicable scenarios: If you start screen capture by calling startScreenCapture, you need to call this method to stop screen capture. Call timing: You can call this method either before or after joining a channel.
|
abstract |
Sets video application scenarios.
After successfully calling this method, the SDK will automatically enable the best practice strategies and adjust key performance metrics based on the specified scenario, to optimize the video experience.
| scenarioType | The type of video application scenario. See VideoScenario. APPLICATION_SCENARIO_MEETING (1) is suitable for meeting scenarios. The SDK automatically enables the following strategies:
|
RtcEngine object has not been initialized. You need to initialize the RtcEngine object before calling this method.
|
abstract |
Sets the video QoE preference.
| qoePreference | The QoE preference type. |
|
abstract |
Updates the screen capturing parameters.
| screenCaptureParameters | The screen sharing encoding parameters. See ScreenCaptureParameters.Attention: The video properties of the screen sharing stream only need to be set through this parameter, and are unrelated to setVideoEncoderConfiguration. |
stopScreenCapture to stop the current sharing and start sharing the screen again.
|
abstract |
Registers a raw video frame observer object.
If you want to observe raw video frames (such as YUV or RGBA format), Agora recommends that you implement one IVideoFrameObserver class with this method. When calling this method to register a video observer, you can register callbacks in the IVideoFrameObserver class as needed. After you successfully register the video frame observer, the SDK triggers the registered callbacks each time a video frame is received. Applicable scenarios: After registering the raw video observer, you can use the obtained raw video data in various video pre-processing scenarios, such as virtual backgrounds and image enhancement. Agora provides an open source sample project beautyapi on GitHub for your reference. Call timing: Call this method before joining a channel.
width and height parameters, which may be adapted under the following circumstances:| observer | The observer instance. See IVideoFrameObserver. To release the instance, set the value as NULL. |
|
abstract |
Creates a media player object.
Before calling any APIs in the IMediaPlayer class, you need to call this method to create an instance of the media player. If you need to create multiple instances, you can call this method multiple times. Call timing: You can call this method either before or after joining a channel.
IMediaPlayer object, if the method call succeeds.
|
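A minimal sketch of creating and using a media player instance, assuming an initialized RtcEngine; the media path is a placeholder and the observer is supplied by the caller.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.mediaplayer.IMediaPlayer;
import io.agora.mediaplayer.IMediaPlayerObserver;

class MediaPlayerDemo {
    // Sketch: create a media player, register an observer for player events,
    // and open a local media file from the beginning.
    static void openMedia(RtcEngine engine, IMediaPlayerObserver observer) {
        IMediaPlayer player = engine.createMediaPlayer();
        if (player != null) {
            player.registerPlayerObserver(observer); // listen for state changes
            player.open("/sdcard/Music/sample.mp4", 0 /* startPos in ms */);
        }
    }
}
```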
abstract |
Creates a recorder for audio and video.
Before starting to record audio and video streams, you need to call this method to create a recorder. The SDK supports recording multiple audio and video streams from local or remote users. You can call this method multiple times to create multiple recorders, and specify the channel name and the user ID of the stream to be recorded through the info parameter.
After successfully creating the recorder, you need to call setMediaRecorderObserver to register an observer for the recorder to listen for recording-related callbacks, and then call startRecording to start recording.
| info | Information about the audio and video stream to be recorded. See RecorderStreamInfo. |
AgoraMediaRecorder instance.
|
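The recorder setup described above can be sketched as follows. This is a hedged example: the channel name is a placeholder and the RecorderStreamInfo constructor shape is an assumption; consult RecorderStreamInfo for the actual fields.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.RecorderStreamInfo;
import io.agora.rtc2.AgoraMediaRecorder;

class RecorderDemo {
    // Sketch: create a recorder bound to one stream (channel + uid).
    static AgoraMediaRecorder setUpRecorder(RtcEngine engine, int uid) {
        RecorderStreamInfo info = new RecorderStreamInfo("demo_channel", uid);
        AgoraMediaRecorder recorder = engine.createMediaRecorder(info);
        // Next steps, per the text above: call setMediaRecorderObserver to
        // listen for recording callbacks, then startRecording to begin.
        return recorder;
    }
}
```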
abstract |
Destroys the audio and video recording object.
When you no longer need to record audio and video streams, you can call this method to destroy the corresponding audio and video recording object. If you are currently recording, call stopRecording to stop the recording before calling this method to destroy the audio and video recording object.
| mediaRecorder | The AgoraMediaRecorder object to be destroyed. |
Error Codes for details and troubleshooting suggestions.
|
abstract |
Gets one IMediaPlayerCacheManager instance.
Before calling any APIs in the IMediaPlayerCacheManager class, you need to call this method to get a cache manager instance of a media player. Call timing: Make sure the RtcEngine is initialized before you call this method.
IMediaPlayerCacheManager instance.
|
abstract |
Gets an H265Transcoder instance.
|
abstract |
Enable or disable the external audio source local playback.
| enabled | Determines whether to enable the external audio source local playback: |
|
abstract |
Adjusts the volume of the custom audio track played remotely.
If you want to change the volume of the audio played remotely, you need to call this method again.
createCustomAudioTrack method to create a custom audio track before calling this method.| trackId | The audio track ID. Set this parameter to the custom audio track ID returned in createCustomAudioTrack. |
| volume | The volume of the audio source. The value can range from 0 to 100. 0 means mute; 100 means the original volume. |
|
abstract |
Adjusts the volume of the custom audio track played locally.
If you want to change the volume of the audio to be played locally, you need to call this method again.
createCustomAudioTrack method to create a custom audio track before calling this method.| trackId | The audio track ID. Set this parameter to the custom audio track ID returned in createCustomAudioTrack. |
| volume | The volume of the audio source. The value can range from 0 to 100. 0 means mute; 100 means the original volume. |
|
abstract |
Enables the virtual metronome.
beatsPerMinute you set in AgoraRhythmPlayerConfig. For example, if you set beatsPerMinute as 60, the SDK plays one beat every second. If the file duration exceeds the beat duration, the SDK only plays the audio within the beat duration.publishRhythmPlayerTrack in ChannelMediaOptions as true. Applicable scenarios: In music education, physical education and other scenarios, teachers usually need to use a metronome so that students can practice with the correct beat. The meter is composed of a downbeat and upbeats. The first beat of each measure is called a downbeat, and the rest are called upbeats. Call timing: This method can be called either before or after joining the channel. Related callbacks: After successfully calling this method, the SDK triggers the onRhythmPlayerStateChanged callback locally to report the status of the virtual metronome.| sound1 | The absolute path or URL address (including the filename extensions) of the file for the downbeat. For example, content://com.android.providers.media.documents/document/audio%203A14441. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. |
| sound2 | The absolute path or URL address (including the filename extensions) of the file for the upbeats. For example, content://com.android.providers.media.documents/document/audio%203A14441. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. |
| config | The metronome configuration. See AgoraRhythmPlayerConfig. |
sound1 and sound2.
|
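A hedged sketch of the metronome configuration: 4/4 time at 60 beats per minute, so the SDK plays one beat every second as described above. The downbeat/upbeat file paths are placeholders, and the AgoraRhythmPlayerConfig import path is assumed.

```java
import io.agora.rtc2.*;

class MetronomeDemo {
    // Sketch: enable the virtual metronome with one downbeat and three
    // upbeats per measure, one beat per second.
    static void startMetronome(RtcEngine engine) {
        AgoraRhythmPlayerConfig config = new AgoraRhythmPlayerConfig();
        config.beatsPerMeasure = 4;  // downbeat + three upbeats per measure
        config.beatsPerMinute = 60;  // one beat per second
        engine.startRhythmPlayer(
                "/assets/downbeat.wav",  // sound1: the downbeat file
                "/assets/upbeat.wav",    // sound2: the upbeats file
                config);
    }
}
```

To let remote users hear the metronome, also set publishRhythmPlayerTrack in ChannelMediaOptions to true, as noted above.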
abstract |
Disables the virtual metronome.
After calling startRhythmPlayer, you can call this method to disable the virtual metronome.
|
abstract |
Configures the virtual metronome.
startRhythmPlayer, you can call this method to reconfigure the virtual metronome.beatsPerMinute you set in AgoraRhythmPlayerConfig. For example, if you set beatsPerMinute as 60, the SDK plays one beat every second. If the file duration exceeds the beat duration, the SDK only plays the audio within the beat duration.publishRhythmPlayerTrack in ChannelMediaOptions as true. Call timing: This method can be called either before or after joining the channel. Related callbacks: After successfully calling this method, the SDK triggers the onRhythmPlayerStateChanged callback locally to report the status of the virtual metronome.| config | The metronome configuration. See AgoraRhythmPlayerConfig. |
|
abstract |
Sets the audio profile of the audio streams directly pushed to the CDN by the host.
When you set the publishMicrophoneTrack or publishCustomAudioTrack in the DirectCdnStreamingMediaOptions as true to capture audios, you can call this method to set the audio profile.
| profile | The audio profile, including the sampling rate, bitrate, encoding mode, and the number of channels.
|
|
abstract |
Sets the video profile of the media streams directly pushed to the CDN by the host.
This method only affects video streams captured by cameras or screens, or from custom video capture sources. That is, when you set publishCameraTrack or publishCustomVideoTrack in DirectCdnStreamingMediaOptions as true to capture videos, you can call this method to set the video profiles. If your local camera does not support the video resolution you set, the SDK automatically adjusts the video resolution to a value that is closest to your settings for capture, encoding or streaming, with the same aspect ratio as the resolution you set. You can get the actual resolution of the video streams through the onDirectCdnStreamingStats callback.
| config | Video profile. See VideoEncoderConfiguration.Note: During CDN live streaming, Agora only supports setting ORIENTATION_MODE as ORIENTATION_MODE_FIXED_LANDSCAPE or ORIENTATION_MODE_FIXED_PORTRAIT. |
|
abstract |
Gets the current Monotonic Time of the SDK.
Monotonic Time refers to a monotonically increasing time series whose value increases over time. The unit is milliseconds. In custom video capture and custom audio capture scenarios, in order to ensure audio and video synchronization, Agora recommends that you call this method to obtain the current Monotonic Time of the SDK, and then pass this value into the timestamp parameter in the captured video frame ( VideoFrame ) and audio frame ( AudioFrame ). Call timing: This method can be called either before or after joining the channel.
|
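In the custom-capture scenario described above, the monotonic time is used to stamp frames. A hedged fragment follows; the method name getCurrentMonotonicTimeInMs follows the 4.x Java API and the frame handling is illustrative only.

```java
import io.agora.rtc2.RtcEngine;

class SyncDemo {
    // Sketch: obtain the SDK's monotonic time to stamp custom-captured
    // frames so that audio and video stay synchronized.
    static long frameTimestamp(RtcEngine engine) {
        long ts = engine.getCurrentMonotonicTimeInMs();
        // Pass `ts` into the timestamp parameter of the captured VideoFrame
        // and AudioFrame before pushing them to the SDK.
        return ts;
    }
}
```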
abstract |
Starts pushing media streams to the CDN directly.
Agora does not support pushing media streams to one URL repeatedly. Media options: Agora does not support setting the value of publishCameraTrack and publishCustomVideoTrack as true, or the value of publishMicrophoneTrack and publishCustomAudioTrack as true, at the same time. When choosing media setting options ( DirectCdnStreamingMediaOptions ), you can refer to the following examples: If you want to push audio and video streams captured by the host from a custom source, set the media options as follows:
- Set publishCustomAudioTrack as true and call the pushExternalAudioFrame method.
- Set publishCustomVideoTrack as true and call the pushExternalVideoFrameById(AgoraVideoFrame frame, int videoTrackId) method.
- Set publishCameraTrack as false (the default value).
- Set publishMicrophoneTrack as false (the default value).
As of v4.2.0, the Agora SDK supports audio-only live streaming. You can set publishCustomAudioTrack or publishMicrophoneTrack in DirectCdnStreamingMediaOptions as true and call pushExternalAudioFrame to push audio streams.| eventHandler | See onDirectCdnStreamingStateChanged and onDirectCdnStreamingStats. |
| publishUrl | The CDN live streaming URL. |
| options | The media setting options for the host. See DirectCdnStreamingMediaOptions. |
|
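The custom-source media options above can be sketched as follows. This is a hedged example: the CDN URL is a placeholder, and `handler` is a caller-supplied IDirectCdnStreamingEventHandler.

```java
import io.agora.rtc2.*;

class CdnPushDemo {
    // Sketch: push custom-captured audio and video directly to a CDN URL.
    // Frames are then fed via pushExternalAudioFrame and
    // pushExternalVideoFrameById, per the media options chosen here.
    static void startCdnPush(RtcEngine engine,
                             IDirectCdnStreamingEventHandler handler) {
        DirectCdnStreamingMediaOptions options =
                new DirectCdnStreamingMediaOptions();
        options.publishCustomAudioTrack = true;
        options.publishCustomVideoTrack = true;
        engine.startDirectCdnStreaming(
                handler, "rtmp://example.com/live/demo", options);
    }
}
```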
abstract |
Stops pushing media streams to the CDN directly.
|
abstract |
Changes the media source during direct CDN streaming.
| options | The direct cdn streaming media options: DirectCdnStreamingMediaOptions. |
|
abstract |
Creates a custom video track.
To publish a custom video source, follow these steps:
1. Call this method to create a video track and get the video track ID.
2. Call joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel. In ChannelMediaOptions, set customVideoTrackId to the video track ID that you want to publish, and set publishCustomVideoTrack to true.
3. Call pushExternalVideoFrameById(VideoFrame frame, int videoTrackId) and specify videoTrackId as the video track ID set in step 2. You can then publish the corresponding custom video source in the channel.
|
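The custom video track steps above can be sketched as follows, assuming an initialized RtcEngine; the token and channel name are placeholders.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.ChannelMediaOptions;

class CustomTrackDemo {
    // Sketch: create a custom video track, join with it, then push frames
    // for that track ID.
    static void publishCustomVideo(RtcEngine engine, String token) {
        int trackId = engine.createCustomVideoTrack();  // step 1

        ChannelMediaOptions options = new ChannelMediaOptions();
        options.customVideoTrackId = trackId;           // step 2
        options.publishCustomVideoTrack = true;
        engine.joinChannel(token, "demo_channel", 0, options);

        // Step 3, per captured frame:
        // engine.pushExternalVideoFrameById(frame, trackId);
    }
}
```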
abstract |
Gets a custom encoded video track ID created internally, which can be used to publish or preview.
|
abstract |
Destroys the specified video track.
| video_track_id | The video track ID returned by calling the createCustomVideoTrack method. |
|
abstract |
Destroys a custom encoded video track.
| video_track_id | The video track ID returned by calling createCustomEncodedVideoTrack. |
|
abstract |
Sets up cloud proxy service.
When users' network access is restricted by a firewall, configure the firewall to allow the specific IP addresses and ports provided by Agora; then, call this method to enable the cloud proxy and set the cloud proxy type with the proxyType parameter. After successfully connecting to the cloud proxy, the SDK triggers the onConnectionStateChanged ( CONNECTION_STATE_CONNECTING, CONNECTION_CHANGED_SETTING_PROXY_SERVER ) callback. To disable the cloud proxy that has been set, call setCloudProxy(TRANSPORT_TYPE_NONE_PROXY). To change the cloud proxy type that has been set, call setCloudProxy(TRANSPORT_TYPE_NONE_PROXY) first, and then call setCloudProxy to set the proxyType you want.
startAudioMixing(String filePath, boolean loopback, int cycle, int startPos) method to play online music files in the HTTP protocol. The services for Media Push and cohosting across channels use the cloud proxy with the TCP protocol.| proxyType | The type of the cloud proxy.
|
|
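Enabling and later disabling the cloud proxy, per the sequence described above, can be sketched as follows; the constant names are assumed to live in the Constants class of this SDK.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.Constants;

class ProxyDemo {
    // Sketch: switch the UDP cloud proxy on or off. To change proxy types,
    // disable first (TRANSPORT_TYPE_NONE_PROXY), then set the new type.
    static void toggleProxy(RtcEngine engine, boolean enable) {
        if (enable) {
            engine.setCloudProxy(Constants.TRANSPORT_TYPE_UDP_PROXY);
        } else {
            engine.setCloudProxy(Constants.TRANSPORT_TYPE_NONE_PROXY);
        }
    }
}
```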
abstract |
Configures the connection to the Agora private media server access module.
After successfully deploying the Agora private media server and integrating the 4.x RTC SDK into your internal network client, you can call this method to specify the Local Access Point and assign the access module to the SDK. Call timing: You must call this method before joining a channel.
contact sales to learn more about and deploy the Agora hybrid cloud.| config | Local Access Point configuration. See LocalAccessPointConfiguration for details. |
Error Codes for details and troubleshooting suggestions.
|
abstract |
Sets whether to enable the local playback of external audio source.
After calling this method to enable local playback of the external audio source, call this method again with enabled set to false if you need to stop local playback. You can call adjustCustomAudioPlayoutVolume to adjust the local playback volume of the custom audio track.
Note: Call the createCustomAudioTrack method to create a custom audio track before calling this method.
| trackId | The audio track ID. Set this parameter to the custom audio track ID returned by createCustomAudioTrack. |
| enabled | Whether to play the external audio source locally: true: Play the external audio source locally. false: Do not play the external audio source locally. |
|
|
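A sketch of the full flow: create the custom track, enable its local playback, and stop it later. The createCustomAudioTrack signature used here (track type plus AudioTrackConfig) is an assumption based on the related SDK classes; the volume value is illustrative.

```java
// Sketch only (imports omitted): assumes an initialized RtcEngine `engine`.
void playCustomAudioLocally(RtcEngine engine) {
    // 1. Create the custom audio track first; the returned ID identifies it.
    int trackId = engine.createCustomAudioTrack(
            Constants.AudioTrackType.AUDIO_TRACK_MIXABLE, new AudioTrackConfig());
    // 2. Enable local playback of that track.
    engine.enableCustomAudioLocalPlayback(trackId, true);
    // Optionally adjust the local playback volume of the custom track.
    engine.adjustCustomAudioPlayoutVolume(trackId, 80);
    // To stop local playback later:
    engine.enableCustomAudioLocalPlayback(trackId, false);
}
```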
abstract int setAdvancedAudioOptions (AdvancedAudioOptions options)
Sets audio advanced options.
If you have advanced audio processing requirements, such as capturing and sending stereo audio, you can call this method to set advanced audio options.
Note: Call this method before joinChannel(String token, String channelId, int uid, ChannelMediaOptions options), enableAudio, and enableLocalAudio.
| options | The advanced options for audio. See AdvancedAudioOptions. |
|
abstract int setAVSyncSource (String channelId, int uid)
Sets audio-video synchronization on the publishing side.
A single user may use two separate devices to send audio and video streams respectively. To ensure that the audio and video received are synchronized in time, you can call this method on the video publishing device and pass in the channel name and user ID of the audio publishing device. The SDK uses the timestamp of the sent audio stream as the reference and automatically adjusts the video stream accordingly. This ensures time synchronization of the received audio and video even if the two publishing devices have different uplink network conditions (e.g., one using Wi-Fi and the other using 4G).
| channelId | The name of the channel where the audio publishing device is located. |
| uid | The user ID of the audio publishing device. |
Returns: 0: Success. < 0: Failure. See Error Codes for details and troubleshooting advice.
|
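A sketch of the call made on the video-publishing device, referencing the audio-publishing device. The channel name and uid below are hypothetical.

```java
// Sketch only: run on the device that publishes video. Assumes `engine`
// is an initialized RtcEngine and both devices are in channel "demo".
void syncWithAudioDevice(RtcEngine engine) {
    String audioChannelId = "demo"; // channel of the audio-publishing device (hypothetical)
    int audioUid = 1001;            // uid of the audio-publishing device (hypothetical)
    int ret = engine.setAVSyncSource(audioChannelId, audioUid);
    // 0: success; < 0: failure (see Error Codes).
}
```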
abstract int enableVideoImageSource (boolean enabled, ImageTrackOptions options)
Sets whether to replace the current video feeds with images when publishing video streams.
When publishing video streams, you can call this method to replace the current video feeds with custom images. Once you enable this function, you can select images to replace the video feeds through the ImageTrackOptions parameter. If you disable this function, the remote users see the video feeds that you publish. Call timing: Call this method after joining a channel.
| enabled | Whether to replace the current video feeds with custom images: true: Replace the current video feeds with custom images. false: Do not replace; publish the original video feeds. |
|
| options | Image configurations. See ImageTrackOptions. |
|
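A sketch of toggling the image source after joining a channel. The ImageTrackOptions field name `imageUrl` and the file path are assumptions for illustration.

```java
// Sketch only (imports omitted): replace the published video feed with a
// local image, then restore the real feed.
void publishImageInsteadOfVideo(RtcEngine engine) {
    ImageTrackOptions options = new ImageTrackOptions();
    options.imageUrl = "/sdcard/cover.png"; // local path of the replacement image (hypothetical)
    engine.enableVideoImageSource(true, options);
    // Later, restore the original video feed:
    engine.enableVideoImageSource(false, options);
}
```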
abstract int getNetworkType ()
Gets the type of the local network connection.
You can use this method to get the type of network in use at any stage.
|
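The integer this method returns maps to a connection type. The following runnable helper turns that value into a readable label for logging; the code-to-name mapping reflects the NETWORK_TYPE_* constants and is an assumption here, not part of the SDK.

```java
// Hypothetical helper, not part of the Agora SDK: converts the value
// returned by RtcEngine.getNetworkType() into a label for diagnostics.
public class NetworkTypeName {
    public static String of(int type) {
        switch (type) {
            case 0:  return "DISCONNECTED"; // network connection is down
            case 1:  return "LAN";
            case 2:  return "WIFI";
            case 3:  return "MOBILE_2G";
            case 4:  return "MOBILE_3G";
            case 5:  return "MOBILE_4G";
            case 6:  return "MOBILE_5G";
            default: return "UNKNOWN";      // -1 or any unrecognized value
        }
    }

    public static void main(String[] args) {
        System.out.println(of(2)); // prints "WIFI"
    }
}
```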
abstract long getNtpWallTimeInMs ()
Gets the current NTP (Network Time Protocol) time.
In the real-time chorus scenario, especially when the downlink connections are inconsistent due to network issues among multiple receiving ends, you can call this method to obtain the current NTP time as the reference time, in order to align the lyrics and music of multiple receiving ends and achieve chorus synchronization.
|
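The alignment step itself is simple arithmetic. The runnable example below makes no SDK calls: it assumes every receiver shares the NTP timestamp at which playback started (agreed out of band), so each can derive the same playback position from its own getNtpWallTimeInMs() reading, even when local wall clocks disagree.

```java
// Illustrative arithmetic only, no SDK calls: align playback across
// receivers using a shared NTP start time.
public class ChorusSync {
    /** Position (ms) into the track, given the current and start NTP times. */
    public static long playbackPositionMs(long ntpNowMs, long ntpStartMs) {
        return Math.max(0L, ntpNowMs - ntpStartMs);
    }

    public static void main(String[] args) {
        long ntpStartMs = 1_700_000_000_000L; // hypothetical shared start time
        long ntpNowMs   = 1_700_000_012_500L; // what getNtpWallTimeInMs() might return
        System.out.println(playbackPositionMs(ntpNowMs, ntpStartMs)); // prints 12500
    }
}
```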
abstract int startMediaRenderingTracing ()
Enables tracing the video frame rendering process.
The SDK starts tracing the rendering status of the video frames in the channel from the moment this method is successfully called and reports information about the event through the onVideoRenderingTracingResult callback. Applicable scenarios: Agora recommends that you use this method in conjunction with the UI settings (such as buttons and sliders) in your app to improve the user experience. For example, call this method when the user clicks the Join Channel button, and then get the time spent during the video frame rendering process through the onVideoRenderingTracingResult callback, so as to optimize the indicators accordingly.
Note: Call this method after the RtcEngine is initialized, and before or after calling joinChannel(String token, String channelId, int uid, ChannelMediaOptions options) to join the channel. You can call this method at an appropriate time according to the actual application scenario to set the starting point for tracing video rendering events.
|
abstract int enableInstantMediaRendering ()
Enables audio and video frame instant rendering.
After successfully calling this method, the SDK enables the instant frame rendering mode, which can speed up the first frame rendering after the user joins the channel. Applicable scenarios: Agora recommends that you enable this mode for the audience in a live streaming scenario. Call timing: Call this method before joining a channel.
Note: Call this method after the RtcEngine is initialized. Once you enable this mode, you can only disable it by calling destroy() to destroy the current RtcEngine object and creating a new one.
|
abstract |
Configures the AudioAttributes.
| AudioAttributes |
|
abstract boolean isFeatureAvailableOnDevice (int type)
Checks whether the device supports the specified advanced feature.
Checks whether the capabilities of the current device meet the requirements for advanced features such as virtual background and image enhancement. Applicable scenarios: Before using advanced features, you can check whether the current device supports these features based on the call result. This helps to avoid performance degradation or unavailable features when enabling advanced features on low-end devices. Based on the return value of this method, you can decide whether to display or enable the corresponding feature button, or notify the user when the device's capabilities are insufficient.
| type | The type of the advanced feature. |
|
Returns:
true: The current device supports the specified feature.
false: The current device does not support the specified feature.
|
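A sketch of the recommended pattern: gate a feature button on the capability check. The constant name `Constants.FEATURE_VIDEO_VIRTUAL_BACKGROUND` is assumed here for illustration.

```java
// Sketch only (imports omitted): enable the virtual-background button only
// when the device's capabilities meet the feature's requirements.
void updateVirtualBackgroundButton(RtcEngine engine, android.view.View button) {
    boolean supported =
            engine.isFeatureAvailableOnDevice(Constants.FEATURE_VIDEO_VIRTUAL_BACKGROUND);
    button.setEnabled(supported); // disable the feature on low-end devices
}
```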
abstract int sendAudioMetadata (byte[] metadata)
Sends audio metadata.
| metadata | The audio metadata. |
1.8.18