Agora RTC Objective-C API Reference
Public Member Functions
| virtual int | registerAudioFrameObserver (IAudioFrameObserver *observer)=0 |
| virtual int | registerVideoFrameObserver (IVideoFrameObserver *observer)=0 |
| virtual int | registerVideoEncodedFrameObserver (IVideoEncodedFrameObserver *observer)=0 |
| virtual int | registerFaceInfoObserver (IFaceInfoObserver *observer)=0 |
| virtual int | pushAudioFrame (IAudioFrameObserverBase::AudioFrame *frame, rtc::track_id_t trackId=0)=0 |
| virtual int | pullAudioFrame (IAudioFrameObserverBase::AudioFrame *frame)=0 |
| virtual int | setExternalVideoSource (bool enabled, bool useTexture, EXTERNAL_VIDEO_SOURCE_TYPE sourceType=VIDEO_FRAME, rtc::SenderOptions encodedVideoOption=rtc::SenderOptions())=0 |
| virtual int | setExternalAudioSource (bool enabled, int sampleRate, int channels, bool localPlayback=false, bool publish=true)=0 |
| virtual rtc::track_id_t | createCustomAudioTrack (rtc::AUDIO_TRACK_TYPE trackType, const rtc::AudioTrackConfig &config)=0 |
| virtual int | destroyCustomAudioTrack (rtc::track_id_t trackId)=0 |
| virtual int | setExternalAudioSink (bool enabled, int sampleRate, int channels)=0 |
| virtual int | enableCustomAudioLocalPlayback (rtc::track_id_t trackId, bool enabled)=0 |
| virtual int | pushVideoFrame (base::ExternalVideoFrame *frame, unsigned int videoTrackId=0)=0 |
| virtual int | pushEncodedVideoImage (const unsigned char *imageBuffer, size_t length, const agora::rtc::EncodedVideoFrameInfo &videoEncodedFrameInfo, unsigned int videoTrackId=0)=0 |
| virtual int | addVideoFrameRenderer (IVideoFrameObserver *renderer)=0 |
| virtual int | removeVideoFrameRenderer (IVideoFrameObserver *renderer)=0 |
The IMediaEngine class.
registerAudioFrameObserver() [pure virtual]
Registers an audio frame observer object.
Call this method to register an audio frame observer object (that is, to register the audio callbacks). When you need the SDK to trigger the onMixedAudioFrame, onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, or onEarMonitoringAudioFrame callback, use this method to register the observer. Call timing: call this method before joining a channel.
| observer | The observer instance. See IAudioFrameObserver. Set the value as NULL to release the instance. Agora recommends calling this method after receiving onLeaveChannel to release the audio observer object. |
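As an illustration, the sketch below shows the shape of a playback-frame observer. The IAudioFrameObserver interface and the AudioFrame fields are stubbed here for illustration only; the real definitions live in the SDK headers, so field names may differ.

```cpp
#include <cassert>

// Stub of the SDK's AudioFrame (illustrative field names, not authoritative).
struct AudioFrame {
    int samplesPerChannel = 0;  // samples per channel in this frame
    int channels = 1;           // 1 (mono) or 2 (stereo)
    int samplesPerSec = 48000;  // sample rate in Hz
    void* buffer = nullptr;     // raw PCM data
};

// Sketch of an observer: the SDK would invoke this for every mixed playback frame.
class MyAudioObserver {
public:
    int framesSeen = 0;
    // Return true to indicate the frame is valid and should be used.
    bool onPlaybackAudioFrame(AudioFrame& frame) {
        ++framesSeen;  // e.g. record or analyze the mixed playback audio here
        return true;
    }
};

// Registration sketch (hypothetical engine pointer; call before joining a channel):
//   MyAudioObserver observer;
//   mediaEngine->registerAudioFrameObserver(&observer);
//   ...after onLeaveChannel:
//   mediaEngine->registerAudioFrameObserver(nullptr);  // release the observer
```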
registerVideoFrameObserver() [pure virtual]
Registers a raw video frame observer object.
If you want to observe raw video frames (such as YUV or RGBA format), Agora recommends implementing one IVideoFrameObserver class with this method. When calling this method to register a video observer, you can register callbacks in the IVideoFrameObserver class as needed. After you successfully register the video frame observer, the SDK triggers the registered callbacks each time a video frame is received. Applicable scenarios: after registering the raw video observer, you can use the obtained raw video data in various video pre-processing scenarios, such as implementing virtual backgrounds and image enhancement yourself. Call timing: call this method before joining a channel.
Note: the video frames obtained through the observer carry width and height parameters, which the SDK may adapt in certain circumstances.
| observer | The observer instance. See IVideoFrameObserver. To release the instance, set the value as NULL. |
registerVideoEncodedFrameObserver() [pure virtual]
Registers a receiver object for the encoded video image.
If you only want to observe encoded video frames (such as H.264 format) without decoding and rendering the video, Agora recommends that you implement one IVideoEncodedFrameObserver class through this method.
| observer | The video frame observer object. See IVideoEncodedFrameObserver. |
registerFaceInfoObserver() [pure virtual]
Registers or unregisters a facial information observer.
You can call this method to register the onFaceInfo callback and receive the facial information processed by the Agora speech-driven extension. When calling this method to register a facial information observer, you can register callbacks in the IFaceInfoObserver class as needed. After the observer is successfully registered, the SDK triggers the registered callback when it captures facial information converted by the speech-driven extension. Applicable scenarios: the facial information processed by the Agora speech-driven extension is Blend Shape (BS) data that complies with the ARKit standard. You can further process the BS data with a third-party 3D rendering engine, for example, to drive an avatar's mouth movements in sync with speech.
See also: enableExtension.
| observer | The facial information observer. See IFaceInfoObserver. To unregister the observer, pass in NULL. |
pushAudioFrame() [pure virtual]
Pushes the external audio frame.
Call this method to push external audio frames through the audio track. Call timing: before calling this method to push external audio data, perform the following steps:
1. Call createCustomAudioTrack to create a custom audio track and get the audio track ID.
2. Call joinChannel(const char* token, const char* channelId, uid_t uid, const ChannelMediaOptions& options) to join the channel. In ChannelMediaOptions, set publishCustomAudioTrackId to the audio track ID that you want to publish, and set publishCustomAudioTrack to true.
| frame | The external audio frame. See AudioFrame. |
| trackId | The audio track ID. If you want to publish a custom external audio source, set this parameter to the ID of the corresponding custom audio track you want to publish. |
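The AudioFrame you push must be internally consistent: for a 10 ms frame, the samples per channel equal sampleRate / 100, and the buffer holds samplesPerChannel * channels * bytesPerSample bytes. A minimal sketch of that arithmetic (the helper names are ours, not the SDK's):

```cpp
#include <cstddef>

// Samples per channel in one 10 ms frame at the given sample rate.
int samplesPer10ms(int sampleRateHz) {
    return sampleRateHz / 100;
}

// Total buffer size in bytes for one 10 ms frame.
size_t frameBytes(int sampleRateHz, int channels, int bytesPerSample) {
    return static_cast<size_t>(samplesPer10ms(sampleRateHz)) * channels * bytesPerSample;
}

// Example: 48 kHz stereo 16-bit PCM -> 480 samples per channel, 1920 bytes per frame.
```

A push loop would fill a buffer of frameBytes(...) bytes, set the corresponding AudioFrame fields, and call pushAudioFrame roughly every 10 ms.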
pullAudioFrame() [pure virtual]
Pulls the remote audio data.
After a successful call of this method, the app pulls the decoded and mixed audio data for playback. Call timing: call this method after joining a channel. Before calling this method, call setExternalAudioSink (with enabled set to true) to enable and configure external audio rendering.
Both this method and the onPlaybackAudioFrame callback can be used to get audio data after remote mixing. Note that after calling setExternalAudioSink to enable external audio rendering, the app can no longer obtain data from the onPlaybackAudioFrame callback, so choose between this method and the callback based on your actual business requirements. The difference is as follows: with the onPlaybackAudioFrame callback, the SDK sends the audio data to the app through the callback, and any delay in processing the audio frames may result in audio jitter. This method is only used for retrieving audio data after remote mixing; if you need audio data from different processing stages, such as capture and playback, register the corresponding callbacks by calling registerAudioFrameObserver.
| frame | Pointer to an AudioFrame. |
setExternalVideoSource() [pure virtual]
Configures the external video source.
After calling this method to enable an external video source, you can call pushVideoFrame to push external video data to the SDK. Call timing: Call this method before joining a channel.
| enabled | Whether to use the external video source: true: use the external video source; false: do not use the external video source. |
| useTexture | Whether to use the external video frame in the Texture format: true: use the Texture format; false: do not use the Texture format. |
| sourceType | Whether the external video frame is encoded. See EXTERNAL_VIDEO_SOURCE_TYPE. |
| encodedVideoOption | Video encoding options. This parameter needs to be set if sourceType is ENCODED_VIDEO_FRAME. To set this parameter, contact technical support. |
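The required call order (configure the external source before joining the channel, then push frames) can be sketched with a stub engine that only records the call sequence; in a real app these calls go to the SDK's engine objects, and the signatures here are simplified stand-ins:

```cpp
#include <string>
#include <vector>

// Stub engine that records the order of calls, standing in for the real SDK objects.
struct StubEngine {
    std::vector<std::string> calls;
    int setExternalVideoSource(bool enabled, bool useTexture) {
        calls.push_back("setExternalVideoSource");
        return 0;
    }
    int joinChannel()    { calls.push_back("joinChannel");    return 0; }
    int pushVideoFrame() { calls.push_back("pushVideoFrame"); return 0; }
};

// Documented order: enable the external source first, then join, then push.
std::vector<std::string> externalVideoFlow(StubEngine& e) {
    e.setExternalVideoSource(true, false);  // raw (non-Texture) external frames
    e.joinChannel();
    e.pushVideoFrame();
    return e.calls;
}
```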
setExternalAudioSource() [pure virtual]
Sets the external audio source parameters.
Call timing: Call this method before joining a channel.
| enabled | Whether to enable the external audio source: true: enable the external audio source; false: disable the external audio source. |
| sampleRate | The sample rate (Hz) of the external audio source, which can be set as 8000, 16000, 32000, 44100, or 48000. |
| channels | The number of channels of the external audio source, which can be set as 1 (mono) or 2 (stereo). |
| localPlayback | Whether to play the external audio source locally: true: play it locally; false: do not play it locally. |
| publish | Whether to publish audio to the remote users: true: publish; false: do not publish. |
createCustomAudioTrack() [pure virtual]
Creates a custom audio track.
To publish a custom audio source, follow these steps:
1. Call this method to create a custom audio track and get the audio track ID.
2. Call joinChannel(const char* token, const char* channelId, uid_t uid, const ChannelMediaOptions& options) to join the channel. In ChannelMediaOptions, set publishCustomAudioTrackId to the audio track ID that you want to publish, and set publishCustomAudioTrack to true.
3. Call pushAudioFrame and specify trackId as the audio track ID set in step 2. You can then publish the corresponding custom audio source in the channel.
| trackType | The type of the custom audio track. See AUDIO_TRACK_TYPE. Attention: if AUDIO_TRACK_DIRECT is specified for this parameter, you must set publishMicrophoneTrack to false in ChannelMediaOptions when calling joinChannel(const char* token, const char* channelId, uid_t uid, const ChannelMediaOptions& options) to join the channel; otherwise, joining the channel fails and returns the error code -2. |
| config | The configuration of the custom audio track. See AudioTrackConfig. |
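The option wiring in the steps above can be sketched as follows. ChannelMediaOptions is stubbed with only the fields these steps mention (the real struct has many more), and the track ID value is illustrative:

```cpp
// Stand-in for rtc::track_id_t.
using track_id_t = unsigned int;

// Stub with only the fields relevant to this flow (the real struct has many more).
struct ChannelMediaOptions {
    track_id_t publishCustomAudioTrackId = 0;
    bool publishCustomAudioTrack = false;
    bool publishMicrophoneTrack = true;
};

// Step 2: wire the track ID returned by createCustomAudioTrack into the options.
ChannelMediaOptions optionsForCustomTrack(track_id_t trackId, bool directTrack) {
    ChannelMediaOptions options;
    options.publishCustomAudioTrackId = trackId;
    options.publishCustomAudioTrack = true;
    // Per the AUDIO_TRACK_DIRECT note above: the microphone track must be
    // disabled, otherwise joinChannel fails with error code -2.
    if (directTrack) options.publishMicrophoneTrack = false;
    return options;
}
```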
destroyCustomAudioTrack() [pure virtual]
Destroys the specified audio track.
| trackId | The custom audio track ID returned in createCustomAudioTrack. |
setExternalAudioSink() [pure virtual]
Sets the external audio sink.
After enabling the external audio sink, you can call pullAudioFrame to pull remote audio frames. The app can process the remote audio and play it with the audio effects that you want. Applicable scenarios: This method applies to scenarios where you want to use external audio data for playback. Call timing: Call this method before joining a channel.
Note: after enabling the external audio sink, the app can no longer obtain audio data from the onPlaybackAudioFrame callback.
| enabled | Whether to enable the external audio sink: true: enable the external audio sink; false: disable the external audio sink. |
| sampleRate | The sample rate (Hz) of the external audio sink, which can be set as 16000, 32000, 44100, or 48000. |
| channels | The number of audio channels of the external audio sink: 1 (mono) or 2 (stereo). |
enableCustomAudioLocalPlayback() [pure virtual]
Enables or disables the local playback of a custom audio track.
| trackId | The custom audio track ID. |
| enabled | Whether to enable the local playback of the external audio track: true: enable local playback; false: disable local playback. |
pushVideoFrame() [pure virtual]
Pushes the external raw video frame to the SDK through video tracks.
To publish a custom video source, follow these steps:
1. Call createCustomVideoTrack to create a video track and get the video track ID.
2. Call joinChannel(const char* token, const char* channelId, uid_t uid, const ChannelMediaOptions& options) to join the channel. In ChannelMediaOptions, set customVideoTrackId to the video track ID that you want to publish, and set publishCustomVideoTrack to true.
3. Call this method and specify videoTrackId as the video track ID set in step 2. You can then publish the corresponding custom video source in the channel.
Applicable scenarios: the SDK supports the ID3D11Texture2D video format since v4.2.3, which is widely used in game scenarios. When you need to push this type of video frame to the SDK, call this method, set the format of the frame to VIDEO_TEXTURE_ID3D11TEXTURE2D, and set the d3d11_texture_2d and texture_slice_index members.
Note: if you only need to push one custom video source, you can instead call the setExternalVideoSource method, and the SDK automatically creates a video track with videoTrackId set to 0.
DANGER: after calling this method, even if you stop pushing external video frames to the SDK, the custom video stream is still counted as video duration usage and incurs charges. Agora recommends that you take appropriate measures to avoid such billing, for example:
- Call destroyCustomVideoTrack to destroy the custom video track.
- Call muteLocalVideoStream to stop sending the video stream, or call updateChannelMediaOptions to set publishCustomVideoTrack to false.
| frame | The external raw video frame to be pushed. See ExternalVideoFrame. |
| videoTrackId | The video track ID returned by calling the createCustomVideoTrack method. Note: if you only need to push one custom video source, set videoTrackId to 0. |
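For a raw I420 frame, the buffer you attach to ExternalVideoFrame holds one full-resolution luma plane plus two quarter-size chroma planes, i.e. width * height * 3 / 2 bytes when tightly packed. A sketch of that sizing (the helper name is ours, and the commented field names are illustrative):

```cpp
#include <cstddef>

// Byte size of a tightly packed I420 (YUV 4:2:0) frame:
// one full-resolution Y plane plus two half-by-half chroma planes (U and V).
size_t i420FrameBytes(size_t width, size_t height) {
    const size_t luma = width * height;
    const size_t chroma = (width / 2) * (height / 2);  // per chroma plane
    return luma + 2 * chroma;
}

// Push sketch (hypothetical engine pointer; consult ExternalVideoFrame for
// the authoritative field names):
//   ExternalVideoFrame frame;
//   frame.stride = width;
//   frame.height = height;
//   frame.buffer = yuvData;  // i420FrameBytes(width, height) bytes
//   mediaEngine->pushVideoFrame(&frame, videoTrackId);
```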
pushEncodedVideoImage() [pure virtual]
Pushes the external encoded video frame to the SDK.
| imageBuffer | A pointer to the video image buffer. |
| length | The data length of the video image. |
| videoEncodedFrameInfo | Information about the encoded video frame. See EncodedVideoFrameInfo. |
| videoTrackId | The ID of the video track. |
addVideoFrameRenderer() [pure virtual]
@hide For internal usage only
removeVideoFrameRenderer() [pure virtual]
@hide For internal usage only