Using the SDK with your own UI
The Talk SDK includes a ready-to-use, built-in UI. However, if it doesn't suit your integration needs, you can build and use your own UI, connecting it with SDK APIs that encapsulate and deliver all of the Talk SDK's functionality.
This section describes how to use the Talk SDK API with your own UI. To customize the look and feel of the SDK's built-in UI, see Customizing the look.
Prerequisites
The Talk SDK is integrated in your application as described in Getting started. All the steps up to Making a call apply.
Before you start
The process of making a call with the Talk SDK consists of two steps:
- Prepare and configure the digital line setup. This includes ensuring microphone access permissions are granted, fetching the digital line's status and the recording consent, asking the user for recording consent if required, and verifying agent availability
- Make the call and respond to its status changes. This includes displaying call information to the user and letting the user perform call-related actions
In the Talk SDK, these two responsibilities are handled by the CallConfigurationScreen and CallScreen view controllers. For a sample implementation, see Making a call using the out-of-the-box UI.
When using the SDK with a custom UI, you have the option of managing one or both responsibilities. You can use one of two approaches:
- An all-custom approach, where you implement the responsibilities of both the CallConfigurationScreen and the CallScreen from scratch
- A mixed approach, where you implement only one of the responsibilities from scratch and use the SDK for the other. For example, you can implement your own CallConfigurationScreen but use the CallScreen from the SDK. Alternatively, you can use the CallConfigurationScreen from the SDK but implement your own CallScreen from scratch
The following sections describe how to implement the required functionality for the CallConfigurationScreen and the CallScreen using system and SDK APIs.
If you want to use the mixed approach, see Making a call using out-of-the-box UI for information on using and customizing the part provided by the SDK.
Implementing a custom call configuration screen
You can build your own call configuration screen functionality using system and SDK APIs. Before you can make a call, you must perform the following steps in the setup flow:
- Check for microphone access
- Request microphone access if applicable
- Check the line status
- Start a call
Checking for microphone access
Microphone access permission is required for making a call. The UI flow should check for the following permission values:
Microphone permission | Required action |
---|---|
undetermined / AVAudioSessionRecordPermissionUndetermined | Make a request for permission using the system API |
denied / AVAudioSessionRecordPermissionDenied | Inform the user that the call cannot be made and the user can grant permission in the system's Settings > Privacy > Microphone |
granted / AVAudioSessionRecordPermissionGranted | No additional action is required. The flow can move to the next step |
Use the system AVFoundation framework to check for microphone access permission. The following code sample shows how to check the current state of microphone access permissions:
Swift
private func checkMicrophonePermission() {
    switch AVAudioSession.sharedInstance().recordPermission {
    case .undetermined:
        // may request microphone access using:
        // AVAudioSession.sharedInstance().requestRecordPermission
        break
    case .denied:
        // access denied, inform the user in the UI that the call
        // cannot be made until microphone access is allowed
        // in Settings -> Privacy -> Microphone
        break
    case .granted:
        // access granted, may proceed to the next step of setting up the call
        break
    @unknown default:
        break
    }
}
You will need to import AVFoundation for the above Swift example.
Objective-C
- (void)checkMicrophonePermission
{
    switch ([AVAudioSession sharedInstance].recordPermission) {
        case AVAudioSessionRecordPermissionUndetermined:
            // may request microphone access using:
            // [[AVAudioSession sharedInstance] requestRecordPermission:]
            break;
        case AVAudioSessionRecordPermissionDenied:
            // access denied, inform the user in the UI that the call
            // cannot be made until microphone access is allowed
            // in Settings -> Privacy -> Microphone
            break;
        case AVAudioSessionRecordPermissionGranted:
            // access granted, may proceed to the next step of setting up the call
            break;
    }
}
You will need to import AVFoundation for the above Objective-C example.
Requesting microphone access
If the user has not yet answered the microphone permission request, the permission has a value of undetermined / AVAudioSessionRecordPermissionUndetermined. You can use the system API to prompt the user to grant access to the microphone.
Swift
private func requestMicrophonePermission() {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        if granted {
            // access granted, may proceed to the next step of setting up the call
        } else {
            // access denied, inform the user in the UI that the call
            // cannot be made until microphone access is allowed
            // in Settings -> Privacy -> Microphone
        }
    }
}
You will need to import AVFoundation for the above Swift example.
Objective-C
- (void)requestMicrophonePermission
{
    [[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
        if (granted) {
            // access granted, may proceed to the next step of setting up the call
        } else {
            // access denied, inform the user in the UI that the call
            // cannot be made until microphone access is allowed
            // in Settings -> Privacy -> Microphone
        }
    }];
}
You will need to import AVFoundation for the above Objective-C example.
For more details, see the Apple documentation for the requestRecordPermission(_:) method.
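On iOS 17 and later, Apple has moved record-permission handling from AVAudioSession to AVAudioApplication. The sketch below shows the newer API as an optional alternative; the AVAudioSession calls above continue to work, and the availability annotation and dispatch to the main queue are illustrative choices, not SDK requirements.

```swift
import AVFoundation

// Sketch: iOS 17+ alternative using AVAudioApplication instead of AVAudioSession
@available(iOS 17.0, *)
private func requestMicrophonePermissionModern() {
    switch AVAudioApplication.shared.recordPermission {
    case .undetermined:
        AVAudioApplication.requestRecordPermission { granted in
            // the completion handler may run off the main thread,
            // so hop back before touching the UI
            DispatchQueue.main.async {
                // granted == true: proceed with the call setup
                // granted == false: show the Settings hint described above
            }
        }
    case .denied:
        break // direct the user to Settings > Privacy > Microphone
    case .granted:
        break // proceed to the next step of setting up the call
    @unknown default:
        break
    }
}
```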
Checking the line status and recording consent requirement
Use the lineStatus(digitalLine:completion:) method to determine whether any agents are available for the digital line. We recommend hiding the call button when no agents are available to improve the user experience. In addition to agent availability, this method returns the current recording consent configuration for the digital line.
Swift
talk.lineStatus(digitalLine: "digitalLineNickname") { (result) in
    let isAgentAvailable: Bool
    let recordingConsent: RecordingConsent
    switch result {
    case .success(let lineStatus):
        isAgentAvailable = lineStatus.agentAvailable
        recordingConsent = lineStatus.recordingConsent
    case .failure(let agentStatusError):
        isAgentAvailable = false
        recordingConsent = .unknown
        // handle agentStatusError
    }
    // update the UI according to isAgentAvailable & recordingConsent
}
Objective-C
[talk lineStatusWithDigitalLine:@"digitalLineNickname" completion:^(id<LineStatus> _Nullable lineStatus, NSError * _Nullable error) {
    BOOL isAgentAvailable = NO;
    RecordingConsent recordingConsent = RecordingConsentUnknown;
    if (lineStatus != nil) {
        isAgentAvailable = lineStatus.agentAvailable;
        recordingConsent = lineStatus.recordingConsent;
    } else {
        // handle error
    }
    // update the UI according to isAgentAvailable & recordingConsent
}];
When the lineStatus(digitalLine:completion:) method completes successfully, the returned LineStatus object contains the following properties:
LineStatus property | Description |
---|---|
agentAvailable | Boolean value informing of agent availability |
recordingConsent | Enumeration describing the line's setup for recording consent |
Recording consent configuration
The digital line's recording consent configuration may have the following values:
Recording consent | Description |
---|---|
optIn / RecordingConsentOptIn | Call recording is disabled by default but the end user has the option to opt in |
optOut / RecordingConsentOptOut | Call recording is enabled by default but the end user has the option to opt out |
unknown / RecordingConsentUnknown | Call recording is not defined in the Talk settings |
When the recording consent for the digital line is defined as unknown, no UI needs to be presented to the end user. In the other two cases, the UI needs to inform the end user of the line's configuration and provide the user with the option to either opt in to or opt out of the call recording.
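As a minimal sketch of the decision above, the helper below maps the line's RecordingConsent configuration to the answer the call will later be started with. The RecordingConsentAnswer type and its case names (.optedIn, .optedOut, .unknown) are assumptions to be checked against the SDK headers, and showConsentAlert is a hypothetical UI helper.

```swift
// Sketch, assuming RecordingConsentAnswer cases are spelled .optedIn/.optedOut/.unknown
private func promptForConsent(configuration: RecordingConsent,
                              completion: @escaping (RecordingConsentAnswer) -> Void) {
    switch configuration {
    case .unknown:
        // no consent UI is required; pass the unknown answer through
        completion(.unknown)
    case .optIn:
        // recording is off by default; ask whether the user wants to opt in
        showConsentAlert(message: "Allow this call to be recorded?",
                         onAccept: { completion(.optedIn) },
                         onDecline: { completion(.optedOut) })
    case .optOut:
        // recording is on by default; offer the chance to opt out
        showConsentAlert(message: "This call is recorded. Opt out of recording?",
                         onAccept: { completion(.optedOut) },
                         onDecline: { completion(.optedIn) })
    @unknown default:
        completion(.unknown)
    }
}
```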
Starting a call
After completing the previous checks and gathering the end user's consent for call recording, the call can be started. You have two options:
- Create and present the SDK call screen directly, as seen in the presentCallScreen(recordingConsentAnswer:) function in the second part of the example in Making a call using the out-of-the-box UI
- Implement a custom call screen for starting and managing the call. See the next section
Implementing a custom call screen
The call screen's responsibility is to start a call and present call-related information to the user. It also has to let the user perform call-related actions. The call screen doesn't necessarily have to be represented by a full-screen view controller. How you represent it in your UI depends on your use case.
Making a call
Use the call(callData:statusChangeHandler:) function of a Talk instance to start the call. The function takes as a parameter a CallData protocol-conforming object. The SDK's TalkCallData fulfils that requirement. The object has the following properties:
- digitalLine - the digital line's nickname
- recordingConsentAnswer - the end user's answer to the recording consent question, relative to the default recording configuration for that digital line. The recordingConsentAnswer should state whether the end user opted in to or opted out of the call recording
Note: If lineStatus(digitalLine:completion:) returns unknown for the digital line's recordingConsent configuration, the CallData should be provided with the unknown value. This reflects the situation where the digital line either does not support recording, or its recording consent configuration was not set up.
The second parameter of call(callData:statusChangeHandler:) is a handler that enables the application to react properly to various call state transitions and reflect them in the UI. For details and a code sample, see Handling call states.
Swift
let callData = TalkCallData(digitalLine: "digitalLineNickname",
                            recordingConsentAnswer: recordingConsentAnswer)
// the returned `TalkCall` object needs to be held strongly for the duration
// of the call, as deallocating it too early will stop the call
self.talkCall = talk.call(callData: callData,
                          statusChangeHandler: onCallStatusChange(status:error:))
Objective-C
ZDKTalkCallData *callData = [[ZDKTalkCallData alloc] initWithDigitalLine:@"digitalLineNickname"
                                                  recordingConsentAnswer:answer];
__weak typeof(self) weakSelf = self;
// the returned `TalkCall` object needs to be held strongly for the duration
// of the call, as deallocating it too early will stop the call
self.talkCall = [self.talk callWithCallData:callData statusChangeHandler:^(enum CallStatus status, NSError * _Nullable error) {
    [weakSelf __onCallStatusChange:status error:error];
}];
Note: When starting a call, a TalkCall instance is returned. You must store this object and control its lifecycle according to the ongoing call's duration. Failing to do so will result in the call stack being deallocated and the call being dropped. A reference to this instance is also necessary to enable any of the user actions that may be performed during the call, such as changing the audio source, muting the microphone, or hanging up.
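One way to satisfy this lifecycle requirement is to let the view controller that represents the call screen own the TalkCall. The sketch below assumes the Talk, TalkCallData, TalkCall, CallStatus, and TalkCallError types from this guide; the class itself and its wiring are illustrative.

```swift
import UIKit

// Sketch: a call screen that holds the call strongly for its whole duration
final class CustomCallViewController: UIViewController {
    private let talk: Talk
    // Strong reference: releasing this property would tear down the call
    private var talkCall: TalkCall?

    init(talk: Talk) {
        self.talk = talk
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func start(callData: TalkCallData) {
        talkCall = talk.call(callData: callData,
                             statusChangeHandler: onCallStatusChange(status:error:))
    }

    private func onCallStatusChange(status: CallStatus, error: TalkCallError?) {
        if status == .disconnected || status == .failed {
            // the call is over; it is now safe to release the instance
            talkCall = nil
        }
    }
}
```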
Handling call states
The StatusChangeHandler may be provided as an inline closure, but in the example below it's extracted as a separate function for clarity. It's called on every call state change with an updated status. The error is optional: it is nil when no failure occurs, and it contains an error object for the disconnected and failed statuses.
Swift
private func onCallStatusChange(status: CallStatus, error: TalkCallError?) {
    switch status {
    case .connecting:
        // The call is connecting
        break
    case .connected:
        // The call has been connected
        break
    case .disconnected:
        // The call has disconnected, either by the end user, the agent, or an error condition
        // May be due to an `error`, check if not nil
        break
    case .failed:
        // The call has failed to connect
        // May be due to an `error`, check if not nil
        break
    case .reconnecting:
        // The call starts to reconnect
        break
    case .reconnected:
        // The call has reconnected
        break
    }
}
Objective-C
- (void)__onCallStatusChange:(CallStatus)status error:(NSError *)error
{
    switch (status) {
        case CallStatusConnecting:
            // The call is connecting
            break;
        case CallStatusConnected:
            // The call has been connected
            break;
        case CallStatusDisconnected:
            // The call has disconnected, either by the end user, the agent, or an error condition
            // May be due to an `error`, check if not nil
            break;
        case CallStatusFailed:
            // The call has failed to connect
            // May be due to an `error`, check if not nil
            break;
        case CallStatusReconnecting:
            // The call starts to reconnect
            break;
        case CallStatusReconnected:
            // The call has reconnected
            break;
    }
}
Muting or unmuting the call
You can mute or unmute the ongoing call. Use the getter and setter of the muted property of the TalkCall instance to check and change the mute state of the call.
Swift
// get the current mute state
let isMuted = talkCall.muted
// toggle the mute state to the opposite
talkCall.muted = !isMuted
Objective-C
// get the current mute state
BOOL isMuted = self.talkCall.muted;
// toggle the mute state to the opposite
self.talkCall.muted = !isMuted;
Changing the audio output of the call
You can change the audio output of the call from the device's headset to its built-in speakers. By default, the audio is played through the headset. Use the getter and setter of the audioOutput property of the TalkCall instance to check and change the audio output of the current call.
The AudioOutput enumeration has the following values:
AudioOutput values | Description |
---|---|
speaker | The audio is routed through the built-in device speakers |
headset | The audio is routed through the device headset |
Swift
// check if current audio output is set to headset
let isAudioOutputHeadset = talkCall.audioOutput == .headset
// toggle the current audio output to other value
talkCall.audioOutput = isAudioOutputHeadset ? .speaker : .headset
Objective-C
// check if current audio output is set to headset
BOOL isAudioOutputHeadset = self.talkCall.audioOutput == AudioOutputHeadset;
// toggle the current audio output to other value
self.talkCall.audioOutput = isAudioOutputHeadset ? AudioOutputSpeaker : AudioOutputHeadset;
Changing the audio routing of the call
You can check and change the currently selected audio routing option for the ongoing call. By default, the audio routing is set to a built-in option. Depending on the device type, different built-in options may be available. If available, external Bluetooth-connected devices may be selected for audio routing at the system level (this usually happens as they connect). Use the getter and setter of the audioRouting property of the TalkCall instance to check and change the audio routing setup of the current call.
The AudioRoutingOption object has the following properties:
AudioRoutingOption properties | Description |
---|---|
name | Localized name of the routing option |
type | The routing option's type, an AudioRoutingType value |
The AudioRoutingType enumeration has the following values:
AudioRoutingType values | Description |
---|---|
builtIn | Used for routing options that use the built-in speakers and microphone |
bluetooth | Used for routing options that use Bluetooth connectivity |
To access the list of currently available audio routing options, use the availableAudioRoutingOptions property of the TalkCall instance.
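Putting the two properties together, the sketch below lists the available options and switches to a Bluetooth route when one exists. It assumes the AudioRoutingType case is spelled bluetooth and that audioRouting accepts an AudioRoutingOption; verify both against the SDK headers.

```swift
// Sketch: enumerate routing options and prefer a Bluetooth route if present
private func routeToBluetoothIfAvailable(call: TalkCall) {
    let options = call.availableAudioRoutingOptions
    for option in options {
        // `name` is the localized label suitable for display in a picker
        print("available route: \(option.name)")
    }
    if let bluetooth = options.first(where: { $0.type == .bluetooth }) {
        call.audioRouting = bluetooth
    }
}
```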
Note: As an alternative for handling the audio input/output routing, you can use AVRoutePickerView from the AVKit framework. See AVRoutePickerView in the Apple documentation.
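If you take the AVRoutePickerView route, embedding it is a few lines of UIKit; the system then presents and applies the route selection itself, with no Talk SDK call involved. The container view and styling below are illustrative.

```swift
import AVKit
import UIKit

// Sketch: embed the system route picker in a custom call screen
private func addRoutePicker(to containerView: UIView) {
    let picker = AVRoutePickerView(frame: containerView.bounds)
    picker.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    // tint applied while an external route is active (optional styling)
    picker.activeTintColor = .systemBlue
    containerView.addSubview(picker)
}
```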
Disconnecting the call
To disconnect the call, use the disconnect() function. After calling it, the TalkCall instance may be released and deallocated.
Swift
talkCall.disconnect()
Objective-C
[self.talkCall disconnect];