Advanced Meeting Controls
- Introduction
- Available Controls
- Screen Recording
- Lock Meeting
- Sending DTMF Tones
- Transcription
- Using Phone Audio
- Effects: Meeting Audio & Video
This wiki discusses some of the advanced meeting controls available in the Webex Web SDK for Meetings.
The advanced meeting controls can be classified as follows:
- Screen Recording
- Lock Meeting
- Sending DTMF Tones
- Transcription
- Using Phone Audio
- Effects for Audio and Video
## Screen Recording

Once a meeting has been created and joined, the screen recording controls can be invoked.

To start a recording, call the following API available on the meeting object:

```javascript
await meeting.startRecording();
```

| Asynchronous | Yes |
|---|---|
| Parameters | None |
| Returns | `Promise<undefined>` |
Once a recording has started, it can be paused and later resumed. To pause it:

```javascript
await meeting.pauseRecording();
```

| Asynchronous | Yes |
|---|---|
| Parameters | None |
| Returns | `Promise<undefined>` |
A recording that has been paused can be resumed using the following API:

```javascript
await meeting.resumeRecording();
```

| Asynchronous | Yes |
|---|---|
| Parameters | None |
| Returns | `Promise<undefined>` |
A recording that has been started can be stopped as shown below:

```javascript
await meeting.stopRecording();
```

| Asynchronous | Yes |
|---|---|
| Parameters | None |
| Returns | `Promise<undefined>` |
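The four recording calls above only make sense in certain orders (you cannot pause before starting, or resume without pausing). A small wrapper that tracks the local recording state can fail fast on invalid calls; this is a hypothetical sketch, not part of the SDK — only the four `meeting.*Recording()` methods come from this page.

```javascript
// Hypothetical wrapper around the recording APIs shown above.
// Tracks local state so invalid calls (e.g. pausing before starting)
// fail fast instead of hitting the backend.
const TRANSITIONS = {
  idle:      { start: 'recording' },
  recording: { pause: 'paused', stop: 'idle' },
  paused:    { resume: 'recording', stop: 'idle' },
};

class RecordingController {
  constructor(meeting) {
    this.meeting = meeting;
    this.state = 'idle';
  }

  async _apply(action, call) {
    const next = TRANSITIONS[this.state] && TRANSITIONS[this.state][action];
    if (!next) {
      throw new Error(`Cannot ${action} while ${this.state}`);
    }
    await call();      // delegate to the SDK
    this.state = next; // commit only after the call succeeds
  }

  start()  { return this._apply('start',  () => this.meeting.startRecording()); }
  pause()  { return this._apply('pause',  () => this.meeting.pauseRecording()); }
  resume() { return this._apply('resume', () => this.meeting.resumeRecording()); }
  stop()   { return this._apply('stop',   () => this.meeting.stopRecording()); }
}
```

Committing the state change only after the SDK call resolves keeps the local state honest when a call rejects.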
## Lock Meeting

Locking a meeting means that no other participant can join unless they are let in. A meeting can be locked only by a participant who is also a moderator of the meeting.

To lock a meeting, invoke the following API:

```javascript
await meeting.lockMeeting();
```

| Asynchronous | Yes |
|---|---|
| Parameters | None |
| Returns | `Promise<undefined>` |
To unlock a locked meeting, invoke the following API:

```javascript
await meeting.unlockMeeting();
```

| Asynchronous | Yes |
|---|---|
| Parameters | None |
| Returns | `Promise<undefined>` |
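Because only a moderator can lock or unlock, both calls can reject for other participants. A hedged sketch of a helper that surfaces that failure cleanly (the helper name is hypothetical; only `lockMeeting`/`unlockMeeting` come from this page):

```javascript
// Hypothetical helper: lock or unlock the meeting and report whether
// the request succeeded. Only a moderator's request will succeed.
async function setMeetingLock(meeting, locked) {
  try {
    if (locked) {
      await meeting.lockMeeting();
    } else {
      await meeting.unlockMeeting();
    }
    return { locked, ok: true };
  } catch (error) {
    // e.g. the local participant is not the moderator
    return { locked: !locked, ok: false, error };
  }
}
```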
## Sending DTMF Tones

The Meetings SDK can be used to send DTMF tones in a meeting, as follows:

```javascript
await meeting.sendDTMF(DTMFStringToBeSent);
```

| Asynchronous | Yes |
|---|---|
| Parameters | `DTMFStringToBeSent` (String) — the string of DTMF tones to send |
| Returns | `Promise<undefined>` |
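Standard DTMF is limited to the 16 symbols `0-9`, `*`, `#`, and `A-D`. A small pre-check (hypothetical, not part of the SDK) can reject malformed input before calling `sendDTMF`:

```javascript
// Hypothetical pre-check: DTMF tones are limited to the 16 standard
// symbols (0-9, *, #, A-D). Reject anything else before calling the SDK.
function isValidDTMF(tones) {
  return typeof tones === 'string' && tones.length > 0 && /^[0-9A-D*#]+$/.test(tones);
}

async function sendTones(meeting, tones) {
  if (!isValidDTMF(tones)) {
    throw new Error(`Invalid DTMF string: ${tones}`);
  }
  await meeting.sendDTMF(tones);
}
```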
## Transcription

Meeting transcription happens on the backend, and the transcription can be received by listening to an event on the Meetings SDK. To start receiving transcription:
- The user should initialize the Webex object with the `enableAutomaticLLM` meeting config
- The meeting host should have the Webex Assistant enabled
- The user should have joined the meeting
When a user initializes the Webex object with this configuration, the Webex Meetings SDK automatically establishes a socket connection between the browser and the transcription backend:

```javascript
webex.init({
  meetings: {
    enableAutomaticLLM: true
  }
});
```
Listen for the following event to determine whether the socket connection has been successfully established:

```javascript
meeting.on('meeting:transcription:connected', () => {
  console.log('Transcription Websocket is connected');
});
```

Apps that wish to enable transcription by default can utilize this event, i.e., the app can enable transcription right after receiving it.
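A sketch of that pattern, wiring the connected event straight to `startTranscription` (the event and method names come from this page; the wrapper function and its error handling are assumptions):

```javascript
// Start transcription as soon as the transcription socket connects.
function enableTranscriptionByDefault(meeting, options = {}) {
  meeting.on('meeting:transcription:connected', async () => {
    try {
      await meeting.startTranscription(options);
    } catch (error) {
      console.error('Failed to start transcription', error);
    }
  });
}
```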
To start receiving transcription while in a meeting, use the code snippet below:

```javascript
await meeting.startTranscription(options);
```

This table covers details about the options:

| Parameter Name | Description | Required | Sample value | Type |
|---|---|---|---|---|
| options | Configuration object to be provided while starting transcription | No | `{ spokenLanguage?: String }` | Object |
When you start the transcription, you will receive the event highlighted below. The payload contains the lists of supported spoken and caption languages.

```javascript
meeting.on('meeting:receiveTranscription:started', (payload) => {
  console.log(payload.captionLanguages);
  console.log(payload.spokenLanguages);
});
```
`captionLanguages` and `spokenLanguages` are two arrays containing the language codes for the supported languages. The language codes conform to the ISO 639 standard.
Listen to the event highlighted below. The payload contains the captions from the meeting audio.

```javascript
meeting.on('meeting:caption-received', (payload) => {
  // use payload to display captions
});
```
Here's an example of the payload received from the meeting audio:

```json
{
  "captions": [
    {
      "id": "88e1b0c9-7483-b865-f0bd-a685a5234943",
      "isFinal": true,
      "text": "Hey, everyone.",
      "currentSpokenLanguage": "en",
      "timestamp": "1:22",
      "speaker": {
        "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
        "name": "Name"
      }
    },
    {
      "id": "e8fd9c60-1782-60c0-92e5-d5b22c80df2b",
      "isFinal": true,
      "text": "That's awesome.",
      "currentSpokenLanguage": "en",
      "timestamp": "1:26",
      "speaker": {
        "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
        "name": "Name"
      }
    },
    {
      "id": "be398e11-cf08-92e7-a42d-077ecd60aeea",
      "isFinal": true,
      "text": "आपका नाम क्या है?",
      "currentSpokenLanguage": "hi",
      "timestamp": "1:55",
      "speaker": {
        "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
        "name": "Name"
      }
    },
    {
      "id": "84adc1a7-b3c3-5a49-0588-aa787b1437eb",
      "isFinal": true,
      "translations": {
        "en": "What is your name?"
      },
      "text": "आपका नाम क्या है?",
      "currentSpokenLanguage": "hi",
      "timestamp": "2:11",
      "speaker": {
        "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
        "name": "Name"
      }
    },
    {
      "id": "84c89387-cd5d-ce15-1867-562c0a91155f",
      "isFinal": true,
      "translations": {
        "hi": "तुम्हारा नाम क्या है?"
      },
      "text": "What's your name?",
      "currentSpokenLanguage": "en",
      "timestamp": "2:46",
      "speaker": {
        "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
        "name": "Name"
      }
    }
  ],
  "interimCaptions": {
    "88e1b0c9-7483-b865-f0bd-a685a5234943": [],
    "e8fd9c60-1782-60c0-92e5-d5b22c80df2b": [],
    "be398e11-cf08-92e7-a42d-077ecd60aeea": [],
    "84adc1a7-b3c3-5a49-0588-aa787b1437eb": [],
    "84c89387-cd5d-ce15-1867-562c0a91155f": []
  }
}
```
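The payload carries both finalized captions (`captions`, keyed by `id`) and in-flight partials (`interimCaptions`). A rendering layer typically keeps one entry per caption `id`, letting final text replace interim text as it arrives. A minimal sketch of that bookkeeping (the helper is hypothetical; the payload shape comes from the example above):

```javascript
// Hypothetical caption store: one display line per caption id,
// with final text taking precedence over interim text.
function buildCaptionList(payload) {
  const byId = new Map();
  for (const caption of payload.captions) {
    byId.set(caption.id, caption);
  }
  for (const [id, partials] of Object.entries(payload.interimCaptions)) {
    // Only show interim text while no final caption exists for this id.
    if (!byId.has(id) && partials.length > 0) {
      byId.set(id, partials[partials.length - 1]);
    }
  }
  // Render as "1:22 Name: Hey, everyone." (translations, when present,
  // could be preferred here instead of the raw text).
  return [...byId.values()].map(
    (c) => `${c.timestamp} ${c.speaker.name}: ${c.text}`
  );
}
```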
During the meeting, if you'd like to change the spoken language, use the function below:

```javascript
const currentSpokenLanguage = await meeting.setSpokenLanguage(selectedLanguage);
```

The `selectedLanguage` is one of the language codes received when the transcription started. Once the spoken language is set, speech in that language is captioned in the same language; if the caption language has been set to a different language, captions are translated into that caption language instead.
During the meeting, if you'd like to change the caption language, use the function below:

```javascript
const currentCaptionLanguage = await meeting.setCaptionLanguage(selectedLanguage);
```

Here, `selectedLanguage` is one of the language codes received at the start of the transcription. Once a caption language is chosen, the system translates all speech, regardless of the spoken language, into the selected language.
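Since both setters expect a code from the lists announced in the `meeting:receiveTranscription:started` payload, it can be worth validating the selection first. A hedged sketch (the helper names are hypothetical; `setSpokenLanguage` comes from this page):

```javascript
// Hypothetical guard: only pass language codes that the backend announced
// in the 'meeting:receiveTranscription:started' payload.
function pickSupportedLanguage(supported, requested, fallback = 'en') {
  return supported.includes(requested) ? requested : fallback;
}

async function changeSpokenLanguage(meeting, spokenLanguages, requested) {
  const code = pickSupportedLanguage(spokenLanguages, requested);
  return meeting.setSpokenLanguage(code);
}
```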
To stop receiving transcription from the SDK, use the function below:

```javascript
meeting.stopTranscription();
```
## Using Phone Audio

In a meeting, when the audio is unclear or there is trouble using the device audio (Desktop app, Mobile app, or Web app), Webex offers the option to dial in to the meeting via PSTN calls. The following controls are available:
- Use Phone Audio
- Disconnect Phone Audio

More information about these controls and their usage can be found at: Webex Web SDK Wiki: Use phone audio for SDK meeting Dial IN OUT
## Effects: Meeting Audio & Video

Webex Meetings currently offers three effects:
- Background Noise Removal for Webex Audio
- Background Blur for Webex Video
- Virtual Background for Webex Video

To enable these in a meeting, a valid Webex access token is required. More information about these features and their usage can be found here: Webex Web SDK Wiki: Audio & Video Effects.