
Advanced Meeting Controls


Introduction

This wiki discusses some of the advanced meeting controls available in the Webex Web SDK for Meetings.

Available Controls

The advanced meeting controls can be classified as follows:

  • Screen Recording
  • Lock a Meeting
  • Send DTMF Tones
  • Transcription
  • Using Phone Audio
  • Effects for Audio and Video

Screen Recording

Once the meeting has been created and joined, the screen recording controls can be invoked.

Start a Recording

To start a recording, call the following API, which is available on the meeting object:

await meeting.startRecording();
Asynchronous: Yes
Parameters: None
Returns: Promise<undefined>

Pause a Recording

Once a recording has started, it can be paused and later resumed. Below is a code example of how to pause it:

await meeting.pauseRecording();
Asynchronous: Yes
Parameters: None
Returns: Promise<undefined>

Resume a Recording

A recording that has been started and then paused can be resumed using the following API:

await meeting.resumeRecording();
Asynchronous: Yes
Parameters: None
Returns: Promise<undefined>

Stop a Recording

A recording that has been started can be stopped as shown below:

await meeting.stopRecording();
Asynchronous: Yes
Parameters: None
Returns: Promise<undefined>
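
Taken together, a typical recording flow looks like the sketch below. It assumes a meeting object that has already been created and joined; the try/catch handling is illustrative, not something the SDK requires.

// Illustrative recording lifecycle, assuming `meeting` has been created and joined.
async function recordMeetingSegment(meeting) {
  try {
    await meeting.startRecording();

    // ...pause and later resume around a portion that should not be recorded
    await meeting.pauseRecording();
    await meeting.resumeRecording();

    // ...finally, stop the recording
    await meeting.stopRecording();
  } catch (error) {
    // The promise may reject if the action is not permitted for this participant.
    console.error('Recording control failed:', error);
  }
}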

Lock Meeting

Locking a meeting means that no other participant can join the meeting unless they are let in. A meeting can be locked only by a participant who is also a moderator of the meeting.

Locking a meeting

To lock a meeting, invoke the following API:

await meeting.lockMeeting();
Asynchronous: Yes
Parameters: None
Returns: Promise<undefined>

Unlocking a meeting

To unlock a locked meeting, invoke the following API:

await meeting.unlockMeeting();
Asynchronous: Yes
Parameters: None
Returns: Promise<undefined>
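
As a sketch, a simple lock toggle could look like the following. It assumes the local participant is the meeting moderator, and it tracks the lock state in application code since these calls resolve with undefined.

// Illustrative lock toggle; the lock state is tracked locally because
// lockMeeting()/unlockMeeting() resolve with undefined.
let meetingLocked = false;

async function toggleLock(meeting) {
  if (meetingLocked) {
    await meeting.unlockMeeting();
    meetingLocked = false;
  } else {
    await meeting.lockMeeting();
    meetingLocked = true;
  }
}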

Sending DTMF Tones

The meetings SDK can be used to send DTMF tones in a meeting, as shown below:

await meeting.sendDTMF(DTMFStringToBeSent);
Asynchronous: Yes
Parameters:

Sl. No  Parameter Name      Parameter Type  Mandatory  Description
1       DTMFStringToBeSent  String          Yes        The string representing the DTMF tone to be sent

Returns: Promise<undefined>
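
For example, navigating an IVR-style menu from within the meeting might look like this; the digit string is purely illustrative.

// Send an illustrative DTMF sequence; digits, '*' and '#' are typical tone characters.
await meeting.sendDTMF('1234#');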

Transcription

Meeting transcription happens in the backend, and the transcription can be received by listening to an event on the meetings SDK. To start receiving transcription:

  • The meeting host should have the Webex Assistant enabled
  • The user should have joined the meeting

Enable Transcription

To start receiving transcription while in a meeting, use the code snippet below.

await meeting.startTranscription(options);

This table covers details about the options.

Parameter Name: options
Description: Configuration object to be provided while starting transcription
Required: No
Type: Object
Value: { spokenLanguage?: String }
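
For example, to start transcription and request English as the spoken language (the spokenLanguage value is illustrative):

// spokenLanguage is optional; 'en' is shown only as an example value.
await meeting.startTranscription({ spokenLanguage: 'en' });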

Set Caption & Spoken Languages

When you start the transcription, you will receive the event highlighted below. The payload contains a list of supported Spoken and Caption Languages.

meeting.on('meeting:receiveTranscription:started', (payload) => {
      console.log(payload.captionLanguages);
      console.log(payload.spokenLanguages);
});

The captionLanguages and spokenLanguages are two arrays containing the language codes of the supported languages. The codes conform to the ISO 639 standard.
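
A minimal sketch of caching these lists, for example to drive language pickers, is shown below; the variable names are purely illustrative.

// Cache the supported languages from the started event (variable names are illustrative).
let supportedCaptionLanguages = [];
let supportedSpokenLanguages = [];

meeting.on('meeting:receiveTranscription:started', (payload) => {
  supportedCaptionLanguages = payload.captionLanguages; // e.g. ['en', 'hi', ...]
  supportedSpokenLanguages = payload.spokenLanguages;
});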

Start Receiving Transcription

Listen to the event highlighted below. The payload contains the captions from the meeting audio.

meeting.on('meeting:caption-received', (payload) => {
      //use payload to display captions
});

Example of a payload

Here's an example of the payload received from the meeting audio.

{
    "captions": [
        {
            "id": "88e1b0c9-7483-b865-f0bd-a685a5234943",
            "isFinal": true,
            "text": "Hey, everyone.",
            "currentSpokenLanguage": "en",
            "timestamp": "1:22",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Name"
            }
        },
        {
            "id": "e8fd9c60-1782-60c0-92e5-d5b22c80df2b",
            "isFinal": true,
            "text": "That's awesome.",
            "currentSpokenLanguage": "en",
            "timestamp": "1:26",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Name"
            }
        },
        {
            "id": "be398e11-cf08-92e7-a42d-077ecd60aeea",
            "isFinal": true,
            "text": "आपका नाम क्या है?",
            "currentSpokenLanguage": "hi",
            "timestamp": "1:55",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Name"
            }
        },
        {
            "id": "84adc1a7-b3c3-5a49-0588-aa787b1437eb",
            "isFinal": true,
            "translations": {
                "en": "What is your name?"
            },
            "text": "आपका नाम क्या है?",
            "currentSpokenLanguage": "hi",
            "timestamp": "2:11",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Name"
            }
        },
        {
            "id": "84c89387-cd5d-ce15-1867-562c0a91155f",
            "isFinal": true,
            "translations": {
                "hi": "तुम्हारा नाम क्या है?"
            },
            "text": "What's your name?",
            "currentSpokenLanguage": "en",
            "timestamp": "2:46",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Name"
            }
        }
    ],
    "interimCaptions": {
        "88e1b0c9-7483-b865-f0bd-a685a5234943": [],
        "e8fd9c60-1782-60c0-92e5-d5b22c80df2b": [],
        "be398e11-cf08-92e7-a42d-077ecd60aeea": [],
        "84adc1a7-b3c3-5a49-0588-aa787b1437eb": [],
        "84c89387-cd5d-ce15-1867-562c0a91155f": []
    }
}
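
As a sketch of consuming this payload, the handler below keeps only finalized captions and prefers a translation when one is present; displayCaption is a placeholder for application-specific rendering.

// Illustrative caption handling based on the payload shape above.
// displayCaption is a placeholder for your own rendering code.
meeting.on('meeting:caption-received', (payload) => {
  payload.captions
    .filter((caption) => caption.isFinal)
    .forEach((caption) => {
      const translations = caption.translations || {};
      const text = Object.values(translations)[0] || caption.text;
      displayCaption(`${caption.speaker.name} (${caption.timestamp}): ${text}`);
    });
});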

Change the Spoken Language

During the meeting, if you'd like to change the spoken language, use the function below.

const currentSpokenLanguage = await meeting.setSpokenLanguage(selectedLanguage);

The selectedLanguage is a language code from the spokenLanguages list received when the transcription starts. If you set the spoken language and speak in that language, the captions are displayed in the same language. If the caption language has been set to a different language, speech in the newly set spoken language is shown as captions in that caption language.
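
For example, a selection could be validated against the supportedSpokenLanguages list cached earlier (both the list variable and the chosen code are illustrative):

// Only set a spoken language that was advertised in the started event payload.
const selectedLanguage = 'hi'; // illustrative choice
if (supportedSpokenLanguages.includes(selectedLanguage)) {
  const currentSpokenLanguage = await meeting.setSpokenLanguage(selectedLanguage);
  console.log('Spoken language is now', currentSpokenLanguage);
}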

Change the Caption Language

During the meeting, if you'd like to change the caption language, use the function below.

const currentCaptionLanguage = await meeting.setCaptionLanguage(selectedLanguage);

In this API, selectedLanguage is a language code from the list received at the start of the transcription. Once a caption language is chosen, the system translates all speech, regardless of the spoken language, into the selected caption language.
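
Similarly, a caption language selection could be validated against the supportedCaptionLanguages list cached earlier (again, the list variable and chosen code are illustrative):

// Only set a caption language that was advertised in the started event payload.
const selectedLanguage = 'en'; // illustrative choice
if (supportedCaptionLanguages.includes(selectedLanguage)) {
  const currentCaptionLanguage = await meeting.setCaptionLanguage(selectedLanguage);
  console.log('Caption language is now', currentCaptionLanguage);
}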

Stop Receiving Transcription

To stop receiving the transcription from the SDK, use the function below.

meeting.stopTranscription();

Using Phone Audio

In a meeting, when the audio is not clear or there are problems using the device audio (Desktop app, Mobile app, or Web app), Webex offers the option to dial in to the meeting via PSTN calls. The following controls are available:

  • Use Phone Audio
  • Disconnect Phone Audio

More information about this control and usage can be found at: Webex Web SDK Wiki: Use phone audio for SDK meeting Dial IN OUT

Effects: Meeting Audio & Video

Webex Meetings currently offers three effects:

  • Background Noise Removal for Webex Audio
  • Background Blur for Webex Video
  • Virtual Background for Webex Video

To enable these in a meeting, one needs a valid Webex access token. More information about these features and their usage can be found here: Webex Web SDK Wiki: Audio & Video Effects.
