This repo contains two simple demos showing how to get a live transcription of a call: one uses the Asterisk ARI externalMedia resource, the other uses the res_ari_stream module.
The ARI demo creates an application that starts a bridge; the voice in that bridge will be transcribed.
The res_ari_stream demo can listen to an arbitrary channel.
- clone this repository
- python setup.py install
- configure Asterisk such that calls enter the stt stasis application, e.g. with a dialplan line like the following (see the configuration sketch after this list):

  same = n,Stasis(stt)
- create credentials for the Google Speech to Text API
- create an ARI user with username/password
demo/2b34c141-0ca9-44a7-95ca-570302f069c0
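
The exact Asterisk configuration depends on your setup. As a rough sketch (the context name, extension number and file layout below are assumptions; the username/password match the example values above), it could look like this:

```ini
; extensions.conf -- route calls into the stt Stasis application
[incoming]
exten = 1000,1,NoOp(live call transcription demo)
 same = n,Answer()
 same = n,Stasis(stt)
 same = n,Hangup()

; ari.conf -- ARI user used by the demo
; (ARI also requires the Asterisk built-in HTTP server to be enabled in http.conf)
[general]
enabled = yes

[demo]
type = user
read_only = no
password = 2b34c141-0ca9-44a7-95ca-570302f069c0
```

For the Google Speech to Text credentials, the Google client libraries typically read the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to a service account JSON key file.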
For the res_ari_stream module, see its README for installation instructions: https://github.com/sboily/wazo-hackathon-asterisk-stream-module
The ARI demo is made of 3 processes:

- The stasis application, which receives the incoming call and puts everyone in a bridge (a minimal sketch follows this list).
- The server, which creates the external media channel, receives the RTP from Asterisk, sends it to the Google Speech API and writes the result to an HTML file.
- An HTTP server to serve the generated transcript.
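
For illustration, here is a minimal sketch of what the stasis process does (this is not the demo's actual code; the ARI URL, the credentials and the use of the requests and websocket-client packages are assumptions):

```python
# stasis_sketch.py -- answer incoming calls and put them all in one mixing bridge.
# Assumes ARI is reachable on localhost:8088 with the demo user shown above.
import json

import requests
import websocket

ARI = "http://localhost:8088/ari"
AUTH = ("demo", "2b34c141-0ca9-44a7-95ca-570302f069c0")
APP = "stt"

# Create the mixing bridge every channel will join.
bridge = requests.post(f"{ARI}/bridges", auth=AUTH, params={"type": "mixing"}).json()

# Receive ARI events for the "stt" application over the websocket.
ws = websocket.create_connection(
    f"ws://localhost:8088/ari/events?app={APP}&api_key={AUTH[0]}:{AUTH[1]}"
)

while True:
    event = json.loads(ws.recv())
    if event.get("type") == "StasisStart":
        channel_id = event["channel"]["id"]
        # Answer the channel and add it to the shared bridge; the external
        # media channel created by the server ends up here as well.
        requests.post(f"{ARI}/channels/{channel_id}/answer", auth=AUTH)
        requests.post(
            f"{ARI}/bridges/{bridge['id']}/addChannel",
            auth=AUTH,
            params={"channel": channel_id},
        )
```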
Launch the three processes, in order:

call-transcript-ari-stasis
call-transcript-ari-server
cd /tmp/translation && python -m SimpleHTTPServer
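
Note that SimpleHTTPServer is a Python 2 module; on Python 3 the equivalent is:

cd /tmp/translation && python3 -m http.server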
Then visit the displayed address in your browser.
- When a call enters the stasis application it will be added to the bridge
- When the server starts, it listens on the configured port and creates the external media channel
- When RTP is received, the payload is sent to the Google Speech to Text API and an HTML file is generated (see the sketch below)
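
As a rough illustration of the server side (again a sketch rather than the demo's actual code: the RTP port, the ulaw/8 kHz format, the language and the output file are assumptions), creating the external media channel and streaming the RTP payload to Google Speech to Text could look like this:

```python
# server_sketch.py -- external media channel + Google streaming recognition.
# Assumes ARI on localhost:8088 with the demo user, RTP received on port 12000,
# ulaw audio at 8 kHz, transcript appended to /tmp/translation/index.html.
import socket

import requests
from google.cloud import speech

ARI = "http://localhost:8088/ari"
AUTH = ("demo", "2b34c141-0ca9-44a7-95ca-570302f069c0")
RTP_PORT = 12000

# Listen for the RTP that Asterisk will send us.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", RTP_PORT))

# Ask Asterisk (16.6+) to create an external media channel streaming the
# bridge's audio to this host/port. The new channel enters the stt Stasis
# application, where the stasis process adds it to the bridge.
requests.post(
    f"{ARI}/channels/externalMedia",
    auth=AUTH,
    params={
        "app": "stt",
        "external_host": f"127.0.0.1:{RTP_PORT}",
        "format": "ulaw",
    },
).raise_for_status()

def audio_chunks():
    """Yield Google streaming requests built from the RTP payload."""
    while True:
        packet, _ = sock.recvfrom(4096)
        # Strip the fixed 12-byte RTP header; the remainder is mu-law audio.
        yield speech.StreamingRecognizeRequest(audio_content=packet[12:])

client = speech.SpeechClient()
config = speech.StreamingRecognitionConfig(
    config=speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.MULAW,
        sample_rate_hertz=8000,
        language_code="en-US",
    ),
    interim_results=False,
)

# Stream the audio to Google and append each final result to the transcript
# page (a real implementation must restart the stream before Google's
# streaming time limit).
for response in client.streaming_recognize(config, audio_chunks()):
    for result in response.results:
        if result.is_final:
            text = result.alternatives[0].transcript
            with open("/tmp/translation/index.html", "a") as page:
                page.write(f"<p>{text}</p>\n")
```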
- execute
call-transcript-wazo <channel uniqueid>
- An HTTP server to serve the generated transcript
For example, replacing PJSIP/twilio with the peer you wish to listen to:
call-transcript-wazo $(asterisk -rx 'core show channels concise' | grep 'PJSIP/twilio' | awk -F'!' '{ print $NF}')