Video, audio, and canvas sharing app using WebRTC, Socket.io, React.js, and Node.js
-
SFU — Instead of a P2P mesh network, where every peer streams data directly to every other peer, an SFU (Selective Forwarding Unit) acts as a proxy that forwards streams between peers. Because each peer uploads its media only once, an SFU supports far more concurrent participants than a P2P mesh for the same bandwidth.
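The bandwidth difference can be made concrete with a small sketch (the function names here are illustrative, not from this repo) comparing the number of streams each peer must maintain in a mesh versus behind an SFU:

```typescript
// In a full P2P mesh each peer uploads its media to every other peer;
// behind an SFU each peer uploads a single stream and the SFU fans it out.
function uplinkStreamsPerPeer(peers: number, topology: "mesh" | "sfu"): number {
  return topology === "mesh" ? Math.max(peers - 1, 0) : 1;
}

// Total directed media streams in the session (uplinks + downlinks).
function totalStreams(peers: number, topology: "mesh" | "sfu"): number {
  return topology === "mesh"
    ? peers * (peers - 1)          // every ordered pair of peers
    : peers + peers * (peers - 1); // n uplinks to the SFU + (n-1) downlinks per peer
}

console.log(uplinkStreamsPerPeer(10, "mesh")); // 9
console.log(uplinkStreamsPerPeer(10, "sfu"));  // 1
```

The SFU still delivers the same number of downlinks, but each client's uplink stays constant as the room grows, which is what makes large rooms feasible.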
-
Frontend — Created using TypeScript, Next.js, Mediasoup Client, Socket.io, and DaisyUI. Enables sharing of audio, video, and data with up to 400 concurrent users (the CPU limit of one healthy backend server). Currently, only HTML canvas coordinate data is shared.
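The canvas coordinate sharing can be pictured as follows. This is a minimal sketch: the `DrawEvent` shape and helper names are hypothetical, not necessarily what the repo uses.

```typescript
// A single stroke segment on the shared canvas; one of these is sent to the
// other peers (e.g. over a mediasoup data channel) per pointer-move event.
interface DrawEvent {
  x: number;        // canvas-space coordinates
  y: number;
  drawing: boolean; // false when the pointer is lifted (end of stroke)
}

// Data channels carry strings or binary, so events are serialized before send.
function encodeDrawEvent(e: DrawEvent): string {
  return JSON.stringify(e);
}

function decodeDrawEvent(raw: string): DrawEvent {
  return JSON.parse(raw) as DrawEvent;
}

const wire = encodeDrawEvent({ x: 120, y: 48, drawing: true });
console.log(decodeDrawEvent(wire).x); // 120
```

Each receiving client replays the decoded events onto its own canvas context, which keeps all canvases in sync without sending pixel data.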
-
Backend API — Created using TypeScript, Node.js, Socket.io, and Mediasoup.
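At a high level, the backend has to track which producers belong to which peer in each room so it can wire up consumers for everyone else when a peer joins. A minimal bookkeeping sketch (the `Room` class and its methods are illustrative, not the repo's actual code):

```typescript
// Maps each peer (e.g. a socket id) to the ids of the mediasoup producers
// it has created (audio, video, data).
class Room {
  private producersByPeer = new Map<string, Set<string>>();

  addPeer(peerId: string): void {
    if (!this.producersByPeer.has(peerId)) {
      this.producersByPeer.set(peerId, new Set());
    }
  }

  addProducer(peerId: string, producerId: string): void {
    this.addPeer(peerId);
    this.producersByPeer.get(peerId)!.add(producerId);
  }

  // Producer ids a given peer should consume: everyone else's media.
  producersToConsume(peerId: string): string[] {
    const out: string[] = [];
    for (const [peer, producers] of this.producersByPeer) {
      if (peer !== peerId) out.push(...producers);
    }
    return out;
  }

  removePeer(peerId: string): void {
    this.producersByPeer.delete(peerId);
  }
}
```

On join, the server would send `producersToConsume(newPeerId)` to the new client so it can create one consumer per existing producer; on disconnect, `removePeer` lets the server close the associated transports.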
- add functionality to select devices
- put .d types in correct folder
- producer video/audio feed should be in the corner of the screen, and hideable
- audio to text translation
- dockerize frontend and backend
- sandbox to https://gospace.me
- auto room name generator, like "shivering mountain" or "volcanic ash" idk
- data producers and data consumers pause functionality?
- css TV on/off animation
- finish implementing debug module instead of console logs
- add room elapsed time
Backend:

cd backend
npm install
npm run dev

Frontend:

cd frontend
npm install
npm run dev
Navigate to http://localhost:3000 for the frontend.
Because the SSL certificates are self-generated, you may first have to visit https://localhost:4000 and accept the certificate warning so the browser does not block requests to the backend (which it most likely will by default).
Screenshots:
- Empty room
- Drawing on a canvas, in sync
- 4-peer video conference
- 7-peer video conference; the browser is the limit here. It's difficult to test more peers without browser testing scripts.
MIT License