Impact of Stereo input and output on metrics #686
Comments
E.g. if you have one or two channels, does this affect the rate of increase of... Or are the counters the same regardless of channel count?
@o1ka doesn't think that we count stereo as twice as many samples or duration... do we have a repro?
Even if the source mic is stereo, WebRTC will not use stereo audio (Opus) by default; the user has to munge the SDP to activate stereo audio.
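For concreteness, here is a minimal sketch of the kind of SDP munging mentioned above: looking up the Opus payload type and appending `stereo=1;sprop-stereo=1` to its fmtp line before applying the description. The function names are illustrative only, and depending on which direction should carry stereo, the same edit may need to be applied to the remote description rather than (or in addition to) the local one.

```ts
// Sketch: enable stereo Opus by munging the SDP before setLocalDescription().
// Assumes the Opus rtpmap/fmtp lines are present; the payload type is looked up dynamically.
function enableStereoOpus(sdp: string): string {
  // Find the dynamic payload type assigned to Opus, e.g. "a=rtpmap:111 opus/48000/2".
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/i);
  if (!rtpmap) return sdp; // No Opus in this SDP; leave it untouched.
  const pt = rtpmap[1];
  // Append stereo parameters (RFC 7587: stereo / sprop-stereo) to the matching fmtp line.
  const fmtpRe = new RegExp(`a=fmtp:${pt} (.*)`);
  return sdp.replace(fmtpRe, (line, params: string) =>
    params.includes('stereo=1') ? line : `a=fmtp:${pt} ${params};stereo=1;sprop-stereo=1`
  );
}

// Usage (sketch): munge the locally created offer before applying it.
async function negotiateWithStereo(pc: RTCPeerConnection): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription({ type: 'offer', sdp: enableStereoOpus(offer.sdp ?? '') });
}
```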
From what I've tested, the channels in …
When channels is 2, the call may also be pseudo stereo; there is no way from the stats to tell whether the mic is true stereo.
The channels in 'RTCCodecStats' may be wrong: it's always 2 in my tests for the Opus codec, but the audio is not stereo.
I think by default we negotiate mono and SDP munging is needed to enable stereo. My memory is hazy, I haven't re-read all of this, but I would imagine that RTCCodecStats should reflect what we negotiated (currently mono-by-default), not what the mic is capable of. Is getStats() lying to us?
It seems the channels stat in RTCCodecStats is taken from the Opus SDP format line; according to the Opus RFC 7587, the number of channels MUST be 2.
The codec stats should reflect the codec information from the SDP. For current usage, there is media-source, where MediaStreamTrack information is exposed, and outbound-rtp, where encoder and RTP information is exposed (or inbound-rtp for decoder/RTP/track information on the receive side). It's not clear to me whether we need more metrics to reflect current usage, or whether w3c/webrtc-extensions#63 (comment) should be resolved, in which case you don't need metrics because you can just look at track.getSettings().channelCount.
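To make the distinction concrete, here is a rough sketch (assuming a live RTCPeerConnection with an audio sender; the function name is hypothetical) that pulls the three places channel-related information shows up today: the negotiated codec stats, the media-source stats, and track.getSettings().channelCount.

```ts
// Sketch: compare channel information from codec stats, media-source stats,
// and MediaStreamTrack.getSettings() for an audio sender.
async function dumpAudioChannelInfo(pc: RTCPeerConnection): Promise<void> {
  const sender = pc.getSenders().find(s => s.track?.kind === 'audio');
  if (!sender || !sender.track) return;

  // What the capture pipeline reports for the track itself.
  console.log('track.getSettings().channelCount =', sender.track.getSettings().channelCount);

  const report = await sender.getStats();
  report.forEach(stats => {
    if (stats.type === 'codec') {
      // Mirrors the negotiated SDP; for Opus this reads 2 even on a mono call.
      console.log('codec:', stats.mimeType, 'channels =', stats.channels);
    } else if (stats.type === 'media-source') {
      // Capturer-side view of the source feeding the encoder.
      console.log('media-source: totalSamplesDuration =', stats.totalSamplesDuration);
    }
  });
}
```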
https://w3c.github.io/webrtc-stats/#terminology already says that when we talk about audio samples, what we're really talking about is audio frames.
What happens if the source is stereo? We should be consistent in all the audio metrics on how stereo audio streams are handled.
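As a possible repro for the "rate of increase" question above, here is a sketch (assuming a connected receive-side RTCPeerConnection; the helper name is hypothetical) that samples inbound-rtp twice and reports how fast totalSamplesReceived advances. If samples are counted as frames, per the terminology link above, the rate should stay near the clock rate (48000/s for Opus) whether the stream is mono or stereo.

```ts
// Sketch: measure how fast totalSamplesReceived advances on the receive side.
// If samples are counted as frames, the rate should be ~48000/s for Opus
// regardless of whether the stream is mono or stereo.
async function measureSampleRate(pc: RTCPeerConnection, intervalMs = 2000): Promise<number> {
  const readTotalSamples = async (): Promise<number> => {
    let total = 0;
    (await pc.getStats()).forEach(stats => {
      if (stats.type === 'inbound-rtp' && stats.kind === 'audio') {
        total = stats.totalSamplesReceived ?? 0;
      }
    });
    return total;
  };

  const before = await readTotalSamples();
  await new Promise(resolve => setTimeout(resolve, intervalMs));
  const after = await readTotalSamples();
  return (after - before) / (intervalMs / 1000); // samples (frames) per second
}
```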