Determine metric for "non-agreement" #5
Comments
Normalize to z-scores before comparing?
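A minimal sketch of that normalization, assuming NumPy and hypothetical AGB arrays (the values and variable names here are illustrative, not from the actual datasets):

```python
import numpy as np

def zscore(x):
    # Standardize to mean 0, SD 1 so products reported on
    # different scales or with different offsets can be compared
    # on equal footing.
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical AGB estimates (Mg/ha) from two products over the
# same pixels; different magnitudes, similar spatial pattern.
a = zscore([80.0, 95.0, 110.0, 125.0])
b = zscore([40.0, 48.0, 55.0, 63.0])
```

After standardizing, differences between `a` and `b` reflect differences in relative spatial pattern rather than in units or systematic bias.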
I think there are two separate things here that we might want to calculate. The first is a single number (like a correlation) describing how well two datasets "agree"; the Mantel statistic (a correlation between two distance matrices) is one candidate. The second is a statistic for each pixel across all the data products, with the purpose of showing spatial variation in "agreement" across the datasets. Right now we are doing that with standard deviation, which shows where in the state there is more variation among the data products, but might not necessarily show "agreement".
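To make the two options concrete, here is a rough sketch with hypothetical data. The Mantel statistic below is just the Pearson correlation between the upper triangles of two distance matrices, without the permutation test that usually accompanies it:

```python
import numpy as np

def mantel_r(d1, d2):
    # Pearson correlation between the off-diagonal upper triangles
    # of two square, symmetric distance matrices.
    iu = np.triu_indices_from(d1, k=1)
    return np.corrcoef(d1[iu], d2[iu])[0, 1]

# (1) Single-number agreement between two hypothetical distance matrices.
d1 = np.array([[0., 1., 2.],
               [1., 0., 3.],
               [2., 3., 0.]])
d2 = 2.0 * d1  # same structure, different scale -> r of 1
r = mantel_r(d1, d2)

# (2) Per-pixel spread: stack five hypothetical AGB maps as
# (n_products, rows, cols); SD along axis 0 gives one value per pixel.
stack = np.random.default_rng(0).normal(100, 10, size=(5, 4, 4))
per_pixel_sd = stack.std(axis=0)  # shape (4, 4)
```

For a real analysis, something like `skbio.stats.distance.mantel` would also handle significance via permutations.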
Closing for now, as I've added correlation coefficients to the scatter plots, and I think SD is the best option for the map for now.
What metric should be used to represent how well the datasets agree or don't agree with each other?
Something like RMSE might get at it, but it would be large in a situation where 5 datasets all overestimate AGB by the same amount and the other 5 all underestimate it by the same amount, right? All the datasets would be far from the mean even though there would be a high(?) level of "agreement" within each group. I'm sure there is a better statistic for this.
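A quick numeric check of that scenario with hypothetical values, showing how spread-from-the-mean metrics behave when half the products sit above the truth and half below:

```python
import numpy as np

# Ten hypothetical AGB estimates (Mg/ha) at one pixel: five products
# overestimate a true value of 100 by 20, five underestimate by 20.
est = np.array([120.0] * 5 + [80.0] * 5)

sd = est.std()  # spread across products: 20.0
rmse_from_mean = np.sqrt(((est - est.mean()) ** 2).mean())  # also 20.0
```

Here SD and RMSE-from-the-mean are identical and both are large, even though the estimates form two tight clusters, which is exactly the failure mode described above.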