-
There are a few extra metrics that we could plot:
I've tried all of these on the standard OOA simulated dataset above with the following results (one illustrative tree-comparison metric is sketched below):
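As an illustration only (the specific metrics tried in this comment are not listed here), a minimal sketch of one common tree-comparison metric, the Kendall-Colijn (KC) distance between inferred and true topologies. The single-population msprime simulation is just a stand-in for the OOA dataset, and all parameter values are placeholders.

```python
# Hedged sketch: compare inferred vs. true topologies with the KC distance
# at a subsample of site positions. The simulation below is a stand-in for
# the OOA dataset discussed above, not the actual benchmark.
import msprime
import numpy as np
import tsinfer

true_ts = msprime.sim_ancestry(
    samples=100, sequence_length=1e6, recombination_rate=1e-8,
    population_size=10_000, random_seed=42,
)
true_ts = msprime.sim_mutations(true_ts, rate=1e-8, random_seed=42)

sample_data = tsinfer.SampleData.from_tree_sequence(true_ts)
inferred_ts = tsinfer.infer(sample_data)

# Evaluate at every 10th site position to keep the sketch quick.
positions = [site.position for site in true_ts.sites()][::10]
kc = np.array([
    true_ts.at(p).kc_distance(inferred_ts.at(p), lambda_=0)
    for p in positions
])
print(f"mean topology-only KC distance over {len(kc)} positions: {kc.mean():.2f}")
```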
-
I now think that measuring the quality of imputation is the best way to go for this. See #652
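A rough sketch of one way imputation quality could be scored (my own illustration, not necessarily the approach in #652): mask a random fraction of genotype calls as missing, re-infer, and measure concordance of the imputed calls with the simulated truth. It assumes a simulated `true_ts` such as the one in the sketch above.

```python
# Hedged sketch of an imputation-quality score, assuming a simulated `true_ts`.
import numpy as np
import tskit
import tsinfer

rng = np.random.default_rng(1)
true_G = true_ts.genotype_matrix()        # shape (num_sites, num_samples)
mask = rng.random(true_G.shape) < 0.05    # hide ~5% of genotype calls

with tsinfer.SampleData(sequence_length=true_ts.sequence_length) as samples:
    for var, site_mask in zip(true_ts.variants(), mask):
        g = var.genotypes.copy()
        g[site_mask] = tskit.MISSING_DATA  # -1 marks a missing call
        samples.add_site(var.site.position, g, alleles=var.alleles)

inferred_ts = tsinfer.infer(samples)
inferred_G = inferred_ts.genotype_matrix()

# Concordance of the imputed calls with the simulated truth at masked entries.
accuracy = np.mean(inferred_G[mask] == true_G[mask])
print(f"imputation accuracy at masked genotypes: {accuracy:.3f}")
```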
-
In the tsdate preprint, we show a few plots of what happens when you adjust the mismatch ratio values when running the new tsinfer version. We conclude that for human-like datasets, mismatch ratios of 1 in both the match-samples and match-ancestors passes give a reasonable result. However, this may not always be the optimal value: it will depend on the underlying error model, and may differ between organisms. This discussion is aimed at collecting extra information about these mismatch parameters. FYI, the most relevant plot in the current preprint is based on 1500 samples and 1000-genomes-project-like error, and is created by running test_mismatch_rates.py.
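For reference, a minimal sketch of setting these ratios explicitly via the lower-level tsinfer pipeline, assuming the API in which `match_ancestors` and `match_samples` accept `recombination_rate` and `mismatch_ratio` arguments. The `data.samples` path and the uniform 1e-8 recombination rate are placeholders, not values from the preprint.

```python
# Hedged sketch: the three-step tsinfer pipeline with mismatch ratios of 1
# in both matching passes, per the conclusion above for human-like data.
import tsinfer

sample_data = tsinfer.load("data.samples")  # hypothetical input file
recombination_rate = 1e-8                   # assumed uniform rate

ancestor_data = tsinfer.generate_ancestors(sample_data)
ancestors_ts = tsinfer.match_ancestors(
    sample_data, ancestor_data,
    recombination_rate=recombination_rate,
    mismatch_ratio=1,  # match-ancestors pass
)
inferred_ts = tsinfer.match_samples(
    sample_data, ancestors_ts,
    recombination_rate=recombination_rate,
    mismatch_ratio=1,  # match-samples pass
)
```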