metadPy

metadPy is an open-source Python package for cognitive modelling of behavioural data, with a focus on metacognition. It aims to provide simple yet powerful functions to compute standard indices and metrics from signal detection theory (SDT) and metacognitive efficiency (meta-d' and hierarchical meta-d') [1, 2, 3]. The only input required is a data frame encoding task performance and confidence ratings at the trial level.

metadPy is written in Python 3. It uses NumPy, SciPy and Pandas for most of its operations, including meta-d' estimation by maximum likelihood estimation (MLE). The (hierarchical) Bayesian modelling of meta-d' and M-ratio [4] is based on JAX and Numpyro. Single-subject modelling is also possible with PyMC.

Installation

The package can be installed using pip:

pip install git+https://github.com/embodied-computation-group/metadPy.git

Most operations require NumPy, SciPy and Pandas.

For Bayesian modelling you will also need one of:

  • Numpyro (>=0.8.0) - also requires JAX

  • PyMC (>=3.10.0) - only supports non-hierarchical modelling

Why metadPy?

metadPy stands for meta-d' (meta-d prime) in Python. meta-d' is a behavioural metric commonly used in consciousness and metacognition research. It is modelled to reflect metacognitive efficiency (i.e. the relationship between subjective reports about performance and objective behaviour).

metadPy first aims to be the Python equivalent of the hMeta-d toolbox (Matlab and R). It tries to make these models available to a broader open-source ecosystem and to ease their use via cloud computing interfaces. One notable difference is that while the hMeta-d toolbox relies on JAGS for the Bayesian modelling of confidence data (see [4]), metadPy is based on JAX and Numpyro, which can easily be parallelized, run flexibly on CPU, GPU or TPU, and offer a broader variety of MCMC sampling algorithms (including NUTS).

For an extensive introduction to metadPy, you can navigate the following notebooks that are Python adaptations of the introduction to the hMeta-d toolbox written in Matlab by Olivia Faul for the Zurich Computational Psychiatry course.

Examples

1. Estimating meta-d' using MLE (subject and group level)
2. Estimating meta-d' (single subject) using Bayesian modelling - Numpyro
3. Estimating meta-d' (single subject) using Bayesian modelling - PyMC
4. Estimating meta-d' (group level) using Bayesian modelling - Numpyro

Each notebook can be opened in Colab or viewed with nbViewer.

Tutorials

1. What metacognition looks like?
2. Fitting the model
3. Hierarchical Bayesian models of metacognition (in prep)
4. Comparison with the HMeta-d toolbox

Each notebook can be opened in Colab or viewed with nbViewer.

Or just follow the quick tour below.

Importing data

Classical metacognition experiments contain two phases: task performance and confidence ratings. The task could, for example, be to detect the presence of a dot on the screen. By relating the trials where the stimulus was present or absent to the responses provided by the participant (Can you see the dot: yes/no), it is possible to compute accuracy. The confidence rating is collected once the response has been made and should reflect how certain the participant is about their judgement.

An ideal observer would always associate very high confidence ratings with correct type 1 responses and very low confidence ratings with incorrect type 1 responses, while a participant with low metacognitive efficiency will show a more mixed response pattern.

A minimal metacognition dataset will therefore consist of a data frame with five columns:

  • Stimuli: Which of the two stimuli was presented [0 or 1].
  • Response: The response made by the participant [0 or 1].
  • Accuracy: Was the participant correct? [0 or 1].
  • Confidence: The confidence level [can be continuous or discrete rating].
  • ntrial: The trial number.

Due to the logical dependence between the Stimuli, Responses and Accuracy columns, only two of them are needed in practice; the third can be deduced from the other two. Most metadPy functions will accept data frames containing only two of these columns and automatically infer the missing one. Similarly, because the metacognition models described here do not incorporate a temporal dimension, the trial number is optional.
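For example, a missing Accuracy column can always be recovered from the other two, since a response is correct exactly when it matches the stimulus (a minimal sketch using pandas; the variable names are ours):

```python
import pandas as pd

# Accuracy is fully determined by Stimuli and Responses:
# the response is correct exactly when it matches the stimulus.
df = pd.DataFrame({"Stimuli": [0, 1, 1, 0], "Responses": [0, 1, 0, 0]})
df["Accuracy"] = (df["Stimuli"] == df["Responses"]).astype(int)
print(df["Accuracy"].tolist())  # → [1, 1, 0, 1]
```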

metadPy includes a simulation function that lets you create such a data frame for one or many participants and conditions, controlling a variety of parameters. Here, we simulate 200 trials from a participant with d'=1 and c=0 (type 1 performance) and meta-d'=1.5 (metacognitive sensitivity). The confidence ratings are provided on a 1-to-4 rating scale.

from metadPy.utils import responseSimulation
simulation = responseSimulation(d=1, metad=1.5, c=0, 
                                nRatings=4, nTrials=200)
simulation
Stimuli Responses Accuracy Confidence nTrial Subject Condition
0 1 1 1 3 0 0 0
1 0 1 0 1 1 0 0
2 1 1 1 4 2 0 0
3 1 0 0 3 3 0 0
4 1 1 1 2 4 0 0
... ... ... ... ... ... ... ...
195 0 0 1 2 195 0 0
196 0 0 1 3 196 0 0
197 0 0 1 3 197 0 0
198 0 0 1 1 198 0 0
199 1 1 1 3 199 0 0

200 rows × 7 columns

from metadPy.utils import trials2counts
nR_S1, nR_S2 = trials2counts(
    data=simulation, stimuli="Stimuli", accuracy="Accuracy",
    confidence="Confidence", nRatings=4)
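trials2counts returns the two response-count vectors used by the hMeta-d family of models: for each stimulus class, 2 * nRatings cells running from high-confidence "S1" responses down to confidence 1, then from confidence 1 up to high-confidence "S2" responses. A hand-rolled sketch of that convention (the helper name is ours, not part of metadPy):

```python
import numpy as np
import pandas as pd

def counts_for_stimulus(df, stim, nRatings=4):
    """Count trials per (response, confidence) bin for one stimulus class,
    ordered from confident-"S1" to confident-"S2" responses."""
    out = []
    for resp, ratings in ((0, range(nRatings, 0, -1)),
                          (1, range(1, nRatings + 1))):
        for c in ratings:
            out.append(int(((df.Stimuli == stim) & (df.Responses == resp)
                            & (df.Confidence == c)).sum()))
    return np.array(out)

df = pd.DataFrame({
    "Stimuli":    [0, 0, 1, 1],
    "Responses":  [0, 1, 1, 1],
    "Confidence": [4, 1, 3, 2],
})
print(counts_for_stimulus(df, stim=0))  # → [1 0 0 0 1 0 0 0]
```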

Data visualization

You can easily visualize metacognition results using one of the plotting functions. Here, we will use the plot_confidence and the plot_roc functions to visualize the metacognitive performance of our participant.

import arviz as az
import matplotlib.pyplot as plt
import seaborn as sns
from metadPy.plotting import plot_confidence, plot_roc
sns.set_context('talk')
fig, axs = plt.subplots(1, 2, figsize=(13, 5))
plot_confidence(nR_S1, nR_S2, ax=axs[0])
plot_roc(nR_S1, nR_S2, ax=axs[1])
sns.despine()

(figure: output of plot_confidence and plot_roc)

Signal detection theory (SDT)

from metadPy.sdt import criterion, dprime, rates, roc_auc, scores

All metadPy functions are registered as Pandas flavors (see pandas-flavor), which means that they can be called as methods of the data frame itself. When using the default column names (Stimuli, Responses, Accuracy, Confidence), this significantly shortens the function calls, making your code cleaner and more readable.

simulation.criterion()
5.551115123125783e-17
simulation.dprime()
0.9917006946949065
simulation.rates()
(0.69, 0.31)
simulation.roc_auc(nRatings=4)
0.5797055057618438
simulation.scores()
(69, 31, 31, 69)
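These outputs follow the standard SDT definitions: with hit rate H and false-alarm rate F, d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2, where z() is the inverse normal CDF. A quick stdlib-only check against the rates printed above:

```python
from statistics import NormalDist

# z() is the inverse of the standard normal CDF (the probit function).
z = NormalDist().inv_cdf
hit_rate, fa_rate = 0.69, 0.31                  # from simulation.rates() above
d_prime = z(hit_rate) - z(fa_rate)              # ≈ 0.9917, matching dprime()
criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # ≈ 0, matching criterion()
print(round(d_prime, 4))  # → 0.9917
```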

Estimating meta-d' using maximum likelihood estimation (MLE)

from metadPy.mle import metad

results = metad(data=simulation, nRatings=4, stimuli='Stimuli',
                accuracy='Accuracy', confidence='Confidence', verbose=0)
print(f"meta-d' = {results['meta_da']}")
meta-d' = 0.5223485447196857
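meta-d' is typically interpreted relative to d' through the M-ratio (meta-d' / d'), the measure of metacognitive efficiency that the hierarchical model below estimates directly. Using the two point estimates obtained above:

```python
# M-ratio = meta-d' / d': a value of 1 means metacognition is as good as
# the type 1 signal allows; values below 1 indicate metacognitive
# inefficiency (here, confidence ratings waste part of the signal).
meta_d = 0.5223485447196857   # MLE estimate above
d_prime = 0.9917006946949065  # simulation.dprime() above
m_ratio = meta_d / d_prime
print(round(m_ratio, 3))  # → 0.527
```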

Estimating meta-d' using (hierarchical) Bayesian modelling

Subject level

import pymc as pm
from metadPy.hierarchical import hmetad
model, trace = hmetad(data=simulation, nRatings=4, stimuli='Stimuli',
                      accuracy='Accuracy', confidence='Confidence')
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [cS2_hn, cS1_hn, metad, d1, c1]
100.00% [2000/2000 00:07<00:00 Sampling chain 0, 1 divergences]
100.00% [2000/2000 00:07<00:00 Sampling chain 1, 0 divergences]
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 15 seconds.
/usr/local/lib/python3.6/dist-packages/arviz/data/io_pymc.py:314: UserWarning: Could not compute log_likelihood, it will be omitted. Check your model object or set log_likelihood=False
  warnings.warn(warn_msg)
There was 1 divergence after tuning. Increase `target_accept` or reparameterize.
There was 1 divergence after tuning. Increase `target_accept` or reparameterize.
az.plot_trace(trace, var_names=['metad', 'cS2', 'cS1']);

(figure: posterior trace plots for metad, cS1 and cS2)

az.summary(trace)
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
metad 0.534 0.245 0.018 0.960 0.006 0.004 1779.0 1779.0 1810.0 1376.0 1.00
cS1[0] -1.488 0.139 -1.755 -1.239 0.003 0.002 1871.0 1846.0 1879.0 1615.0 1.01
cS1[1] -0.928 0.109 -1.125 -0.725 0.002 0.002 2161.0 2121.0 2155.0 1813.0 1.00
cS1[2] -0.429 0.092 -0.596 -0.259 0.002 0.001 1987.0 1909.0 1988.0 1742.0 1.00
cS2[0] 0.486 0.093 0.317 0.664 0.002 0.001 2200.0 2197.0 2188.0 1710.0 1.00
cS2[1] 0.904 0.106 0.711 1.103 0.002 0.002 2051.0 2034.0 2049.0 1702.0 1.00
cS2[2] 1.408 0.131 1.179 1.663 0.003 0.002 1784.0 1772.0 1786.0 1598.0 1.00

Group level

simulation = responseSimulation(d=1, metad=1.5, c=0, nRatings=4,
                                nTrials=200, nSubjects=10)
model, trace = hmetad(
    data=simulation, nRatings=4, stimuli='Stimuli', accuracy='Accuracy',
    confidence='Confidence', subject='Subject')
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [cS2_hn, cS1_hn, epsilon_logMratio, delta_tilde, sigma_delta, mu_logMratio, d1_tilde, c1_tilde, sigma_d1, sigma_c2, sigma_c1, mu_d1, mu_c2, mu_c1]
100.00% [2000/2000 00:45<00:00 Sampling chain 0, 13 divergences]
100.00% [2000/2000 00:38<00:00 Sampling chain 1, 11 divergences]
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 84 seconds.
There were 13 divergences after tuning. Increase `target_accept` or reparameterize.
There were 24 divergences after tuning. Increase `target_accept` or reparameterize.
The estimated number of effective samples is smaller than 200 for some parameters.
az.plot_posterior(trace, var_names=['mu_logMratio'], kind='hist', bins=20)

(figure: posterior distribution of mu_logMratio)

References

[1] Maniscalco, B., & Lau, H. (2014). Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-d′, Response-Specific Meta-d′, and the Unequal Variance SDT Model. In The Cognitive Neuroscience of Metacognition (pp. 25–66). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-45190-4_3

[2] Maniscalco, B., & Lau, H. (2012). A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430. https://doi.org/10.1016/j.concog.2011.09.021

[3] Fleming, S. M., & Lau, H. C. (2014). How to measure metacognition. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00443

[4] Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 3(1), nix007. https://doi.org/10.1093/nc/nix007
