# rct

Here are 35 public repositories matching this topic...

We employed pre-trained BERT models (DistilBERT, BioBERT, and SciBERT) for text classification of the titles and abstracts of clinical trials in medical psychology. The average AUC score is 0.92. A stacked model was then built by combining the probability predicted by DistilBERT with keywords of the search domains. The AUC improved to 0.96 with…

  • Updated Aug 18, 2021
  • Jupyter Notebook
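The stacking step described above can be sketched as a simple second-stage classifier over the combined features. This is a minimal illustration, not the repository's actual code: the synthetic data, feature layout (one DistilBERT probability column plus binary keyword-match indicators), and the choice of logistic regression as the stacker are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical synthetic data standing in for the real features:
# a DistilBERT-predicted probability per abstract, plus binary
# keyword-match indicators for the search domains.
rng = np.random.default_rng(0)
n = 500
labels = rng.integers(0, 2, size=n)  # 1 = relevant trial
bert_prob = np.clip(labels * 0.6 + rng.normal(0.3, 0.2, n), 0.0, 1.0)
keyword_hits = (rng.random((n, 3)) < (0.2 + 0.4 * labels[:, None])).astype(float)

# Stacked feature matrix: predicted probability + keyword features.
X = np.column_stack([bert_prob, keyword_hits])

# Second-stage ("stacked") model: logistic regression over the
# combined features, evaluated with AUC as in the description.
stacker = LogisticRegression().fit(X, labels)
auc = roc_auc_score(labels, stacker.predict_proba(X)[:, 1])
print(f"stacked AUC: {auc:.2f}")
```

In practice the first-stage probabilities would come from a held-out fold of the fine-tuned DistilBERT model rather than synthetic noise, so the stacker does not overfit to in-sample predictions.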
