A curated list of awesome papers related to pre-trained models for information retrieval (a.k.a., pretraining for IR).
Updated Jan 7, 2024.
Efficient Inference for Big Models
On Transferability of Prompt Tuning for Natural Language Processing
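Mechanically, prompt tuning freezes the backbone and trains only a small set of virtual prompt tokens, which can then be saved and reused elsewhere; the paper studies how well such prompts transfer across tasks and models. A minimal sketch with the PEFT library, assuming an illustrative checkpoint and hyperparameters rather than the paper's setup:

```python
# A minimal prompt-tuning sketch with PEFT; the checkpoint, label count, and
# num_virtual_tokens are illustrative assumptions, not the paper's setup.
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # assumed checkpoint and task
)

# Freeze the backbone; only the virtual prompt embeddings are trained,
# so the learned prompt is a tiny artifact that can be transferred.
config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # prompt embeddings only
```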
The code for the ACL 2023 paper "Linear Classifier: An Often-Forgotten Baseline for Text Classification".
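The baseline itself is only a few lines of scikit-learn: TF-IDF features feeding a linear model. A minimal sketch on toy data, not the paper's exact configuration:

```python
# TF-IDF + linear classifier: a sketch of the often-forgotten baseline.
# The toy data and the choice of LogisticRegression are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["a great movie, not a waste of time"]))
```

A pipeline like this trains in seconds on a CPU, which is what makes it a useful reference point against fine-tuned transformers.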
Code for the paper "Exploiting Pretrained Biochemical Language Models for Targeted Drug Design", to appear in Bioinformatics, Proceedings of ECCB2022.
An NLP model toolkit built on Keras with a TensorFlow backend.
FusionDTI utilises a Token-level Fusion module to effectively learn fine-grained information for Drug-Target Interaction Prediction.
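Token-level fusion of this kind is typically implemented as cross-attention between the two token sequences. An illustrative PyTorch sketch, with dimensions and module layout assumed rather than taken from FusionDTI:

```python
# Illustrative token-level fusion: drug tokens attend over protein tokens.
# Hidden size, head count, and the residual+norm layout are assumptions.
import torch
import torch.nn as nn

class TokenFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, drug_tokens, protein_tokens):
        # Each drug token queries the protein sequence for fine-grained matches.
        fused, _ = self.cross_attn(drug_tokens, protein_tokens, protein_tokens)
        return self.norm(drug_tokens + fused)

drug = torch.randn(2, 64, 256)      # (batch, drug tokens, hidden)
protein = torch.randn(2, 512, 256)  # (batch, protein residues, hidden)
print(TokenFusion()(drug, protein).shape)  # torch.Size([2, 64, 256])
```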
The official repository for the AAAI 2024 oral paper "Structured Probabilistic Coding".
This research examines Large Language Models in Bengali Natural Language Inference, comparing them with state-of-the-art models using the XNLI dataset.
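Evaluations of this kind can be run against an off-the-shelf XNLI checkpoint via the transformers pipeline. A minimal sketch, assuming a publicly available XLM-R model fine-tuned on XNLI; the study's own models and prompts may differ:

```python
# Zero-shot-style NLI on a Bengali premise/hypothesis pair.
# The checkpoint name is an assumption about an available XNLI model.
from transformers import pipeline

nli = pipeline("text-classification", model="joeddav/xlm-roberta-large-xnli")
premise = "সে প্রতিদিন স্কুলে যায়।"      # "She goes to school every day."
hypothesis = "সে কখনও স্কুলে যায় না।"    # "She never goes to school."
print(nli({"text": premise, "text_pair": hypothesis}))
# e.g. [{'label': 'contradiction', 'score': ...}]
```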
Identified adverse drug events (ADEs) and associated terms in an annotated corpus using Named Entity Recognition (NER) models built with Flair and PyTorch. Fine-tuned pre-trained transformer models such as XLM-RoBERTa, SpanBERT, and Bio_ClinicalBERT, achieving F1 scores of 0.73 and 0.77 with the BIOES and BIO tagging schemes, respectively.
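A fine-tuning loop of this kind is compact in Flair. A hedged sketch, assuming a CoNLL-style data folder with a BIO/BIOES tag column; paths, label names, and hyperparameters are placeholders:

```python
# Fine-tuning a transformer NER tagger with Flair (API as of Flair >= 0.12).
# The data folder, column format, and output path are assumptions.
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

corpus = ColumnCorpus("data/ade", {0: "text", 1: "ner"})  # token/tag files
label_dict = corpus.make_label_dictionary(label_type="ner")

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=TransformerWordEmbeddings("xlm-roberta-base", fine_tune=True),
    tag_dictionary=label_dict,
    tag_type="ner",
)
ModelTrainer(tagger, corpus).fine_tune("models/ade-ner", max_epochs=10)
```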
A Python tool for evaluating the quality of few-shot prompt learning.
This study focuses on political sentiment analysis during Bangladeshi elections, using the "Motamot" dataset to evaluate how well pre-trained language models (PLMs) and large language models (LLMs) capture complex sentiment, and compares models and learning strategies for understanding public opinion.
LSTM models for text classification on character embeddings.
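The architecture is small: an embedding layer over characters feeding an LSTM and a classification head. A minimal Keras sketch, with vocabulary size, sequence length, and layer widths assumed:

```python
# Character-level LSTM text classifier; all sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

num_chars, max_len = 128, 200  # e.g. ASCII vocabulary, padded sequences
model = tf.keras.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(num_chars, 32),        # learned character embeddings
    layers.LSTM(64),                        # sequence encoder
    layers.Dense(1, activation="sigmoid"),  # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```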
Fine-tuned BERT, mBERT, and XLM-RoBERTa for abusive comment detection in Telugu, code-mixed Telugu, and Telugu-English.
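Fine-tuning any of these checkpoints follows the same Hugging Face recipe. A minimal sketch with a toy two-example dataset; the checkpoint, data, and hyperparameters are assumptions, not the project's setup:

```python
# Fine-tuning mBERT for binary abusive-comment detection (toy example).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-multilingual-cased"  # XLM-R works the same way
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

data = Dataset.from_dict({
    "text": ["placeholder abusive comment", "placeholder harmless comment"],
    "label": [1, 0],
}).map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length",
                           max_length=64), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="abuse-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
).train()
```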