We explored recent studies in question answering systems, then tried out three QA models (including BERT and DistilBERT) for the sake of learning.
Updated Jul 26, 2021 - Jupyter Notebook
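A minimal sketch of how such QA models can be tried out with the Hugging Face `pipeline` API (the checkpoint, question, and context below are illustrative assumptions, not taken from the repository itself):

```python
from transformers import pipeline

# Load a SQuAD-distilled DistilBERT checkpoint for extractive question answering.
# Swapping in a BERT checkpoint (e.g. a bert-base model fine-tuned on SQuAD)
# uses the exact same API.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

# The pipeline extracts an answer span from the supplied context passage.
result = qa(
    question="Which models were tried out?",
    context="We explored question answering and tried out BERT and DistilBERT models.",
)
print(result["answer"], result["score"])
```

The returned dict also contains `start`/`end` character offsets of the answer span within the context, which is useful for highlighting answers in a UI.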
A Deep Learning Based Voice Analytics toolkit
Sentiment Analysis of movie reviews
Fine-tune BERT on a question-answering dataset, then further fine-tune it on finance data to answer questions posed by senior leadership
The official repository for the PSYCHIC model
Successfully developed a fine-tuned DistilBERT transformer model that predicts the overall sentiment of financial news with nearly 81.5% accuracy.
Deep learning for Natural Language Processing
Multiclass classification on tweets about the coronavirus
Positive/negative sentiment model on cleaned text data using the pre-trained DistilBERT NLP model from Hugging Face
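A sketch of the kind of positive/negative DistilBERT classifier such projects typically use, via the standard SST-2 fine-tuned checkpoint from Hugging Face (the checkpoint and input text are assumptions for illustration):

```python
from transformers import pipeline

# The sentiment-analysis pipeline wraps tokenization, the DistilBERT forward
# pass, and label mapping into a single call.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Returns a POSITIVE/NEGATIVE label with a confidence score for each input.
prediction = classifier("The plot was gripping from start to finish.")[0]
print(prediction)
```

For a project-specific label set, the same pipeline call works with any DistilBERT checkpoint fine-tuned on the project's own cleaned data.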
This paper describes Humor Analysis using Ensembles of Simple Transformers, the winning submission at the Humor Analysis based on Human Annotation (HAHA) task at IberLEF 2021.
This project involves analyzing and classifying the BoolQ dataset from the SuperGLUE benchmark. We implemented various classifiers and techniques, including rules-based logic, BERT, RNN, and GPT-3/4 data augmentation, achieving performance improvements.
Successfully fine-tuned a pretrained DistilBERT transformer model that classifies social media text into one of four cyberbullying labels (ethnicity/race, gender/sexuality, religion, or not cyberbullying) with 99% accuracy.
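A minimal sketch of the four-label model setup behind such a classifier (the base checkpoint and label order are assumptions; the classification head here is freshly initialized, so predictions are untrained until fine-tuning):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Four-way cyberbullying label set, matching the description above.
labels = ["ethnicity/race", "gender/sexuality", "religion", "not cyberbullying"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# num_labels adds a 4-way classification head on top of DistilBERT;
# fine-tuning on labeled tweets would then train this head (and the encoder).
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(labels)
)

encoded = tokenizer("example social media post", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**encoded).logits  # shape: (1, 4)
predicted_label = labels[logits.argmax(dim=-1).item()]
```

Training itself would typically use `transformers.Trainer` or a plain PyTorch loop over the labeled dataset.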
This project aims to improve customer satisfaction by performing sentiment analysis on customer feedback for an online-classes and video-conferencing app, extracting insights from that feedback to improve the user experience and address concerns.
Thesis Project
The data and code for my master's thesis for the MA Digital Text Analysis at the University of Antwerp
This repository contains my work on the prevention and anonymization of dox content on Twitter. It contains python code and demo of the proposed solution.
Classification, ADSA, and text-summarisation project for the BridgeI2I Task at the Inter IIT 2021 Competition. Silver Medalists.