
# Biomedical Language Processing with Instruction Tuning (Llama2)

Welcome to the QLoRA project, a biomedical language model fine-tuned via instruction tuning. This project is inspired by the research paper "Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing".

Link to the article

## Overview

Large Language Models (LLMs), particularly those similar to ChatGPT, have significantly influenced the field of Natural Language Processing (NLP). While these models excel at general language tasks, their performance on domain-specific downstream tasks such as biomedical and clinical Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI) is still evolving.

## Dataset

The model is fine-tuned on the nlpie/Llama2-MedTuned-Instructions dataset.
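Instruction-tuning datasets of this kind pair a task instruction with an input text and an expected response, which are concatenated into a single training prompt. The sketch below illustrates that formatting step; `build_prompt` is a hypothetical helper, and the Alpaca-style `### Instruction / ### Input / ### Response` layout is an assumption that may differ from the dataset's actual columns.

```python
def build_prompt(instruction: str, context: str, response: str = "") -> str:
    """Format one instruction-tuning example into a single prompt string.

    Hypothetical helper: the section markers assume an Alpaca-style
    schema, not necessarily the exact layout of this dataset.
    """
    prompt = (
        "### Instruction:\n" + instruction.strip() + "\n\n"
        "### Input:\n" + context.strip() + "\n\n"
        "### Response:\n"
    )
    # At training time the gold response is appended; at inference time
    # the prompt ends at "### Response:" and the model completes it.
    return prompt + response.strip()


example = build_prompt(
    "Identify all disease mentions in the following text.",
    "The patient was diagnosed with type 2 diabetes.",
)
```

At training time the same template is filled with the gold answer so the model learns to produce it after the response marker.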

Schematic representation of the model:

(figure: model architecture diagram)

Training transformer concept:

(figure: transformer training diagram)

Inference transformer concept:

(figure: transformer inference diagram)

## Implementation

The project was implemented on the Kaggle platform, which provides a remote computational environment for data analysis and machine learning. This allowed the model to be trained and evaluated remotely while maintaining efficiency and speed.
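A QLoRA setup of the kind this project uses typically loads the base model in 4-bit precision and attaches trainable low-rank adapters. The sketch below shows one such configuration, assuming the `transformers`, `peft`, and `bitsandbytes` libraries; the model name, rank, and target modules are illustrative assumptions, not the project's exact values.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantisation of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantisation
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters: only these small matrices are trained.
lora_config = LoraConfig(
    r=16,                                   # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The wrapped model can then be passed to a standard `transformers` `Trainer` on the formatted instruction prompts; only the adapter weights need to be saved after training.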