Training and evaluation of Llama2 (LLMs) for biomedical tasks

almog2290/Instruction_Tuning_MedLlama2

Biomedical Language Processing with Instruction Tuning (Llama2)

Welcome to the QLoRA project, a biomedical language processing model built with instruction tuning. This project is inspired by the research paper "Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing".

Link to the article

Overview

Large Language Models (LLMs), particularly those similar to ChatGPT, have significantly influenced the field of Natural Language Processing (NLP). While these models excel in general language tasks, their performance in domain-specific downstream tasks such as biomedical and clinical Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI) is still evolving.

Dataset

nlpie/Llama2-MedTuned-Instructions.
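As a minimal sketch of how examples from an instruction-tuning dataset like this are typically turned into training prompts: the field names (`instruction`, `input`, `output`) and the Alpaca-style prompt layout below are assumptions based on common instruction-tuning conventions, not a confirmed schema of `nlpie/Llama2-MedTuned-Instructions` — check the dataset card for the actual format.

```python
# Sketch: format one dataset example into an instruction prompt.
# Field names ("instruction", "input", "output") and the Alpaca-style
# section headers are ASSUMPTIONS; verify against the real dataset schema.
def format_prompt(example: dict) -> str:
    prompt = "### Instruction:\n" + example["instruction"].strip() + "\n\n"
    if example.get("input"):  # the context field may be empty or absent
        prompt += "### Input:\n" + example["input"].strip() + "\n\n"
    prompt += "### Response:\n" + example["output"].strip()
    return prompt

# Hypothetical biomedical NER example for illustration only.
sample = {
    "instruction": "Identify all disease mentions in the sentence.",
    "input": "The patient was diagnosed with type 2 diabetes.",
    "output": "type 2 diabetes",
}
print(format_prompt(sample))
```

At training time, each formatted string would be tokenized and the model trained to generate the text after `### Response:`.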

Schematic representation of the model:

[figure: model architecture]

Training transformer concept:

[figure: training]

Inference transformer concept:

[figure: inference]

Implementation

The project was implemented on the Kaggle platform, which provides a remote computational environment for data analysis and machine learning. This allowed training and evaluation of the model to run remotely while keeping the workflow efficient and fast.
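Since the project name references QLoRA, a configuration sketch may help clarify the training setup: the base model is quantized to 4-bit and only small low-rank adapter matrices are trained. The sketch below uses the Hugging Face `transformers` and `peft` APIs; the specific hyperparameters (rank, alpha, target modules) are illustrative assumptions, not values taken from this repository.

```python
# Sketch of a QLoRA setup: 4-bit base model + trainable LoRA adapters.
# Hyperparameter values below are ASSUMPTIONS for illustration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base model to 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # gated model; requires access approval
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable low-rank adapters (the "LoRA" part).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction is trainable
```

The quantized weights stay frozen; only the adapter parameters are updated, which is what makes fine-tuning a 7B model feasible on a single Kaggle GPU.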
