PMC-LLaMA

The official code for "PMC-LLaMA: Towards Building Open-source Language Models for Medicine".

arXiv Version

We show that a medical LLM should first be pretrained on a domain corpus and then tuned on an instruction-following dataset.

Below we list PMC_LLaMA's versions with brief descriptions.

MedLLaMA_13B is pretrained on a medical corpus, and PMC_LLaMA_13B is further fine-tuned from it.

| Version | Link | Brief | Release Date |
| --- | --- | --- | --- |
| PMC_LLaMA_13B | https://huggingface.co/axiong/PMC_LLaMA_13B | Instruction-tuned | 2023/09/01 |
| MedLLaMA_13B | https://huggingface.co/chaoyi-wu/MedLLaMA_13B | LLaMA pre-trained on 4.8M PubMed Central papers | 2023/04/25 |
| PMC_LLaMA_7B | https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B | LLaMA-7B fine-tuned on PMC papers for 5 epochs | 2023/04/25 |
| PMC_LLaMA_7B_10_epoch | https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch | Same as PMC_LLaMA_7B but trained for 10 epochs | 2023/05/01 |

Latest News

We have released a new model, PMC_LLaMA_13B, fine-tuned on our instruction-following dataset. It follows user instructions better than MedLLaMA_13B.

As before, it can be easily loaded with:

import transformers
import torch

# Load the instruction-tuned model and its tokenizer from the Hugging Face Hub.
tokenizer = transformers.LlamaTokenizer.from_pretrained('axiong/PMC_LLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained('axiong/PMC_LLaMA_13B')

Environment

Set up the required environment as follows:

conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.6 -c pytorch -c nvidia
pip install transformers==4.28.1 sentencepiece datasets
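
After installation, a quick sanity check can confirm the setup. A minimal sketch; the expected versions are the ones pinned above:

import torch
import transformers

# Confirm the pinned versions are the ones actually in use.
print(torch.__version__)          # expect 1.13.0
print(transformers.__version__)   # expect 4.28.1
print(torch.cuda.is_available())  # True if the CUDA 11.6 build sees a GPU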

Quick Start

Check simple_test.py to quickly try PMC-LLaMA, or follow the simple example below.

import transformers
import torch

# Load PMC_LLaMA_13B and its tokenizer.
tokenizer = transformers.LlamaTokenizer.from_pretrained('axiong/PMC_LLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained('axiong/PMC_LLaMA_13B')

sentence = 'Hello, doctor'
batch = tokenizer(
    sentence,
    return_tensors="pt",
    add_special_tokens=False
)

# Generate a continuation of at most 200 tokens.
with torch.no_grad():
    generated = model.generate(inputs=batch["input_ids"], max_length=200, top_k=50)
    print('model predict:', tokenizer.decode(generated[0]))
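
Since PMC_LLaMA_13B is instruction-tuned, it typically answers best when prompted in its training format. The sketch below assumes an Alpaca-style template; the question text is illustrative, and simple_test.py remains the authoritative reference for the exact template:

import transformers
import torch

tokenizer = transformers.LlamaTokenizer.from_pretrained('axiong/PMC_LLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained('axiong/PMC_LLaMA_13B')

# Assumed Alpaca-style instruction template; see simple_test.py for the exact format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat are common symptoms of type 2 diabetes?\n\n"
    "### Response:"
)

batch = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
with torch.no_grad():
    generated = model.generate(batch["input_ids"], max_new_tokens=200, top_k=50)

# Decode only the newly generated tokens, skipping the echoed prompt.
answer = tokenizer.decode(
    generated[0][batch["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)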

Training

The training process is divided into two phases: pre-training and instruction tuning.

Pre-training

The pre-training script is located at Pretrain/training.sh.

Our pretraining dataset is sourced from S2ORC. Only papers with PubMed IDs are deemed medical-related and used during pretraining.
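
For illustration, the filtering step might look like the sketch below. The JSONL layout and the pubmed_id field name are assumptions about the S2ORC metadata release, not code from this repo:

import json

def filter_medical_papers(metadata_path, output_path):
    """Keep only S2ORC records that carry a PubMed ID (treated as medical-related)."""
    kept = 0
    with open(metadata_path) as fin, open(output_path, 'w') as fout:
        for line in fin:
            record = json.loads(line)
            # 'pubmed_id' is an assumed field name in the S2ORC metadata schema.
            if record.get('pubmed_id'):
                fout.write(json.dumps(record) + '\n')
                kept += 1
    return kept

print(filter_medical_papers('s2orc_metadata.jsonl', 'medical_subset.jsonl'))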

For more details on how to fine-tune LLaMA, refer to Finetune_LLAMA.

Instruction Tuning

We also provide the instruction-tuning script at SFT/train.py. Our instruction dataset is available at PMC LLaMA Instructions.
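
The instruction data can be inspected with the datasets library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub as axiong/pmc_llama_instructions; check the link above for the exact dataset ID and field names:

from datasets import load_dataset

# Dataset ID and split name are assumptions; see the PMC LLaMA Instructions page.
data = load_dataset('axiong/pmc_llama_instructions', split='train')
print(data[0])  # one instruction-following record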

Results

QA Benchmark

| Method | Model Size | USMLE (OOD/ID) | MedMCQA (ID) | PubMedQA (ID) |
| --- | --- | --- | --- | --- |
| Human (pass) | - | 50.0 | -- | 60.0 |
| Human (expert) | - | 87.0 | 90.0 | 78.0 |
| ChatGPT | 175B | 57.0 | 44.7 | 63.9 |
| LLaMA-2 | 13B | 42.73 | 37.41 | 68.0 |
| LLaMA-2 | 70B | 43.68 | 35.02 | 74.3 |
| Med-Alpaca | 13B | 30.85 | 31.13 | 53.2 |
| Chat-Doctor | 7B | 33.93 | 31.10 | 54.3 |
| PMC_LLaMA_13B | 13B | 56.36 | 56.04 | 77.9 |

Note that the human and zero-shot results marked with * are taken from LMFlow.
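
For reference, multiple-choice benchmarks like USMLE and MedMCQA are usually scored by reading the predicted option letter off the generation. A minimal sketch of that pattern, not the repo's actual evaluation code:

import re

def extract_choice(response: str):
    """Heuristically pull the first standalone option letter (A-D) from a response."""
    match = re.search(r'\b([A-D])\b', response)
    return match.group(1) if match else None

print(extract_choice('The correct answer is B: metformin.'))  # -> B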

Zero-shot Cases

We demonstrate PMC_LLaMA_13B's responses to out-of-domain queries.

Note that, because it is trained on papers, MedLLaMA_13B may generate citation numbers (LLaMA sometimes does this as well); we omit them in these cases to show the main content. For PMC_LLaMA_13B, extracting the correct answer is much easier because its output is structured.

Acknowledgements

Minimal LLaMA -- https://github.com/zphang/minimal-llama

Stanford Alpaca -- https://github.com/tatsu-lab/stanford_alpaca

LMFlow -- https://github.com/OptimalScale/LMFlow/tree/main/src/lmflow

LLaMA: Open and Efficient Foundation Language Models -- https://arxiv.org/abs/2302.13971

Contact

If you have any questions, please feel free to contact [email protected].
