LLaVA-Surg: Towards Multimodal Surgical Assistant via Structured Surgical Video Learning

This is the code for the paper "LLaVA-Surg: Towards Multimodal Surgical Assistant via Structured Surgical Video Learning". We will release the full code soon.

Contents

Data Generation

See INSTRUCT/README.md. (Figures: Surg-QA dataset overview and statistics.)

Qualitative Analysis

(Figures: four qualitative examples.)

LICENSE

The data, code, and model checkpoints are intended and licensed for research use only. They are also subject to additional restrictions dictated by the Terms of Use of LLaMA and WebSurg. The instruction tuning data is made available under CC BY-NC 4.0. The data, code, and model checkpoints may be used for non-commercial purposes only, and any models trained using the dataset should be used only for research purposes. It is expressly prohibited to use models trained on this data in clinical care or for any clinical decision-making purposes.

Citation

If you find LLaVA-Surg useful for your research and applications, please cite using this BibTeX:

@article{li2024llava,
  title={LLaVA-Surg: Towards Multimodal Surgical Assistant via Structured Surgical Video Learning},
  author={Li, Jiajie and Skinner, Garrett and Yang, Gene and Quaranto, Brian R and Schwaitzberg, Steven D and Kim, Peter CW and Xiong, Jinjun},
  journal={arXiv preprint arXiv:2408.07981},
  year={2024}
}
