Structured and Abstractive Reasoning on Multi-modal Relational Knowledge Images

  • Authors: Yichi Zhang, Zhuo Chen, Lingbing Guo, Lei Liang, Wen Zhang, Huajun Chen

Understanding and reasoning with abstractive information from the visual modality presents significant challenges for current multi-modal large language models (MLLMs). Among the various forms of abstractive information, Multi-Modal Relational Knowledge (MMRK), which represents abstract relational structures between multi-modal entities using node-edge formats, remains largely under-explored. In particular, STructured and Abstractive Reasoning (STAR) on such data has received little attention from the research community. To bridge the dual gaps in large-scale high-quality data and capability enhancement methodologies, this paper makes the following key contributions: (i) an automatic STAR data engine capable of synthesizing images with MMRK to build multi-modal instruction data with reliable chain-of-thought reasoning for various STAR tasks, and (ii) a comprehensive two-stage capability enhancement training framework, accompanied by a suite of evaluation protocols tailored to different STAR tasks. Based on these contributions, we introduce STAR-64K, a dataset comprising 64K high-quality multi-modal instruction samples, and conduct experiments across 5 open-source MLLMs. Experimental results show that our two-stage enhancement framework enables smaller 3B/7B models to significantly outperform GPT-4o on STAR. Additionally, we provide an in-depth analysis of the effectiveness of various designs, data transferability, and scalability.

  • The full paper, data, and code will be released in the future.

🎨 Introduction

[Figure: introduction]

Many images contain abstractive high-level semantic information that is artificially defined and does not exist in nature. Teaching MLLMs to understand and reason about this abstractive information is a significant challenge. In this work, we introduce a novel type of abstractive image data: multi-modal relational knowledge images.
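
As a purely illustrative aside (the actual MMRK schema used by STAR has not been released, so every field name below is an assumption), one such instance can be thought of as a small node-edge graph over multi-modal entities that is subsequently rendered as an image:

```python
# Hypothetical node-edge representation of a single MMRK instance before it
# is rendered into an image; field names are illustrative, not the STAR schema.
mmrk_example = {
    "nodes": [
        {"id": "e1", "modality": "image", "content": "entity_1.jpg"},
        {"id": "e2", "modality": "text",  "content": "Entity B"},
        {"id": "e3", "modality": "text",  "content": "Entity C"},
    ],
    "edges": [
        {"head": "e1", "relation": "related_to", "tail": "e2"},
        {"head": "e2", "relation": "part_of",    "tail": "e3"},    ],
}

# A STAR-style question then requires reasoning over this abstract structure,
# e.g. a multi-hop query: which entities are reachable from e1 within two hops?
```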

📌 Synthesis Pipeline Overview

[Figure: synthesis pipeline overview, covering the data engine, training pipeline, seed tasks, and CoT prompts]

Here is an overview of our data engine, the training pipeline, the seed tasks, and the CoT prompts.

📈 Training and Inference

  • First, install LLaMA-Factory and vLLM in your Python environment.
  • Second, download the MLLMs used in the experiments, including Qwen2.5-VL-3B/7B/32B, LLaVA-1.5-7B, and LLaVA-NEXT-34B.
  • Next, run bash train.sh to fine-tune the MLLMs with LLaMA-Factory.
  • Finally, use vLLM to run inference with the trained MLLMs, collect the outputs, and compute the metrics (a minimal inference sketch is shown below).
  • The full code will be released in the future.
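
Since the full code has not been released yet, here is a minimal sketch of the inference step only, assuming a Qwen2.5-VL checkpoint fine-tuned with LLaMA-Factory and queried through vLLM's offline API; the checkpoint path, image path, and example question are placeholders rather than part of the STAR release.

```python
# Minimal vLLM offline-inference sketch for a fine-tuned Qwen2.5-VL model.
# All paths and the example question are illustrative placeholders.
from PIL import Image
from vllm import LLM, SamplingParams

MODEL_PATH = "path/to/qwen2.5-vl-7b-star-sft"  # checkpoint exported by LLaMA-Factory (placeholder)

llm = LLM(model=MODEL_PATH, limit_mm_per_prompt={"image": 1})
sampling = SamplingParams(temperature=0.0, max_tokens=1024)

image = Image.open("path/to/mmrk_image.png").convert("RGB")
question = "In the relational graph shown in the image, which entity is connected to 'Entity A' by the relation 'part_of'?"

# Qwen2.5-VL uses a chat template with vision placeholder tokens; in practice
# the template is usually applied via the model's processor.
prompt = (
    "<|im_start|>user\n"
    "<|vision_start|><|image_pad|><|vision_end|>"
    f"{question}<|im_end|>\n<|im_start|>assistant\n"
)

outputs = llm.generate(
    [{"prompt": prompt, "multi_modal_data": {"image": image}}],
    sampling_params=sampling,
)
print(outputs[0].outputs[0].text)  # chain-of-thought and final answer
```

Metric computation depends on the specific STAR task and its evaluation protocol, so the sketch above stops at collecting the raw generations.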
