Instruction-guided Multi-Granularity Segmentation and Captioning with Large Multimodal Model

1TAO Technology, Alibaba Group
2The Hong Kong Polytechnic University
3Peng Cheng Laboratory

*Indicates Equal Contribution

Indicates Corresponding Author

{pengye.zl,zenghui.szh,jingsonglang.ljs}@taobao.com, xuyuan127@gmail.com, zhouzikunhit@gmail.com
Figure: Capabilities of MGLMM.

MGLMM is a versatile LMM that handles a wide range of tasks involving both textual and pixel-level mask responses. We show its qualitative results in the following scenarios: multi-granularity segmentation and captioning, referring segmentation, multiple/empty segmentation, panoptic segmentation, reasoning segmentation, image-level captioning, and conversation.
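
To make the instruction-driven behavior concrete, the snippet below lists hypothetical example prompts for several of these scenarios. The exact instruction templates used by MGLMM are not given on this page, so these strings are purely illustrative.

```python
# Hypothetical instruction examples for MGLMM's task scenarios.
# The actual prompt templates used by the model may differ.
EXAMPLE_INSTRUCTIONS = {
    "panoptic_segcap": "Describe the image and segment every object and region you mention.",
    "fine_grained_segcap": "Describe the image in detail, segmenting fine-grained parts such as clothing and accessories.",
    "referring_segmentation": "Segment the tennis racket held by the player.",
    "multiple_empty_segmentation": "Segment all dogs in the image, if any.",
    "reasoning_segmentation": "Segment the object that protects the rider's head.",
    "image_captioning": "Provide a one-sentence caption for the image.",
}

for task, prompt in EXAMPLE_INSTRUCTIONS.items():
    print(f"{task}: {prompt}")
```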

Abstract

Large Multimodal Models (LMMs) have achieved significant progress by extending large language models. Building on this progress, the latest LMMs can generate dense pixel-wise segmentation through the integration of segmentation models. Despite these innovations, the textual responses and segmentation masks of existing works remain at the instance level, showing limited ability to perform fine-grained understanding and segmentation even when provided with detailed textual cues. To overcome this limitation, we introduce a Multi-Granularity Large Multimodal Model (MGLMM), which can seamlessly adjust the granularity of Segmentation and Captioning (SegCap) following user instructions, from panoptic SegCap to fine-grained SegCap. We name this new task Multi-Granularity Segmentation and Captioning (MGSC). Observing the lack of a benchmark for model training and evaluation on the MGSC task, we establish a benchmark with masks and captions aligned across multiple granularities using our customized automated annotation pipeline. This benchmark comprises 10K images and more than 30K image-question pairs. We will release our dataset along with the implementation of our automated dataset annotation pipeline for further research. In addition, we propose a novel unified SegCap data format to unify heterogeneous segmentation datasets; it effectively facilitates learning to associate object concepts with visual features during multi-task training. Extensive experiments demonstrate that MGLMM excels at more than eight downstream tasks and achieves state-of-the-art performance in MGSC, GCG, image captioning, referring segmentation, multiple and empty segmentation, and reasoning segmentation. The strong performance and versatility of MGLMM underscore its potential impact on advancing multimodal research.
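
For illustration, here is a minimal sketch of what one MGSC benchmark sample with aligned multi-granularity captions and masks could look like. The field names, the "[SEG]" placeholder, and the file paths are assumptions rather than the released annotation schema.

```python
# A hypothetical MGSC sample: the same image is annotated with captions and
# masks at both panoptic and fine-grained granularity. All names below are
# illustrative assumptions, not the actual benchmark format.
sample = {
    "image": "images/000123.jpg",
    "question": "Describe this image and segment what you mention.",
    "panoptic_response": {
        "caption": "A tennis player [SEG] swings a racket [SEG] on a court [SEG].",
        "masks": ["masks/000123_player.png", "masks/000123_racket.png", "masks/000123_court.png"],
    },
    "fine_grained_response": {
        "caption": "A player wearing a hat [SEG], a wristband [SEG], and a skirt [SEG] swings a racket [SEG].",
        "masks": ["masks/000123_hat.png", "masks/000123_wristband.png",
                  "masks/000123_skirt.png", "masks/000123_racket.png"],
    },
}

# Each "[SEG]" placeholder in the caption corresponds to one mask, in order.
assert sample["panoptic_response"]["caption"].count("[SEG]") == len(sample["panoptic_response"]["masks"])
```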

Qualitative Results

Motivation

Figure: Motivation of MGLMM.

The left part of the figure shows a case where GLaMM overlooks the tennis racket, tennis ball, and microphone in both its mask and text responses. Moreover, existing models can only describe the image at the instance level and produce instance masks aligned with the output text. Hence, they can hardly perceive fine-grained objects, such as the hat, wristband, and skirt of the player in the right part of the figure, even when provided with detailed textual cues. The absence of these abilities limits the universality and comprehensiveness of LMMs.

Model Framework

Figure: Framework of MGLMM.

Left: The model architecture of MGLMM. Right: The proposed unified data format for multi-task learning.
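
As a rough illustration of the unified-format idea, the sketch below converts a phrase/mask pair into a single record that pairs a text response containing segmentation placeholders with an ordered list of masks. The `<p>...</p>` and `[SEG]` tokens, the field names, and the helper `to_unified_record` are assumptions for illustration, not the paper's exact specification.

```python
# A minimal sketch of unifying heterogeneous segmentation data (referring,
# panoptic, captioning) into one SegCap-style record: a text response with
# one segmentation placeholder per grounded phrase, plus aligned masks.
# Token names and fields are illustrative assumptions.
def to_unified_record(image_path, instruction, phrases, mask_paths):
    """Build a unified SegCap training record from phrase/mask pairs."""
    response = " ".join(f"<p>{phrase}</p> [SEG]" for phrase in phrases)
    return {
        "image": image_path,
        "instruction": instruction,
        "response": response,       # text with one [SEG] per grounded phrase
        "masks": list(mask_paths),  # masks aligned with [SEG] order
    }

# Example: converting a referring-segmentation sample into the unified format.
record = to_unified_record(
    "images/000321.jpg",
    "Segment the tennis ball.",
    ["tennis ball"],
    ["masks/000321_ball.png"],
)
print(record["response"])  # "<p>tennis ball</p> [SEG]"
```

Under this sketch, the same record structure can hold a referring sample with a single phrase, a panoptic sample with many phrases, or a grounded caption, which is what allows one training loop to mix heterogeneous datasets.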

Experiments