IMIS-Benchmark

Interactive Medical Image Segmentation: A Benchmark Dataset and Baseline

Junlong Cheng¶1,2, Bin Fu1, Jin Ye1,3, Guoan Wang1,4, Tianbin Li1, Haoyu Wang1,5,
Ruoyu Li2, He Yao2, Junren Chen2, JingWen Li6, Yanzhou Su1, Min Zhu§2, Junjun He‡1
1. Shanghai AI Laboratory, General Medical Artificial Intelligence
2. Sichuan University, School of Computer Science
3. Monash University
4. East China Normal University, School of Computer Science and Technology
5. Shanghai Jiao Tong University, School of Biomedical Engineering
6. Xinjiang University, School of Computer Science and Technology
¶ Main technical contribution, ‡ Corresponding authors, § Project lead

Abstract

Header Image

Interactive Medical Image Segmentation (IMIS) has long been constrained by the limited availability of large-scale, diverse, and densely annotated datasets, which hinders model generalization and consistent evaluation across models. In this paper, we introduce the IMed-361M benchmark dataset, a significant advancement in general IMIS research. First, we collect and standardize over 6.4 million medical images and their corresponding ground truth masks from multiple data sources. Then, leveraging the strong object recognition capabilities of a vision foundation model, we automatically generate dense interactive masks for each image and ensure their quality through rigorous quality control and granularity management. Unlike previous datasets, which are limited to specific modalities or sparse annotations, IMed-361M spans 14 modalities and 204 segmentation targets, totaling 361 million masks (an average of 56 masks per image). Finally, we develop an IMIS baseline network on this dataset that supports high-quality mask generation from interactive inputs, including clicks, bounding boxes, text prompts, and their combinations. We evaluate its performance on medical image segmentation tasks from multiple perspectives, demonstrating superior accuracy and scalability compared to existing interactive segmentation models. To facilitate research on foundation models in medical computer vision, we release the IMed-361M dataset and the model at https://github.com/uni-medical/IMIS-Bench.
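The dense interactive masks are produced automatically by a vision foundation model. As a rough illustration only (not the exact IMed-361M pipeline), the sketch below shows how dense mask proposals could be generated with SAM's automatic mask generator and then filtered for quality and granularity; the checkpoint path, image file, and filter thresholds are placeholders.

```python
# Illustrative sketch of dense interactive-mask generation with a SAM-style
# automatic mask generator. The actual model, checkpoint, and quality-control
# rules used for IMed-361M may differ.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Placeholder checkpoint and image paths.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
mask_generator = SamAutomaticMaskGenerator(sam, points_per_side=32)

image = cv2.cvtColor(cv2.imread("example_slice.png"), cv2.COLOR_BGR2RGB)
proposals = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', ...

# Simple granularity/quality filter: drop tiny or near-full-image proposals.
h, w = image.shape[:2]
kept = [m for m in proposals if 64 <= m["area"] <= 0.9 * h * w]
print(f"kept {len(kept)} of {len(proposals)} candidate interactive masks")
```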

IMIS Benchmark Dataset

The IMed-361M dataset is the largest publicly available, multimodal, interactive medical image segmentation dataset to date. (a) illustrates the scale of the dataset, comprising 6.4 million images, 87.6 million ground truth (GT) masks, and 273.4 million interactive masks, averaging 56 masks per image. (b) highlights the diversity of the dataset, covering 14 imaging modalities and 204 segmentation targets, categorized into six groups: Head and Neck, Thorax, Skeleton, Abdomen, Pelvis, and Lesions. (c) shows that over 83% of the images have resolutions between 256×256 and 1024×1024, ensuring broad applicability. (d) describes the fine-grained nature of the dataset, with most masks covering less than 2% of the image area, and (e) shows that IMed-361M far exceeds other datasets such as MedTrinity-25M and COSMOS in mask quantity, providing 14.4 times more masks than MedTrinity-25M.
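For reference, the per-image statistics above (mask counts and area fractions) can be computed directly from binary masks. The helper below is a minimal, hypothetical sketch with toy data, not part of the released tooling.

```python
# Hypothetical helper for the granularity statistics reported above:
# per-image mask counts and the fraction of the image each mask covers.
import numpy as np

def mask_stats(masks):
    """masks: list of binary (H, W) arrays belonging to one image.
    Returns the mask count and each mask's fraction of the image area."""
    fractions = [float(m.sum()) / m.size for m in masks]
    return len(masks), fractions

# Toy example: one small structure on a 512x512 image (~1.6% coverage),
# i.e. the fine-grained regime shown in panel (d).
toy = [np.zeros((512, 512), dtype=np.uint8) for _ in range(3)]
toy[0][:64, :64] = 1
count, fractions = mask_stats(toy)
print(count, [round(f, 4) for f in fractions])  # 3 [0.0156, 0.0, 0.0]
```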

Header Image

IMIS Baseline

Header Image

We simulate continuous interactive segmentation during training. For a given segmentation task and medical image \(x_t\), we first simulate a set of initial interactions \(u^{g}_{1}\) and \(u^{i}_{1}\) based on the corresponding ground truth \(y^g\) and interactive mask \(y^i\); these interactions can be clicks, bounding boxes (bboxes), or text prompts. Click points are uniformly sampled from the foreground regions of \(y^g\) or \(y^i\), while bboxes are defined as the smallest bounding box around the target, with an offset of 5 pixels added to each coordinate to simulate slight user bias during interaction. The entire training process involves \(K\) interactive iterations (\(K=8\) in this paper). The model's initial predictions are \(\hat{y}^{g}_{1}\) and \(\hat{y}^{i}_{1}\). After the first prediction, we simulate subsequent corrections based on the previous predictions \(\hat{y}^{g}_{k}\) and \(\hat{y}^{i}_{k}\), as well as the error regions \(\varepsilon_{k}\) between these predictions and the corresponding \(y^g\) and \(y^i\), where \(k\in \{1,...,K\}\). Additionally, we feed the low-resolution mask from the previous prediction back to the model as an extra cue. Note that the image encoder encodes each image only once during training; the subsequent interactive steps update only the prompt encoder and mask decoder parameters.
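A minimal sketch of this prompt simulation is shown below, assuming binary (H, W) NumPy masks. The uniform foreground click sampling, the tight bbox, and the sampling of corrections from the error region follow the description above; the exact offset scheme (here, random jitter of up to 5 pixels per coordinate), the RNG handling, and all function names are illustrative assumptions.

```python
# Sketch of the interaction simulation described above (not the released code).
import numpy as np

rng = np.random.default_rng(0)

def sample_click(mask, n_points=1):
    """Uniformly sample click coordinates (y, x) from the mask foreground."""
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=n_points, replace=False)
    return np.stack([ys[idx], xs[idx]], axis=1)

def sample_bbox(mask, offset=5):
    """Tightest bounding box around the target, jittered by up to `offset` px."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    jitter = rng.integers(-offset, offset + 1, size=4)
    h, w = mask.shape
    y0 = int(np.clip(y0 + jitter[0], 0, h - 1)); x0 = int(np.clip(x0 + jitter[1], 0, w - 1))
    y1 = int(np.clip(y1 + jitter[2], 0, h - 1)); x1 = int(np.clip(x1 + jitter[3], 0, w - 1))
    return y0, x0, y1, x1

def sample_correction(pred, gt):
    """Sample the next click from the error region between prediction and GT."""
    error = np.logical_xor(pred > 0.5, gt > 0)
    return sample_click(error.astype(np.uint8)) if error.any() else sample_click(gt)
```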

Experiment Results

Header Image Header Image

We compare the performance of IMIS-Net with other vision foundation models on the single-interaction segmentation task. The results show that IMIS-Net outperforms the other models in both image-level and mask-level statistics. The bounding box (bbox) interaction consistently outperforms the click interaction, as a bbox provides more boundary information. Although MedSAM and SAM-Med2D are pretrained on large-scale medical image datasets, they still exhibit clear performance gaps, primarily due to the scale and diversity of their pretraining data. SAM-Med2D performs poorly on anatomical structures such as bones because its pretraining set lacks skeletal samples. SAM and SAM-2, pretrained without medical knowledge, achieve only 60.26% and 59.57% Dice under the single-point prompt, limited by their pretraining data and interaction constraints. Increasing the number of interactions from 1 to 9 improves all models, and the gap between them narrows. Performance also depends on click position and bbox offset: when the prompt is closer to the centroid, SAM-2's Dice score increases by 2.84%, whereas all models lose 0.85%-3.94% Dice under bbox offset, with IMIS-Net showing the smallest decline, demonstrating its robustness.
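The comparisons above are reported in Dice. For completeness, a minimal sketch of the Dice metric on binary masks is given below; the smoothing constant and the 0.5 threshold are assumptions, not necessarily the evaluation settings used in the paper.

```python
# Minimal Dice score on binary masks (illustrative settings).
import numpy as np

def dice_score(pred, gt, eps=1e-6):
    pred = pred > 0.5
    gt = gt > 0.5
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

# Example: a prediction missing part of the target scores below 1.0.
gt = np.zeros((64, 64)); gt[16:48, 16:48] = 1
pred = np.zeros((64, 64)); pred[16:48, 16:40] = 1
print(round(dice_score(pred, gt), 4))  # 0.8571
```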

One model for multiple modalities and segmentation tasks

Demo GIFs 1-8: a single IMIS-Net model segmenting targets interactively across multiple imaging modalities and segmentation tasks.

This website is adapted from GMAI-MMBench and MathVista, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.