
Language-Guided Image Registration
Using Large Multimodal Models to Establish Spatial Correspondence Between Images
Tell2Reg introduces an approach that leverages pre-trained large multimodal models to establish spatial correspondence between images: regions that respond to the same language prompt in both images are treated as corresponding.
- Eliminates the need for conventional displacement fields or transformation parameters
- Utilizes GroundingDINO and SAM to detect and segment corresponding regions across images (see the sketch after this list)
- Provides a fully automated, training-free registration algorithm
- Demonstrates particular effectiveness with medical imaging data
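The sketch below illustrates how such a prompt-driven detect-then-segment pipeline could pair regions across two images. It assumes the Hugging Face transformers ports of GroundingDINO and SAM; the model checkpoints, the example prompt, the file names, and the default thresholds are illustrative assumptions, and this is not the authors' implementation.

```python
import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

# Illustrative checkpoints; the paper does not necessarily use these exact models.
det_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
det_model = AutoModelForZeroShotObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam_model = SamModel.from_pretrained("facebook/sam-vit-base")


def prompt_to_masks(image: Image.Image, prompt: str) -> list[torch.Tensor]:
    """Detect regions matching a text prompt, then segment each detection."""
    # GroundingDINO: text-conditioned detection of candidate boxes.
    inputs = det_processor(images=image, text=prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = det_model(**inputs)
    detections = det_processor.post_process_grounded_object_detection(
        outputs, inputs["input_ids"], target_sizes=[image.size[::-1]]
    )[0]

    masks = []
    for box in detections["boxes"]:
        # SAM: refine each detected box into pixel-level masks.
        sam_inputs = sam_processor(
            image, input_boxes=[[box.tolist()]], return_tensors="pt"
        )
        with torch.no_grad():
            sam_outputs = sam_model(**sam_inputs)
        # SAM returns several candidate masks per box; in practice one would
        # keep the best candidate, e.g. by its predicted IoU score.
        candidates = sam_processor.image_processor.post_process_masks(
            sam_outputs.pred_masks,
            sam_inputs["original_sizes"],
            sam_inputs["reshaped_input_sizes"],
        )[0][0]
        masks.append(candidates)
    return masks


# The same prompt applied to both images yields region pairs that are treated
# as spatially corresponding; no displacement field is estimated.
prompt = "prostate gland."  # hypothetical prompt, lowercase and ending with "."
fixed_masks = prompt_to_masks(Image.open("fixed.png").convert("RGB"), prompt)
moving_masks = prompt_to_masks(Image.open("moving.png").convert("RGB"), prompt)
region_pairs = list(zip(fixed_masks, moving_masks))
```

Because correspondence comes from reusing an identical prompt on both images, the output is a set of paired regions rather than a dense displacement field, consistent with the training-free formulation described above.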
This advances medical image registration by enabling accurate alignment of patient images acquired over time or across modalities, which is crucial for diagnosis, treatment planning, and disease monitoring in clinical settings.
Tell2Reg: Establishing spatial correspondence between images by the same language prompts