
New AI Tool Revolutionizes Medical Image Segmentation for Clinical Research

The painstaking process of segmenting – or annotating – specific areas in medical scans plays a critical role in many biomedical research projects. Whether studying changes in brain structures, like the hippocampus, or tracking the progression of a disease, researchers often find themselves laboriously outlining these regions by hand. The work is especially grueling when the structures they are trying to highlight are hard to distinguish in complex medical imagery.

Imagine, for instance, conducting a study on how the brain's hippocampus changes with age. A researcher would typically need to sift through countless brain scans and painstakingly outline the hippocampus on each one. Thankfully, a group of researchers from MIT has come up with a promising solution to this problem.

In response to these challenges, the MIT team has developed MultiverSeg, an ingenious AI-based system designed to make the image segmentation process much quicker and user-friendly. Using inputs like clicks, scribbles, and bounding boxes, users can annotate images interactively. As more images get annotated, the AI model learns from these interactions, reducing the need for further input until, eventually, it can segment new pictures independently.

Unlike previous tools, such as ScribblePrompt, which needed repeated manual input for every fresh image, MultiverSeg stores each segmented image in a unique “context set.” When a new image is uploaded, the model draws on this set to make more accurate predictions, so researchers won’t have to repeat the full segmentation process with every new image.
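To illustrate the context-set idea described above, here is a minimal sketch of the interactive loop in Python. Note that `predict_mask` is a hypothetical, toy placeholder (it simply seeds a mask from clicks and averages prior masks), not the actual MultiverSeg model; the point is only how each finished segmentation feeds back into the context set.

```python
import numpy as np

def predict_mask(image, clicks, context_set):
    """Toy stand-in for a MultiverSeg-style predictor (illustrative only).

    Combines user clicks with previously segmented (image, mask) pairs
    from the context set; the real model is a trained neural network.
    """
    mask = np.zeros(image.shape, dtype=bool)
    # Seed the prediction from user clicks given as (row, col) coordinates.
    for r, c in clicks:
        mask[r, c] = True
    # As the context set grows, prior masks serve as an extra signal;
    # here we naively average them into a "prior" region.
    if context_set:
        prior = np.mean([m for _, m in context_set], axis=0) > 0.5
        mask |= prior
    return mask

# Interactive loop: each completed segmentation joins the context set,
# so later images typically need fewer user inputs.
context_set = []
images = [np.random.rand(8, 8) for _ in range(3)]
for img in images:
    clicks = [(2, 2)]  # user interaction; fewer clicks needed as context grows
    mask = predict_mask(img, clicks, context_set)
    context_set.append((img, mask))
```

In this sketch the model's "learning" is just mask averaging; in the real system, accumulating context is what lets the model eventually segment new images with little or no user input.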

Another huge plus with MultiverSeg is that it doesn’t demand a pre-segmented dataset or any technical know-how in machine learning. Researchers can get going with it immediately, without any need for retraining or specific hardware.

“Many scientists might only have time to segment a few images per day for their research because manual image segmentation is so time-consuming. We believe this system will enable new science by allowing clinical researchers to undertake studies they were previously unable to do due to the lack of an efficient tool,” says Hallee Wong, the study’s lead author and a graduate student in electrical engineering and computer science at MIT.

Historically, researchers have relied on either interactive segmentation, guiding an AI model through inputs like scribbles, or training a task-specific AI model using hundreds of manually segmented images. Both approaches come with their own issues – either needing repetitive input or an extensive, error-prone training process. MultiverSeg combines the best aspects of these methods, learning from previous examples stored in its context set while using user interactions to predict segmentations.

During testing, MultiverSeg outperformed other cutting-edge tools for both interactive and in-context segmentation. By the ninth image, the model needed just two clicks to produce a segmentation more accurate than those of task-specific models.

Looking forward, the research team plans to collaborate with clinicians to trial MultiverSeg in real-world environments and gather user feedback for further improvements. They’re also keen on expanding the tool’s capabilities to include 3D biomedical images. This ongoing work receives generous support from Quanta Computer, Inc., the National Institutes of Health, and the Massachusetts Life Sciences Center.

To find out more about this project, you can read the original article on the MIT News website.

Max Krawiec
