Sevia MedSAM V3 (LESION-ONLY)

MedSAM fine-tuned for cervical lesion-only segmentation.

Key Improvement

  • LESION-ONLY: Trained exclusively on the 339 lesion annotations (see the filtering sketch below)
  • Previous versions incorrectly included all annotation types (2,140 total)
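
A minimal sketch of the lesion-only filtering described above, assuming the dataset exposes an annotation label field (the actual schema of Godsonntungi2/sevia-cervical-segmentation is not documented here):

from datasets import load_dataset

# Hypothetical filtering step: the field name "label" and the value
# "lesion" are assumptions, not the dataset's real schema
ds = load_dataset("Godsonntungi2/sevia-cervical-segmentation", split="train")
lesion_ds = ds.filter(lambda ex: ex["label"] == "lesion")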

Performance

Metric   Score
IoU      69.13%
Dice     75.65%
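
For reference, both metrics can be computed directly from binary masks (a generic sketch, not the card's actual evaluation code):

import numpy as np

def iou_dice(pred: np.ndarray, gt: np.ndarray):
    # IoU = |pred ∩ gt| / |pred ∪ gt|; Dice = 2|pred ∩ gt| / (|pred| + |gt|)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / max(union, 1)
    dice = 2 * inter / max(pred.sum() + gt.sum(), 1)
    return iou, dice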

Model Details

  • Base Model: MedSAM (flaviagiammarino/medsam-vit-base)
  • Fine-tuned: Mask decoder + last 2 encoder blocks + neck (see the sketch after this list)
  • Loss: Focal + Dice (also sketched below)
  • Epochs: 100
  • Dataset: Godsonntungi2/sevia-cervical-segmentation
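
A minimal sketch of the selective fine-tuning and combined loss named above, reconstructed from the bullet points; this is not the actual training script, and alpha/gamma are common defaults rather than the values used here:

import torch
import torch.nn.functional as F
from transformers import SamModel

model = SamModel.from_pretrained("flaviagiammarino/medsam-vit-base")

# Freeze everything, then re-enable the parts named above: the mask
# decoder, the vision-encoder neck, and the last 2 encoder blocks
for p in model.parameters():
    p.requires_grad = False
for module in (model.mask_decoder,
               model.vision_encoder.neck,
               *model.vision_encoder.layers[-2:]):
    for p in module.parameters():
        p.requires_grad = True

def focal_dice_loss(logits, targets, alpha=0.25, gamma=2.0, eps=1e-6):
    # Focal term: down-weights easy pixels (alpha/gamma values used
    # in training are not stated in this card)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)
    focal = (alpha * (1 - p_t) ** gamma * bce).mean()
    # Soft Dice term on predicted probabilities
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum()
    dice = 1 - (2 * inter + eps) / (probs.sum() + targets.sum() + eps)
    return focal + dice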

Usage

from transformers import SamModel, SamProcessor
import torch
from PIL import Image

# Load model
processor = SamProcessor.from_pretrained("Godsonntungi2/sevia-lesion-segmentation-medsam-v3")
model = SamModel.from_pretrained("Godsonntungi2/sevia-lesion-segmentation-medsam-v3")

# Load image
image = Image.open("cervical_image.jpg").convert("RGB")

# Define bounding box prompt [x1, y1, x2, y2]
bbox = [[100, 100, 400, 400]]

# Process
inputs = processor(image, input_boxes=[bbox], return_tensors="pt")

# Predict
with torch.no_grad():
    outputs = model(**inputs, multimask_output=False)

# Get binary mask (pred_masks are low-resolution logits; see the
# post-processing sketch below for a full-resolution mask)
mask = outputs.pred_masks.sigmoid() > 0.5
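
The raw pred_masks come out at the model's low internal resolution. To recover a mask at the original image size, the standard transformers SAM post-processing can be applied (a usage sketch; this step is not shown in the original card):

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
mask = masks[0][0, 0].numpy()  # boolean mask at the original image size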

Two-Stage Pipeline

Use with a YOLO detector for fully automatic segmentation:

from ultralytics import YOLO
from huggingface_hub import hf_hub_download
import torch

# Reuses the processor and model loaded in the Usage example above

# Stage 1: Detect lesions. Ultralytics cannot load a Hugging Face repo
# path directly, so download the weights first.
weights = hf_hub_download("Godsonntungi2/sevia-lesion-detection-yolo11-v3", "best.pt")
detector = YOLO(weights)
detections = detector.predict(image, conf=0.5)

# Stage 2: Segment each detected lesion with MedSAM
for box in detections[0].boxes.xyxy:
    bbox = box.tolist()  # [x1, y1, x2, y2]
    inputs = processor(image, input_boxes=[[bbox]], return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, multimask_output=False)
    mask = outputs.pred_masks.sigmoid() > 0.5
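
To merge the per-detection results into one full-image lesion map, the upsampled masks can be OR-combined (a sketch under the same assumptions as the Usage section; the merging step is not part of this card):

import numpy as np

# Accumulate every detection into a single binary lesion map
lesion_map = np.zeros((image.height, image.width), dtype=bool)
for box in detections[0].boxes.xyxy:
    inputs = processor(image, input_boxes=[[box.tolist()]], return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, multimask_output=False)
    masks = processor.image_processor.post_process_masks(
        outputs.pred_masks.cpu(),
        inputs["original_sizes"].cpu(),
        inputs["reshaped_input_sizes"].cpu(),
    )
    lesion_map |= masks[0][0, 0].numpy()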