SynCABEL: Synthetic Contextualized Augmentation for Biomedical Entity Linking


SynCABEL is a novel framework that addresses data scarcity in biomedical entity linking through synthetic contextualized data generation. The method is introduced in our [paper].

SynCABEL (MedMentions Edition)

This is a fine-tuned version of Llama-3-8B trained on MedMentions together with SynthMM, our synthetic dataset generated via the SynCABEL framework.

Base Model    : meta-llama/Meta-Llama-3-8B-Instruct
Training Data : MedMentions (real) + SynthMM (synthetic)
Fine-tuning   : Supervised Fine-Tuning (SFT)

Training Data Composition

The model is trained on a mix of human-annotated and synthetic data:

MedMentions (human)  : 203,282 mentions
SynthMM (synthetic)  : 460,878 mentions

To ensure balanced learning, human data is upsampled during training so that each batch contains:

50% human-annotated data
50% synthetic data

In other words, although SynthMM is roughly 2.3x larger than MedMentions, the model always sees a 1:1 ratio of human to synthetic examples, preventing synthetic data from overwhelming human supervision.
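For illustration, here is a minimal sketch of one way to implement this 1:1 interleaving (the function and its signature are hypothetical, not the released training code):

import random

def balanced_batches(human, synthetic, batch_size, seed=0):
    """Yield batches that are half human, half synthetic examples.

    The smaller human pool is drawn with replacement (upsampled), so the
    1:1 ratio holds even though SynthMM is larger than MedMentions.
    """
    rng = random.Random(seed)
    half = batch_size // 2
    while True:
        batch = rng.choices(human, k=half) + rng.sample(synthetic, k=half)
        rng.shuffle(batch)  # mix human and synthetic examples within the batch
        yield batch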

Usage

Loading

import torch
from transformers import AutoModelForCausalLM

# Load the model (trust_remote_code is required for the custom architecture)
model = AutoModelForCausalLM.from_pretrained(
    "AnonymousARR42/SynCABEL_MedMentions",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)
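The checkpoint exposes entity linking through a custom sample() method (loaded via trust_remote_code), shown below in both unconstrained and constrained modes.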

Unconstrained Generation

# Let the model freely generate concept names
sentences = [
    "[TCA]{Chemicals & Drugs} was prescribed for depression",
    "Patient diagnosed with [AS]{Disorders}"
]

results = model.sample(
    sentences=sentences,
    constrained=False,
    num_beams=2,
)

for i, beam_results in enumerate(results):
    print(f"Input: {sentences[i]}")

    mention = beam_results[0]["mention"]
    print(f"Mention: {mention}")

    for j, result in enumerate(beam_results):
        print(
            f"Beam {j+1}:\n"
            f"Predicted concept name:{result['pred_concept_name']}\n"
            f"Predicted code: {result['pred_concept_code']}\n"
            f"Beam score: {result['beam_score']:.3f}\n"
        )

Output:

Input: [TCA]{Chemicals & Drugs} was prescribed for depression
Mention: TCA
Beam 1:
Predicted concept name: TCA - Tricyclic antidepressant
Predicted code: NO_CODE
Beam score: 0.959

Beam 2:
Predicted concept name: TCA - Tricyclo(3.3.1.13,7)decane-4,5-dione, 1,3-dihydro-1-methyl-5-oxo-
Predicted code: NO_CODE
Beam score: 0.844

Input: Patient diagnosed with [AS]{Disorders}
Mention: AS
Beam 1:
Predicted concept name: AS - Ankylosing spondylitis
Predicted code: C0038013
Beam score: 0.968

Beam 2:
Predicted concept name: AS - Aortic stenosis
Predicted code: C0003507
Beam score: 0.851
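As the examples show, inputs mark the mention in square brackets and its semantic group in curly braces. A small helper makes the convention explicit (the function name and signature are ours, not part of the released API):

def tag_mention(text, mention, semantic_group):
    """Rewrite the first occurrence of `mention` as '[mention]{Semantic Group}'."""
    return text.replace(mention, f"[{mention}]{{{semantic_group}}}", 1)

sentence = tag_mention("TCA was prescribed for depression", "TCA", "Chemicals & Drugs")
# -> "[TCA]{Chemicals & Drugs} was prescribed for depression"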

Constrained Decoding (Recommended for Entity Linking)

# Constrained to valid biomedical concepts
sentences = [
    "[TCA]{Chemicals & Drugs} was prescribed for depression",
    "Patient diagnosed with [AS]{Disorders}"
]

results = model.sample(
    sentences=sentences,
    constrained=True,
    num_beams=2,
)

for i, beam_results in enumerate(results):
    print(f"Input: {sentences[i]}")

    mention = beam_results[0]["mention"]
    print(f"Mention: {mention}")

    for j, result in enumerate(beam_results):
        print(
            f"Beam {j+1}:\n"
            f"Predicted concept name:{result['pred_concept_name']}\n"
            f"Predicted code: {result['pred_concept_code']}\n"
            f"Beam score: {result['beam_score']:.3f}\n"
        )

Output:

Input: [TCA]{Chemicals & Drugs} was prescribed for depression
Mention: TCA
Beam 1:
Predicted concept name: Tricyclic antidepressant
Predicted code: C0003290
Beam score: 0.566

Beam 2:
Predicted concept name: Tricyclic Antidepressants
Predicted code: C0003290
Beam score: 0.287

Input: Patient diagnosed with [AS]{Disorders}
Mention: AS
Beam 1:
Predicted concept name: AS - Ankylosing spondylitis
Predicted code: C0038013
Beam score: 0.968

Beam 2:
Predicted concept name: AS - Aortic stenosis
Predicted code: C0003507
Beam score: 0.851
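Constrained decoding restricts beam search so that the generated string must spell out a valid concept name from the target ontology, which is why TCA now resolves to a real CUI (C0003290) instead of NO_CODE. The sketch below shows the general technique using the Hugging Face prefix_allowed_tokens_fn hook over a token trie; it illustrates the idea only and is not the model's actual implementation (the concept list and prompt handling are simplified):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

def build_trie(concept_names):
    """Nested-dict trie over the token ids of every valid concept name."""
    trie = {}
    for name in concept_names:
        node = trie
        for tok in tokenizer.encode(name, add_special_tokens=False):
            node = node.setdefault(tok, {})
        node[tokenizer.eos_token_id] = {}  # generation may stop here
    return trie

trie = build_trie(["Tricyclic antidepressant", "Ankylosing spondylitis"])

def allowed_tokens(batch_id, generated_ids):
    # Walk the trie along the tokens generated so far (a real implementation
    # would strip the prompt prefix first) and return the legal continuations.
    node = trie
    for tok in generated_ids.tolist():
        node = node.get(tok, {})
    return list(node.keys()) or [tokenizer.eos_token_id]

# out = model.generate(prompt_ids, num_beams=2,
#                      prefix_allowed_tokens_fn=allowed_tokens)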

Scores

Entity linking performance (Recall@1) on four biomedical benchmarks. The "Avg." column reports the mean score across the four benchmarks.

Model                     MM-ST21PV   QUAERO-MEDLINE   QUAERO-EMEA   SPACCC      Avg.
                          (English)   (French)         (French)      (Spanish)
SciSpacy                  53.8        40.5             37.1          13.2        36.2
SapBERT                   51.1        50.6             49.8          33.9        46.4
CODER-all                 56.6        58.7             58.1          43.7        54.3
SapBERT-all               64.6        74.7             67.9          47.9        63.8
ArboEL                    74.5        70.9             62.8          49.0        64.2
mBART-large               65.5        61.5             58.6          57.7        60.8
 + Guided inference       70.0        72.8             71.1          61.8        68.9
 + SynCABEL (our method)  71.5        77.1             75.3          64.0        72.0
Llama-3-8B                69.0        66.4             65.5          59.9        65.2
 + Guided inference       74.4        77.5             72.9          64.2        72.3
 + SynCABEL (our method)  75.4        79.7             79.0          67.0        75.3


Speed and Memory

Model         Model memory (GB)   Candidates (GB)   Speed (mentions/s)
SapBERT       2.1                 20.1              575.5
ArboEL        1.2                 7.1               38.9
mBART         2.3                 5.4               51.0
Llama-3-8B    28.6                5.4               19.1

Measured on a single H100 GPU with constrained decoding.
