DenseNet ONNX (exported from TorchVision)

This repository hosts ONNX exports of TorchVision DenseNet ImageNet classifiers. Weights are sourced from Torch Hub (pytorch/vision:v0.10.0) and exported to ONNX for easy deployment with ONNX Runtime.

  • Source weights: TorchVision v0.10.0 (ImageNet-1k)
  • Supported variants: densenet121, densenet161, densenet169
  • Default input: NCHW, 1 x 3 x 224 x 224, RGB, normalized with the ImageNet mean/std
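The TorchVision transform pipeline used below implements this preprocessing, but if your deployment environment has no torch/torchvision, the same steps can be sketched with NumPy and Pillow alone (resize the shorter side to 256, center-crop 224, scale to [0, 1], then normalize). This is an illustrative equivalent, not part of the export itself:

```python
import numpy as np
from PIL import Image

# ImageNet normalization constants (same values as the TorchVision pipeline)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img: Image.Image) -> np.ndarray:
    """Resize shorter side to 256, center-crop 224, normalize, return NCHW float32."""
    w, h = img.size
    scale = 256 / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))
    x = np.asarray(img, dtype=np.float32) / 255.0   # HWC in [0, 1]
    x = (x - MEAN) / STD                            # broadcast over channels
    return x.transpose(2, 0, 1)[None]               # HWC -> NCHW, add batch dim

# Dummy image just to show the resulting tensor layout
x = preprocess(Image.new("RGB", (640, 480)))
print(x.shape, x.dtype)  # (1, 3, 224, 224) float32
```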

If you are browsing on the Hugging Face Hub, you’ll find the ONNX file(s) under the Files tab. You can also reproduce the export yourself using the script below.

Quick start: Inference with ONNX Runtime

import numpy as np
import onnxruntime as ort
import torch
from PIL import Image
from torchvision.transforms import Compose, Resize, CenterCrop, PILToTensor, ConvertImageDtype, Normalize

# 1) Load ONNX model
session = ort.InferenceSession("densenet121.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# 2) Preprocess (ImageNet)
img = Image.open("your_image.jpg").convert("RGB")
transform = Compose([
    Resize(256),
    CenterCrop(224),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
inp = transform(img).unsqueeze(0).numpy()  # NCHW float32

# 3) Run inference
scores = session.run([output_name], {input_name: inp})[0]  # (1, 1000)
pred = int(scores.argmax(axis=1)[0])
print("Top-1 class index:", pred)

To map indices to ImageNet-1k labels, you can fetch: https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
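A minimal sketch of that mapping, combined with a softmax to turn the raw logits into probabilities. A random array stands in here for the `scores` output of the Quick-start snippet; the label file is the one linked above:

```python
import urllib.request
import numpy as np

# For illustration, random logits stand in for the (1, 1000) `scores`
# array produced by the Quick-start snippet.
scores = np.random.randn(1, 1000).astype(np.float32)

LABELS_URL = "https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt"
labels = urllib.request.urlopen(LABELS_URL).read().decode("utf-8").splitlines()

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax(scores)[0]
top5 = probs.argsort()[-5:][::-1]  # indices of the 5 largest probabilities
for idx in top5:
    print(f"{labels[idx]}: {probs[idx]:.4f}")
```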

Export from Torch Hub to ONNX (script)

Use the provided export script to reproduce an ONNX file from Torch Hub:

python export_to_onnx.py --variant densenet121 --output densenet121.onnx --opset 17

Options:

  • --variant: one of densenet121, densenet161, densenet169
  • --opset: ONNX opset version (default: 17)
  • --static-batch: make batch size static (by default, export uses dynamic batch)
  • --no-sample: export with a dummy input instead of a sample image

Minimal export

import torch
from PIL import Image
from torchvision.transforms import Compose, Resize, CenterCrop, PILToTensor, ConvertImageDtype, Normalize

variant = "densenet121"  # or 161/169
onnx_path = "densenet121.onnx"

torch_model = torch.hub.load("pytorch/vision:v0.10.0", variant, pretrained=True).eval()

img = Image.open("your_image.jpg").convert("RGB")
transform = Compose([
    Resize(256),
    CenterCrop(224),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img_tensor = transform(img).unsqueeze(0).contiguous()

torch.onnx.export(
    torch_model,
    img_tensor,
    onnx_path,
    opset_version=17,
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch_size"}, "logits": {0: "batch_size"}},
)