Kimi-Linear-48B-A3B-Instruct BNB - FP4

Model Details

  • Quantization Method: BNB
  • Bits: 4
  • Compute Dtype: bfloat16
  • Format: Safetensors
  • Model size: 49B params
  • Tensor types: F32, F16, U8

Model tree for JasmineBBB/Kimi-Linear-48B-A3B-Instruct-bnb-4bit

  • Quantized: this model (one of 23 quantized variants)