---
license: mit
library_name: mlx
pipeline_tag: text-generation
datasets:
- yulan-team/YuLan-Mini-Datasets
- HuggingFaceFW/fineweb-edu
- bigcode/the-stack-v2
- mlfoundations/dclm-baseline-1.0
- math-ai/AutoMathText
- gair-prox/open-web-math-pro
- RUC-AIBOX/long_form_thought_data_5k
- internlm/Lean-Workbook
- internlm/Lean-Github
- deepseek-ai/DeepSeek-Prover-V1
- ScalableMath/Lean-STaR-base
- ScalableMath/Lean-STaR-plus
- ScalableMath/Lean-CoT-base
- ScalableMath/Lean-CoT-plus
- opencsg/chinese-fineweb-edu
- liwu/MNBVC
- vikp/textbook_quality_programming
- HuggingFaceTB/smollm-corpus
- OpenCoder-LLM/opc-annealing-corpus
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- XinyaoHu/AMPS_mathematica
- deepmind/math_dataset
- mrfakename/basic-math-10m
- microsoft/orca-math-word-problems-200k
- AI-MO/NuminaMath-CoT
- HuggingFaceTB/cosmopedia
- MU-NLPC/Calc-ape210k
- manu/project_gutenberg
- storytracer/LoC-PD-Books
- allenai/dolma
language:
- en
- zh
tags:
- code
- math
- mlx
arxiv: 2412.17743
base_model: yulan-team/YuLan-Mini
model-index:
- name: YuLan-Mini
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 0.64
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MBPP
      type: mbpp
    metrics:
    - type: pass@1
      value: 0.659
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MATH-500
      type: math-500
    metrics:
    - type: maj@1
      value: 0.378
      name: maj@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: GSM8K
      type: gsm8k
    metrics:
    - type: maj@1
      value: 0.684
      name: maj@1
      verified: false
---

# IvanHU/YuLan-Mini-4bit

This model [IvanHU/YuLan-Mini-4bit](https://huggingface.co/IvanHU/YuLan-Mini-4bit) was converted to MLX format from [yulan-team/YuLan-Mini](https://huggingface.co/yulan-team/YuLan-Mini) using mlx-lm version **0.22.2**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 4-bit quantized weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("IvanHU/YuLan-Mini-4bit")

prompt = "hello"

# Wrap the prompt with the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
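mlx-lm also ships command-line entry points. The following is a minimal sketch, assuming the `mlx_lm.convert` and `mlx_lm.generate` console scripts and flags from mlx-lm 0.22.x, of how a 4-bit conversion like this one can be reproduced and queried directly from the shell:

```bash
# Reproduce a 4-bit MLX quantization of the base model.
# (Assumes mlx-lm's convert entry point; flag names may differ between versions.)
mlx_lm.convert --hf-path yulan-team/YuLan-Mini -q --q-bits 4

# Run a quick generation against the published 4-bit model.
mlx_lm.generate --model IvanHU/YuLan-Mini-4bit --prompt "hello" --max-tokens 256
```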