How to use with Ollama

  1. Install Ollama
  2. Download the tinyllama-exam-gen.gguf file from this repo.
  3. Create a file named Modelfile and paste the following:
    FROM ./tinyllama-exam-gen.gguf
    TEMPLATE """<|user|>\n{{ .Prompt }}</s>\n<|assistant|>\n"""
    SYSTEM """You are an AI that generates exam questions from documents."""
    
Format: GGUF
Model size: 1B params
Architecture: llama
Quantization: 8-bit
