MuXodious/LFM2-8B-A1B-absolute-heresy

#1651
by MuXodious - opened

https://huggingface.co/MuXodious/LFM2-8B-A1B-absolute-heresy

Please and thank you in advance πŸ™

It's queued!

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#LFM2-8B-A1B-absolute-heresy for quants to appear.

I think there's an error with the summary page; it shows the alert "LFM2-8B-A1B-absolute-heresy: unsupported repository name".

The model unfortunately failed with the following error:

LFM2-8B-A1B-absolute-heresy     INFO:hf-to-gguf:Set model quantization version
LFM2-8B-A1B-absolute-heresy     INFO:hf-to-gguf:Set model tokenizer
LFM2-8B-A1B-absolute-heresy     Traceback (most recent call last):
LFM2-8B-A1B-absolute-heresy       File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 10932, in <module>
LFM2-8B-A1B-absolute-heresy         main()
LFM2-8B-A1B-absolute-heresy       File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 10926, in main
LFM2-8B-A1B-absolute-heresy         model_instance.write()
LFM2-8B-A1B-absolute-heresy       File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 691, in write
LFM2-8B-A1B-absolute-heresy         self.prepare_metadata(vocab_only=False)
LFM2-8B-A1B-absolute-heresy       File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 827, in prepare_metadata
LFM2-8B-A1B-absolute-heresy         self.set_vocab()
LFM2-8B-A1B-absolute-heresy       File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 799, in set_vocab
LFM2-8B-A1B-absolute-heresy         self._set_vocab_gpt2()
LFM2-8B-A1B-absolute-heresy       File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1269, in _set_vocab_gpt2
LFM2-8B-A1B-absolute-heresy         tokens, toktypes, tokpre = self.get_vocab_base()
LFM2-8B-A1B-absolute-heresy       File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 971, in get_vocab_base
LFM2-8B-A1B-absolute-heresy         tokenizer = AutoTokenizer.from_pretrained(self.dir_model)
LFM2-8B-A1B-absolute-heresy       File "/llmjob/share/python/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 1122, in from_pretrained
LFM2-8B-A1B-absolute-heresy         raise ValueError(
LFM2-8B-A1B-absolute-heresy     ValueError: Tokenizer class TokenizersBackend does not exist or is not currently imported.
LFM2-8B-A1B-absolute-heresy     job finished, status 1
LFM2-8B-A1B-absolute-heresy     job-done<0 LFM2-8B-A1B-absolute-heresy noquant 1>
LFM2-8B-A1B-absolute-heresy
LFM2-8B-A1B-absolute-heresy     NAME: LFM2-8B-A1B-absolute-heresy
LFM2-8B-A1B-absolute-heresy     TIME: Mon Dec 29 14:06:21 2025
LFM2-8B-A1B-absolute-heresy     WORKER: rich1

I had created some GGUF quantisations locally earlier and have been using them, so the model files should be fine. However, installing the requirements for convert_hf_to_gguf.py on latest llama.cpp in a fresh Python environment made the script fail with the same error message. It fails on transformers==4.57.3 but works as intended on transformers==5.0.0rc1.
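For reference, the failure can be reproduced outside of llama.cpp entirely, since the traceback bottoms out in `AutoTokenizer.from_pretrained`. A minimal sketch (the local model path is a placeholder):

```python
# Minimal reproduction of the conversion failure, independent of llama.cpp.
# Assumes the model repo has been downloaded locally; the path is a placeholder.
import transformers
from transformers import AutoTokenizer

print(transformers.__version__)  # fails on 4.57.3, works on 5.0.0rc1

# Same call that convert_hf_to_gguf.py makes in get_vocab_base().
# On transformers 4.57.3 this raises:
#   ValueError: Tokenizer class TokenizersBackend does not exist or is not currently imported.
tokenizer = AutoTokenizer.from_pretrained("path/to/LFM2-8B-A1B-absolute-heresy")
```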

MuXodious changed discussion status to closed

Aight, it was due to a misconfigured tokenizer_config.json and a couple of other configuration files. It should now work as stipulated in the original model's readme. Can you try again whenever you find it convenient?
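For anyone hitting the same error: it comes from the `tokenizer_class` field in tokenizer_config.json naming a class ("TokenizersBackend") that transformers 4.x doesn't know about. A rough sketch of the kind of fix, assuming the replacement class is `PreTrainedTokenizerFast` (check the original model repo's config for the right value):

```python
# Hypothetical sketch of the config fix (file and field names per the
# traceback; the replacement class is an assumption, not confirmed here).
import json

path = "path/to/LFM2-8B-A1B-absolute-heresy/tokenizer_config.json"

with open(path) as f:
    cfg = json.load(f)

# transformers 4.x has no "TokenizersBackend" class, so AutoTokenizer bails.
if cfg.get("tokenizer_class") == "TokenizersBackend":
    cfg["tokenizer_class"] = "PreTrainedTokenizerFast"  # assumed replacement

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```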

MuXodious changed discussion status to open

2 AM sounds pretty convenient for me; it's queued!

That, indeed, was the issue. Thanks again and enjoy your night!

MuXodious changed discussion status to closed
