
Conversation

@cpumaxx (Contributor) commented Sep 26, 2025

Add Stockmark.
I tested the conversion script before and after this PR to verify correctness.


@cpumaxx requested a review from CISC as a code owner September 26, 2025 14:04
@github-actions bot added the python (python script changes) label Sep 26, 2025
@CISC (Collaborator) commented Sep 26, 2025

Did you mean to submit a draft? This PR is quite lacking (as in it does nothing).

@cpumaxx (Contributor, Author) commented Sep 27, 2025

I had intended to modify convert_hf_to_gguf_update.py to add stockmark, as indicated by #6920 (which was output by the main convert_hf_to_gguf.py script).
Once the line was added and convert_hf_to_gguf_update.py was run, I was able to successfully convert the stockmark2-100b safetensors to GGUF q8_0.

Was extending support to new BPE tokenizers not the intention of the script?

@CISC (Collaborator) commented Sep 27, 2025

> I had intended to modify convert_hf_to_gguf_update.py with stockmark, as was indicated by #6920 (which was output by the main convert_hf_to_gguf.py script) Once the line was added and convert_hf_to_gguf_update.py was run, I was able to successfully convert the stockmark2-100b safetensors to gguf q8_0.

You may have been able to convert it, but it won't run: you've added the new stockmark2 pre-tokenizer value, but none of the code to support it. You need to figure out whether the pre-tokenizer (in tokenizer.json) uses a new regex or an already supported one, and what settings to use (mainly clean_spaces, i.e. clean_up_tokenization_spaces from tokenizer_config.json). This all needs to be added in llama-vocab.cpp (look for LLAMA_VOCAB_PRE_TYPE_).
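One way to see which regexes the pre-tokenizer uses is to read them straight out of tokenizer.json and compare them against the patterns already handled in llama-vocab.cpp. A minimal sketch (the inline sample dict stands in for the real file; the nesting follows the usual Hugging Face tokenizers layout, so verify against the actual stockmark2 tokenizer.json):

```python
# Sketch: collect every Split regex under a Hugging Face tokenizer.json
# pre_tokenizer node. In real use you would load the file with
# json.load(open("tokenizer.json")); here an inline sample stands in.
sample_tokenizer_json = {
    "pre_tokenizer": {
        "type": "Sequence",
        "pretokenizers": [
            {"type": "Split", "pattern": {"Regex": "\\p{L}+|\\p{N}+"},
             "behavior": "Isolated"},
            {"type": "ByteLevel", "add_prefix_space": False},
        ],
    }
}

def collect_regexes(node):
    """Recursively gather every Regex pattern under a pre_tokenizer node."""
    patterns = []
    if isinstance(node, dict):
        pat = node.get("pattern")
        if isinstance(pat, dict) and "Regex" in pat:
            patterns.append(pat["Regex"])
        for value in node.values():
            patterns.extend(collect_regexes(value))
    elif isinstance(node, list):
        for item in node:
            patterns.extend(collect_regexes(item))
    return patterns

regexes = collect_regexes(sample_tokenizer_json["pre_tokenizer"])
print(regexes)
```

If every extracted regex already appears in llama-vocab.cpp, the model can likely reuse an existing LLAMA_VOCAB_PRE_TYPE_ value; otherwise a new one has to be added.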

Once that is done you should be able to run the test-tokenizer-0 test with the files generated by convert_hf_to_gguf_update.py to verify the tokenizer.
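The idea behind that test can be sketched in Python: for each input string, tokenize it and compare against the expected token ids. The separator string and file layout below are assumptions about the paired files the update script generates, and the toy tokenizer stands in for llama.cpp's; the real check is the C++ test-tokenizer-0 binary.

```python
# Rough sketch of a tokenizer conformance check: split the input file on a
# separator, tokenize each case, and diff against the expected id lists.
SEP = "__ggml_vocab_test__"  # assumed separator, verify against the repo

def check_tokenizer(tokenize, inp_text, out_text):
    cases = inp_text.split(SEP)
    expected = [line.split() for line in out_text.strip().splitlines()]
    failures = []
    for text, want in zip(cases, expected):
        got = [str(t) for t in tokenize(text)]
        if got != want:
            failures.append((text, want, got))
    return failures

# Toy tokenizer standing in for llama.cpp's: one id per whitespace word.
toy_vocab = {"hello": 1, "world": 2}
tokenize = lambda s: [toy_vocab.get(w, 0) for w in s.split()]

inp = "hello world" + SEP + "hello hello"
out = "1 2\n1 1\n"
print(check_tokenizer(tokenize, inp, out))  # prints [] when every case matches
```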

You also need to commit convert_hf_to_gguf.py, which was updated by convert_hf_to_gguf_update.py with the new hash.
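That hash is a fingerprint of how the tokenizer splits a fixed check string, roughly as sketched below (a simplification: the check text here is a stand-in, and the real update script encodes it with the model's AutoTokenizer rather than a toy function):

```python
from hashlib import sha256

# Simplified sketch of the pre-tokenizer fingerprint: tokenize a fixed
# check string and hash the resulting token-id list. Two tokenizers that
# split the check string identically produce the same hash, which is how
# convert_hf_to_gguf.py maps a model to a known pre-tokenizer type.
chktxt = "Hello World, this is a pre-tokenizer check string."  # stand-in text

def toy_encode(text):
    # Stand-in for tokenizer.encode(); the real script uses AutoTokenizer.
    return [ord(c) % 251 for c in text]

chktok = toy_encode(chktxt)
chkhsh = sha256(str(chktok).encode()).hexdigest()
print(chkhsh)
```

Because the hash depends only on the token-id sequence, any change to the pre-tokenizer's splitting behavior yields a new hash, which is why the updated convert_hf_to_gguf.py must be committed alongside the rest.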

> Was extension for support of new BPE tokenizers not the intention of the script?

Yes, but a little more work is required. :)
