Full BF16 version

#141
by jerrydev11 - opened

I'm assuming you're saving the final merge as an 8-bit version, but the actual merge is done in full bf16.

Can we get the full bf16 merge for those of us who have the resources to run it? I'm assuming the quality of the bf16 merge would be way better. Thanks for your great work Phr00t!

The recipe for making the safetensors is in the metadata. The fp8 conversion nodes and dtypes could be removed/changed to generate a bf16 version for whoever wants to make it.
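For anyone who wants to pull that recipe out of the file: the safetensors format starts with an 8-byte little-endian length followed by a JSON header, and free-form strings (such as a ComfyUI workflow) live under the `__metadata__` key. A minimal sketch using only the standard library — the `workflow` key name is an assumption and may differ in the actual file:

```python
import json
import os
import struct
import tempfile

def read_safetensors_metadata(path):
    """Return the free-form __metadata__ dict from a .safetensors file ({} if absent)."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # 8-byte LE header size
        header = json.loads(f.read(header_len))         # JSON header follows
    return header.get("__metadata__", {})

# Demo: build a minimal, tensor-free safetensors blob so this runs
# without the actual model file (the "workflow" key is illustrative).
header = json.dumps({"__metadata__": {"workflow": "{ ...recipe... }"}}).encode()
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as tmp:
    tmp.write(struct.pack("<Q", len(header)) + header)
    path = tmp.name

print(read_safetensors_metadata(path)["workflow"])  # prints the embedded recipe string
os.remove(path)
```

Once the recipe JSON is recovered, the fp8 nodes/dtypes mentioned above can be edited out before re-running the merge.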

Hello! Could you please share how you performed the merge: was it done in ComfyUI or directly via a Python script?
The metadata in the file isn't enough for me to reproduce the results on my own, and I'd like to build a similar model without any "acceleration" LoRAs.
Thank you for your great work, and I look forward to your reply!
