aguitauwu
Latest commit: 19e6fe7 Add llamafile variants
- 1.61 kB Add llamafile variants
- 737 Bytes 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 723 Bytes 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 15.8 kB Update README.md
- 971 Bytes 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 119 Bytes Uploading Yuuki model
- 456 kB 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 328 MB Uploading Yuuki model
- 655 MB Uploading Yuuki model
rng_state.pth: Detected Pickle imports (7)
- "_codecs.encode",
- "torch._utils._rebuild_tensor_v2",
- "numpy._core.multiarray._reconstruct",
- "numpy.dtype",
- "torch.ByteStorage",
- "numpy.ndarray",
- "collections.OrderedDict"
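The import warning above comes from scanning the pickle stream inside `rng_state.pth` without executing it. The same check can be reproduced locally with the standard library's `pickletools`, which walks opcodes statically; a minimal sketch (the function name `pickle_imports` is illustrative, not a Hub API):

```python
import collections
import pickle
import pickletools

def pickle_imports(data: bytes) -> set:
    """List every module.name a pickle stream would import.

    Scans opcodes without executing them, so it is safe to run on
    untrusted files (unlike pickle.load, which runs the imports).
    """
    found = set()
    strings = []  # recent string args, consumed by STACK_GLOBAL (protocol 4+)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # arg is "module qualname" separated by a space
            found.add(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.add(f"{strings[-2]}.{strings[-1]}")
        if isinstance(arg, str):
            strings.append(arg)
    return found

# Demo on a harmless pickle (protocol 2, the default torch.save uses):
blob = pickle.dumps(collections.OrderedDict(a=1), protocol=2)
print(pickle_imports(blob))  # {'collections.OrderedDict'}
```

A `.pth` checkpoint saved by `torch.save` is a zip archive whose `data.pkl` member holds the pickle stream, so in practice you would open it with `zipfile` first and scan that member.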
- 14.5 kB Uploading Yuuki model
- 1.47 kB Uploading Yuuki model
- 131 Bytes 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 3.56 MB 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 507 Bytes 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 24.2 kB Uploading Yuuki model
training_args.bin: Detected Pickle imports (10)
- "transformers.trainer_pt_utils.AcceleratorConfig",
- "transformers.training_args.OptimizerNames",
- "torch.device",
- "transformers.trainer_utils.IntervalStrategy",
- "transformers.trainer_utils.SaveStrategy",
- "accelerate.state.PartialState",
- "transformers.training_args.TrainingArguments",
- "accelerate.utils.dataclasses.DistributedType",
- "transformers.trainer_utils.HubStrategy",
- "transformers.trainer_utils.SchedulerType"
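For tensor files, the usual fix is re-saving the weights as safetensors, which stores raw tensors with no executable pickle. `training_args.bin`, however, holds a `TrainingArguments` object rather than tensors, so it cannot be converted that way; if it must be loaded, one standard mitigation is a restricted unpickler that only resolves an explicit allowlist of classes. A minimal stdlib-only sketch (the `SAFE` set is a hypothetical placeholder; a real one would list the transformers/accelerate names flagged above):

```python
import io
import pickle

# Hypothetical allowlist: extend with the exact classes the file needs.
SAFE = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on the allowlist."""

    def find_class(self, module, name):
        if (module, name) in SAFE:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked import: {module}.{name}")

def safe_loads(data: bytes):
    """Unpickle bytes, rejecting anything outside SAFE."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

This follows the "restricting globals" pattern from the `pickle` documentation: every import the stream requests passes through `find_class`, so unexpected classes raise instead of being instantiated.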
- 5.84 kB Uploading Yuuki model
- 798 kB 🌸 Initial Yuuki v0.1 setup - Training in progress (Step 1,417)
- 329 MB Add GGUF quantized models (Q4_0, Q4_K_M, Q5_K_M, Q8_0, F32)
- 329 MB Add llamafile variants
- 60.7 MB Add GGUF quantized models (Q4_0, Q4_K_M, Q5_K_M, Q8_0, F32)
- 60.7 MB Add llamafile variants
- 63.3 MB Add GGUF quantized models (Q4_0, Q4_K_M, Q5_K_M, Q8_0, F32)
- 63.3 MB Add llamafile variants
- 68.1 MB Add GGUF quantized models (Q4_0, Q4_K_M, Q5_K_M, Q8_0, F32)
- 68.1 MB Add llamafile variants
- 91.3 MB Add GGUF quantized models (Q4_0, Q4_K_M, Q5_K_M, Q8_0, F32)
- 91.3 MB Add llamafile variants
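The quantized variants listed above (Q4_0, Q4_K_M, Q5_K_M, Q8_0, F32) can be produced with llama.cpp's conversion and quantization tools; the llamafile variants then bundle each GGUF with Mozilla's llamafile runtime. A hedged sketch of the GGUF side, assuming a recent llama.cpp checkout (paths and the `Yuuki-v0.1` directory name are placeholders):

```shell
# Convert the original HF checkpoint to a full-precision GGUF
# (convert_hf_to_gguf.py ships with llama.cpp).
python convert_hf_to_gguf.py ./Yuuki-v0.1 --outfile yuuki-f32.gguf --outtype f32

# Derive the smaller variants listed above from the F32 file.
for q in Q4_0 Q4_K_M Q5_K_M Q8_0; do
  llama-quantize yuuki-f32.gguf "yuuki-${q}.gguf" "$q"
done
```

The size progression in the listing (60.7 MB for Q4_0 up to 329 MB for F32) is consistent with this workflow: each quantization type trades file size against precision, with the K-quants (Q4_K_M, Q5_K_M) sitting between the legacy Q4_0 and the near-lossless Q8_0.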