Can you share the fine-tuning notebook/code?
I have had issues running the Whisper fine-tuning notebooks from the Hugging Face repos. Could you share the fine-tuning code you used?
Hey, I used this notebook from Hugging Face: https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb. I had to change a few things, like the tokenizer, since Whisper was not trained on Luganda; I used the Swahili tokenizer instead.
I have reinstalled and restarted, but I still get this error on 'training_args = Seq2SeqTrainingArguments(...)':
ImportError: Using the Trainer with PyTorch requires accelerate>=0.21.0: Please run pip install transformers[torch] or pip install accelerate -U
No worries, this worked:
https://github.com/huggingface/peft/blob/main/examples/int8_training/peft_bnb_whisper_large_v2_training.ipynb