Instructions to use ReBatch/Reynaerde-7B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ReBatch/Reynaerde-7B-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ReBatch/Reynaerde-7B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ReBatch/Reynaerde-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("ReBatch/Reynaerde-7B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ReBatch/Reynaerde-7B-Instruct with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ReBatch/Reynaerde-7B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ReBatch/Reynaerde-7B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
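The same OpenAI-compatible endpoint can be called from Python's standard library instead of curl; a minimal sketch (the helper names are my own, and it assumes the vLLM server from the step above is running on localhost:8000):

```python
import json
from urllib import request

def build_chat_payload(prompt, model="ReBatch/Reynaerde-7B-Instruct"):
    """Build an OpenAI-compatible chat-completions request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt, base_url="http://localhost:8000/v1"):
    """POST the payload to the running server and return the assistant's reply."""
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

The same request body also works against the SGLang server below, with the base URL changed to port 30000.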
- SGLang
How to use ReBatch/Reynaerde-7B-Instruct with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ReBatch/Reynaerde-7B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ReBatch/Reynaerde-7B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ReBatch/Reynaerde-7B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ReBatch/Reynaerde-7B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Docker Model Runner
How to use ReBatch/Reynaerde-7B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/ReBatch/Reynaerde-7B-Instruct
```
Reynaerde-7B-v3
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3, trained on the ReBatch/ultrachat_400k_nl, BramVanroy/stackoverflow-chat-dutch, and BramVanroy/no_robots_dutch datasets.
Model description
This is a Dutch chat model, built on Mistral 7B Instruct v0.3 and further fine-tuned with supervised fine-tuning (SFT) on multiple datasets.
Intended uses & limitations
The model may generate wrong, misleading, and potentially even offensive content. Use at your own risk. Use it with Mistral's chat template.
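For reference, Mistral's instruct template wraps each user turn in `[INST] … [/INST]` markers. The sketch below only illustrates that shape and is not guaranteed to match the tokenizer's exact output; in practice, let `tokenizer.apply_chat_template` produce the prompt:

```python
def format_mistral_chat(messages):
    """Roughly render a chat in Mistral's [INST] format (illustration only)."""
    text = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            text += f"[INST] {msg['content']} [/INST]"
        else:  # assistant turn, closed with the end-of-sequence token
            text += f" {msg['content']}</s>"
    return text
```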
Training and evaluation data
It achieves the following results on the evaluation set:
- Loss: 0.8596
Training procedure
This model was trained with QLoRA in bfloat16 with Flash Attention 2 on a single A100 PCIe GPU, using the SFT script from the Alignment Handbook on RunPod.
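A sketch of what such a QLoRA setup typically looks like with `transformers` and `peft`. Only QLoRA, bfloat16, and Flash Attention 2 are stated by the card; the quantization and LoRA values below (NF4, r, alpha, dropout) are illustrative assumptions, not the actual training config:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the base model in 4-bit with bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # assumption: NF4 is the usual QLoRA choice
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

# Attach trainable low-rank adapters (r/alpha/dropout are illustrative)
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)
```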
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
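The scheduler entries above describe a cosine decay with 10% linear warmup, and the effective batch size follows from train_batch_size 3 × gradient_accumulation_steps 2 = 6. A sketch of that schedule (the function is my own approximation, not the alignment-handbook implementation):

```python
import math

def lr_at_step(step, total_steps, peak_lr=2e-4, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then cosine decay to 0."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Effective batch size: per-device batch 3 x gradient accumulation 2
total_train_batch_size = 3 * 2
```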
Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
Model Developer
The Mistral-7B-Instruct-v0.3 model, on which this model is based, was created by Mistral AI. The fine-tuning was done by Julien Van den Avenne.
Model tree for ReBatch/Reynaerde-7B-Instruct
Base model
mistralai/Mistral-7B-v0.3