Gemma-2B (IT) — NIRF Lookup 2025 (Merged FP16)

Base: google/gemma-2-2b-it. This repository contains merged full weights (the LoRA adapter baked into the base model).

Intended use: Short factual lookup answers about NIRF 2025 (Indian institutes).

How to use (summary): Load the model with Transformers' AutoTokenizer and AutoModelForCausalLM using this repo id. Use bfloat16 on an NVIDIA L4. Provide an instruction (and optional context), then generate.
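The loading steps above can be sketched as follows. This is a minimal inference sketch, assuming `transformers` and `torch` are installed and that the example question is representative of the intended lookup use; the prompt itself is a made-up illustration.

```python
# Minimal inference sketch for this repo (assumes a CUDA device such as an L4).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "coderop12/gemma2b-nirf-lookup-2025"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # bf16, as recommended above
    device_map="auto",
)

# Gemma-IT models expect the chat template; wrap the instruction as a user turn.
messages = [{"role": "user", "content": "Which institute ranked first overall in NIRF 2025?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```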

Training summary: QLoRA (4-bit) on Gemma-2-2b-it. LoRA r=16, alpha=64, dropout=0.1. Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj. bf16 on NVIDIA L4. Data: 100 NIRF 2025 lookup samples.

License & notice: This model is a Model Derivative of google/gemma-2-2b-it and is distributed under Google's Gemma Terms of Use. See the NOTICE file in this repo.

Downloads last month: 9
Format: Safetensors
Model size: 3B params
Tensor type: BF16

Model tree for coderop12/gemma2b-nirf-lookup-2025
- Base model: google/gemma-2-2b
- This model: fine-tuned from the base (one of 740 fine-tunes)
- Quantizations: 1 model