Phi-3 Mini SQL Generator (QLoRA Fine-tuned)

Fine-tuned version of Phi-3-mini-4k-instruct for natural language → SQL generation, trained with QLoRA on a single T4 GPU (Google Colab, ~20 min).

Evaluation: Base vs. Fine-tuned

Evaluated on 200 held-out examples from b-mc2/sql-create-context.

Model                            Exact Match
Phi-3-mini-4k-instruct (base)    2.0%
This adapter (fine-tuned)        73.5%

Training Details

  • Dataset: b-mc2/sql-create-context (1,000 train / 200 validation examples)
  • Epochs: 3
  • Effective batch size: 8
  • Learning rate: 0.0002
  • Hardware: NVIDIA T4 (Google Colab free tier)
  • Training time: 21.2 min
  • Final train loss: 0.6526
  • Best checkpoint: step 250 (by eval loss)
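The b-mc2/sql-create-context dataset exposes `question`, `context` (a CREATE TABLE statement), and `answer` fields. A minimal sketch of turning one record into a prompt/completion pair is shown below; this particular template mirrors the inference prompt in "How to Use" but is an assumption, not necessarily the exact training format.

```python
def build_example(record: dict) -> dict:
    """Format one b-mc2/sql-create-context record for SFT.

    Assumes the dataset's `question`, `context`, and `answer` fields;
    the prompt template itself is an assumption.
    """
    prompt = (
        "Given the following SQL table, write a SQL query.\n\n"
        f"Table: {record['context']}\n\n"
        f"Question: {record['question']}\n\nSQL:"
    )
    return {"prompt": prompt, "completion": " " + record["answer"]}
```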

LoRA Config

Parameter        Value
Rank (r)         16
Alpha            32
Dropout          0.05
Target modules   ['down_proj', 'qkv_proj', 'gate_up_proj', 'o_proj']
Quantization     4-bit NF4 (QLoRA)
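The table above maps roughly onto `peft` and `transformers` config objects as follows. This is a sketch of the assumed setup: the compute dtype and `task_type` are not stated in the card.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization for QLoRA (compute dtype is an assumption)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA hyperparameters from the table above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["down_proj", "qkv_proj", "gate_up_proj", "o_proj"],
    task_type="CAUSAL_LM",  # assumption: causal-LM task type
)
```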

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Tokenizer is loaded from the adapter repo, the base model from Microsoft
tokenizer = AutoTokenizer.from_pretrained("Shizu0n/phi3-mini-sql-generator", trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16, device_map="auto", trust_remote_code=True,
)
# Attach the fine-tuned LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(base_model, "Shizu0n/phi3-mini-sql-generator")
model.eval()

prompt = "Given the following SQL table, write a SQL query.\n\n"\
         "Table: employees (id, name, department, salary)\n\n"\
         "Question: What is the average salary per department?\n\nSQL:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    # Greedy decoding keeps the generated SQL deterministic
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
prompt_len = inputs["input_ids"].shape[-1]
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))

Limitations

  • Fine-tuned on only 1,000 examples, so best suited for simple- to medium-complexity SELECT queries
  • Not tested on dialect-specific SQL (PostgreSQL/MySQL-specific functions)
  • May struggle with multi-table JOINs and nested subqueries