Foundation-Sec-Cybersecurity-8B-Merged

A fine-tune of fdtn-ai/Foundation-Sec-8B specialized for cybersecurity tasks. The LoRA adapter weights have been merged into the base model, so the checkpoint can be deployed like any standard causal language model with no adapter handling required.
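
A minimal sketch of how such a merge is typically produced with PEFT; the adapter path and output directory are placeholders, not the actual training artifacts:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model, attach the trained LoRA adapter, and fold the adapter
# weights into the base weights so no PEFT dependency is needed at inference.
base = AutoModelForCausalLM.from_pretrained(
    "fdtn-ai/Foundation-Sec-8B",
    torch_dtype=torch.bfloat16,
)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("Foundation-Sec-Cybersecurity-8B-Merged")
AutoTokenizer.from_pretrained("fdtn-ai/Foundation-Sec-8B").save_pretrained("Foundation-Sec-Cybersecurity-8B-Merged")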

Model Description

This model was trained on ~50,000 cybersecurity instruction-response pairs from:

  • Trendyol Cybersecurity Dataset (35K samples)
  • Fenrir v2.0 Dataset (12K samples)
  • Primus-Instruct (3x upsampled; see the mixing sketch after this list)
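
A rough sketch of how this mixture can be assembled with the datasets library; the Hub IDs below are placeholders, since the exact dataset repositories are not listed here:

from datasets import load_dataset, concatenate_datasets

# Placeholder Hub IDs -- substitute the actual dataset repositories.
trendyol = load_dataset("org/trendyol-cybersecurity", split="train")  # ~35K samples
fenrir = load_dataset("org/fenrir-v2", split="train")                 # ~12K samples
primus = load_dataset("org/primus-instruct", split="train")

# "3x upsampled": the smaller Primus-Instruct split is repeated three times
# before the combined mixture is shuffled.
mixture = concatenate_datasets([trendyol, fenrir] + [primus] * 3).shuffle(seed=42)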

Training Details

Parameter             Value
Base Model            fdtn-ai/Foundation-Sec-8B
Training Samples      ~50,000
Epochs                2
LoRA Rank             16
LoRA Alpha            32
Learning Rate         2e-4
Max Sequence Length   1024 tokens
Target Modules        7 (q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj)
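
A hedged reconstruction of the PEFT configuration implied by the table above; values not listed there (dropout, batch size, precision) are illustrative assumptions:

from peft import LoraConfig
from transformers import TrainingArguments

max_seq_length = 1024  # applied when tokenizing/packing the training data

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,   # assumption; dropout is not listed in the table
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="foundation-sec-cyber-lora",
    num_train_epochs=2,
    learning_rate=2e-4,
    per_device_train_batch_size=4,    # assumption
    gradient_accumulation_steps=4,    # assumption
    bf16=True,                        # assumption
    logging_steps=50,
)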

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "sainikhiljuluri2015/Foundation-Sec-Cybersecurity-8B-Merged",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("sainikhiljuluri2015/Foundation-Sec-Cybersecurity-8B-Merged", trust_remote_code=True)

prompt = "What are the indicators of a ransomware attack?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Enable sampling so that the temperature setting actually takes effect.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

API Usage

import requests

API_URL = "https://YOUR_ENDPOINT_URL/v1/chat/completions"

response = requests.post(API_URL, json={
    "model": "sainikhiljuluri2015/Foundation-Sec-Cybersecurity-8B-Merged",
    "messages": [{"role": "user", "content": "What is SQL injection?"}],
    "max_tokens": 300
})
print(response.json()["choices"][0]["message"]["content"])
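
If the endpoint exposes an OpenAI-compatible API (as vLLM and TGI do), the openai client can be used instead; the base URL and API key below are placeholders:

from openai import OpenAI

client = OpenAI(base_url="https://YOUR_ENDPOINT_URL/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="sainikhiljuluri2015/Foundation-Sec-Cybersecurity-8B-Merged",
    messages=[{"role": "user", "content": "What is SQL injection?"}],
    max_tokens=300,
)
print(response.choices[0].message.content)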

Cybersecurity Capabilities

  • 🔍 Threat analysis and classification
  • 🚨 Security alert triage
  • 📋 Incident response guidance
  • 🦠 Malware analysis
  • 📊 MITRE ATT&CK mapping
  • 🔐 Vulnerability assessment
  • 💉 SQL injection detection
  • 🎣 Phishing analysis
  • 🔑 CVE knowledge
  • 🛡️ Security best practices