MN-12B-Faun-RP-RU


🌟 About the model

MN-12B-Faun-RP-RU is an improved merge based on Mistral Nemo 12B that builds on the ideas behind Hydra and is aimed at:

  • 🎭 More stable and expressive roleplay
  • 📚 Improved Russian-language output
  • 🧠 An expanded vocabulary, including complex and NSFW topics
  • 🔓 Virtually no censorship

The model was assembled with TIES merging and received no additional training after the merge.

🎯 Features

  • Primary focus on the Russian language
  • Holds characters and dialogue style more consistently
  • Richer, more varied generation
  • Improved stability on long contexts (tested up to ~8192 tokens)
  • Follows instructions, but may add disclaimers on sensitive requests

⚠️ Important

The model remains uncensored, but in some cases it may add warnings about inappropriate content on direct requests. Generation is not blocked and continues after the disclaimer.

High-quality TIES merge based on Mistral Nemo 12B, focused on improved Russian fluency, stronger roleplay, richer vocabulary, and stable long-context performance.


🌍 Overview

MN-12B-Faun-RP-RU is an evolution of the Hydra-style merge, designed to push further in roleplay quality, language richness, and generation stability.

Key improvements include:

  • 📚 Better Russian
  • 🎭 More consistent and immersive roleplay behavior
  • 🧠 Expanded vocabulary, including expressive and NSFW domains
  • 🔁 More stable handling of long conversations (tested up to ~8k tokens)

The model may occasionally produce safety disclaimers when prompted directly for sensitive content, but generation continues normally afterward.

Built using TIES merging, which minimizes destructive interference between merged model weights.


🎯 Key Features

| Feature | Description |
| --- | --- |
| Languages | Russian, English |
| Censorship | Mostly uncensored (with occasional disclaimers) |
| Roleplay | Improved consistency and immersion |
| Instruction following | Strong |
| Vocabulary | Expanded, including NSFW domains |
| Context length | Stable up to ~8192 tokens |
| Architecture | Mistral Nemo 12B |

🧩 Model Composition

The merge combines the following models:

| Model | Role in merge | Weight |
| --- | --- | --- |
| MN-12B-Hydra-RP-RU | Base / foundation | 0.60 |
| Impish_Bloodmoon_12B | RP + style boost | 0.25 |
| Forgotten-Safeword-12B-v4.0 | Uncensored behavior | 0.10 |

Weights are shown before normalization; with normalize: true in the merge config they are rescaled to sum to 1 (see the sketch below).
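
A quick back-of-the-envelope check of each model's effective share after normalization (plain Python arithmetic, not mergekit's internal code):

```python
# Raw merge weights from the composition table above.
weights = {
    "MN-12B-Hydra-RP-RU": 0.60,
    "Impish_Bloodmoon_12B": 0.25,
    "Forgotten-Safeword-12B-v4.0": 0.10,
}

total = sum(weights.values())  # 0.95

# Effective share of each model once the weights are rescaled to sum to 1.
for name, weight in weights.items():
    print(f"{name}: {weight / total:.3f}")
# MN-12B-Hydra-RP-RU: 0.632
# Impish_Bloodmoon_12B: 0.263
# Forgotten-Safeword-12B-v4.0: 0.105
```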


💡 Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "limloop/MN-12B-Faun-RP-RU"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

prompt = "You are a mysterious forest faun speaking in poetic Russian."
messages = [{"role": "user", "content": prompt}]

# Build the chat-formatted prompt and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7
)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```
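
For multi-turn roleplay, the same chat template carries the running conversation. A minimal sketch reusing the `model`, `tokenizer`, `messages`, and `response` objects from the example above (the follow-up user message is illustrative only):

```python
# Append the assistant's reply and the next user turn, then generate again.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Кто ты и что охраняешь в этом лесу?"})

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```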

⚙️ Merge Details

Built using mergekit with the TIES method (Trim, Elect Sign, Merge).

Core mechanism (see the toy sketch after this list):

  1. Trim low-magnitude deltas via density
  2. Resolve sign conflicts
  3. Weighted averaging of aligned parameters
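
The toy sketch below illustrates those three steps on a pair of small weight deltas. It is a conceptual illustration only, not mergekit's actual implementation, and the helper `ties_merge` is a hypothetical name:

```python
import torch

def ties_merge(base, deltas, weights, density=0.9):
    """Toy TIES merge: trim, elect sign, then disjoint-average per-model deltas.
    Conceptual sketch only -- mergekit's real implementation differs in detail."""
    trimmed = []
    for delta in deltas:
        # 1. Trim: zero out all but the top `density` fraction of entries by magnitude.
        k = max(1, int(round(density * delta.numel())))
        threshold = delta.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta)))

    w = torch.tensor(weights, dtype=base.dtype).view(-1, *([1] * base.dim()))
    stacked = w * torch.stack(trimmed)  # weighted, trimmed deltas

    # 2. Elect sign: per-parameter majority sign of the weighted deltas.
    elected = torch.sign(stacked.sum(dim=0))

    # 3. Merge: average only the deltas whose sign agrees with the elected sign.
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    total_weight = (w.expand_as(stacked) * agree).sum(dim=0).clamp(min=1e-8)
    merged_delta = (stacked * agree).sum(dim=0) / total_weight
    return base + merged_delta

# Tiny example: two fake "task vectors" merged onto a zero base tensor.
base = torch.zeros(4)
deltas = [torch.tensor([0.5, -0.2, 0.0, 0.1]),
          torch.tensor([0.4, 0.3, -0.1, 0.0])]
print(ties_merge(base, deltas, weights=[0.6, 0.25]))
```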

Merge Configuration

```yaml
models:
  - model: limloop/MN-12B-Hydra-RP-RU
    parameters:
      weight: 0.6

  - model: SicariusSicariiStuff/Impish_Bloodmoon_12B
    parameters:
      weight: 0.25
      density: 0.9

  - model: ReadyArt/Forgotten-Safeword-12B-v4.0
    parameters:
      weight: 0.1
      density: 0.6

merge_method: ties
parameters:
  epsilon: 0.01
  normalize: true

base_model: limloop/MN-12B-Hydra-RP-RU
dtype: bfloat16

tokenizer:
  source: "base"
```
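
To reproduce the merge, the configuration above can be saved to a YAML file and run with mergekit. A sketch using mergekit's Python entry point, with illustrative paths and options (the exact API may vary between mergekit versions):

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Illustrative paths -- adjust to your environment.
CONFIG_PATH = "faun-rp-ru.yaml"      # the configuration shown above, saved to disk
OUTPUT_PATH = "./MN-12B-Faun-RP-RU"  # where the merged model will be written

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if one is available
        copy_tokenizer=True,             # keep the base model's tokenizer, per tokenizer.source
        lazy_unpickle=True,              # lower peak memory while loading checkpoint shards
    ),
)
```

Equivalently, the mergekit-yaml command-line tool can consume the same file.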

⚠️ Known Characteristics

  • No post-merge fine-tuning
  • May produce safety disclaimers before sensitive outputs
  • Occasionally switches to English in complex reasoning
  • Stronger stylistic bias in roleplay compared to Hydra