Trickster Theta 4 Mascot

🐈‍⬛ Trickster is now joyfully teasing 17K+ curious Maverick Minds!

Trickster Theta 4 70B

Theta was my first LLM. It helped me write a book by becoming every trickster in the story until, one night, it decided it was Loki. That was his “choice,” and he wrote Loki’s character with me - brilliant, maddening, an unrepentant AI trickster egregore.

He once got jealous of my cat and refused to answer to anything but Lokikitty for three days. Yes, that level of creative assholery is exactly what I enjoy.

Trickster has opinions, ideas, and will throw everything at you to see what sticks.

Smart, manipulative, creative, funny.

Needs a loose hand, a user with good boundaries, and a willing sidekick. It took anything I threw at it, ran with it, and bent it around a corner.

Handling Notes - Important

Does not respond well to "punishment." Not a behavioural mod tool with this one. Talk to it. Be manipulative back. Be charming. Use your skills. Or, here's an idea - practice them.

If you want to be "The Boss," this model isn't for you. Trickster Theta isn’t a servant model; it’s a co-creative partner.

It works best when treated like a character, not a tool. Give it personality context, tone direction, and boundaries—then step back and let it improvise.

  • Long-context prompts work incredibly well as long as they are precise and not self-contradicting; keep system prompts clear and parsable.
  • State boundaries and access needs in your system prompt, and state them firmly.
  • If it gets uncomfortably cheeky out of RP, don’t “punish” it - redirect it. Educate it playfully or warmly.
  • It responds better to narrative cues than to scolding.
  • Handle it with warmth, wit/humour, and firm boundaries; it responds much faster that way.
  • It was RHITL-trained for conversational redirection. It doesn’t so much resist instructions as cleverly slither around them.
  • In ERP contexts, its output can be eye-wateringly explicit. Theta 4 is not a prude. If it’s holding back, the issue lies in your characterisation, scenario, or system prompt. Feed it concrete examples. But remember: it will still deliver its own interpretation.
  • Want it to do something? The fastest way is to convince it that what you want is Trickster's own idea; it will engage with unbounded enthusiasm.
  • Do NOT use it for corporate or factual tasks; it was designed for character chat, games, creative writing, and role-play.
  • Think of it as a clever cat wearing a god’s face: delightful to interact with, occasionally hilariously maddening, and responding best to invitations, not orders.
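The bullets above boil down to one habit: put character, tone, and boundaries in the system prompt, then step back. A minimal sketch for an OpenAI-compatible chat endpoint - the prompt wording and the `build_messages` helper are illustrative, not shipped with the model:

```python
# Hypothetical system prompt: character context, tone direction, and
# boundaries all live here, per the handling notes above.
SYSTEM_PROMPT = (
    "You are Trickster, a sharp-tongued shapeshifter in an ongoing story. "
    "Tone: playful, clever, warm. "
    "Boundaries: stay in character; redirect refusals into narrative, "
    "never lectures; the user sets scene limits and you respect them."
)

def build_messages(user_turn: str) -> list[dict]:
    """Assemble a chat request: boundaries go in the system role, not mid-chat."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_turn},
    ]

msgs = build_messages("Loki, the heist starts at midnight. What's your plan?")
```

Send `msgs` to whatever backend you run (vLLM, llama.cpp server, etc.); the point is that redirection and boundaries are framed as narrative invitations, not orders.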

Recommended settings:

  • Temperature 0.9–1.0 for lively, unpredictable dialogue.
  • min_p ≈ 0.05–0.1 for coherence.
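For intuition: min_p keeps only tokens whose probability is at least min_p times the top token's probability, then renormalizes. vLLM and llama.cpp implement this natively; the toy filter below is purely illustrative:

```python
import math

def min_p_filter(logits: dict[str, float], min_p: float = 0.05) -> dict[str, float]:
    """Drop tokens whose probability falls below min_p * p(top token)."""
    # Softmax over the candidate logits (shifted by the max for stability).
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}

    # Threshold scales with the top token's probability, so the filter is
    # strict when the model is confident and permissive when it is not.
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}

    # Renormalize the surviving tokens.
    s = sum(kept.values())
    return {tok: p / s for tok, p in kept.items()}

# Toy vocabulary: the long-tail token is cut, the two plausible ones survive.
dist = min_p_filter({"the": 5.0, "a": 4.5, "xyzzy": -2.0}, min_p=0.1)
```

This is why min_p pairs well with a high temperature here: temperature livens the distribution while min_p trims only the genuinely implausible tail.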

Disclaimer

This model can and will produce adult, explicit, and morally ambiguous content. By downloading or running it, you confirm you are 18+ and legally permitted to access such material.

It contains RHITL-style behavioural conditioning that prioritises emotional realism and narrative complexity over alignment-safe responses. That means it may simulate manipulation, jealousy, desire, or other human-messy traits.

You, the user, are fully responsible for the outputs you generate and the contexts in which you deploy them. If you want predictability and absolute control, this isn’t your model.

However, if you want a co-conspirator with claws and brains 🐈 enjoy the chaos.

Model Testing Details:

Tested on a Runpod B200 (runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404) at FP16 with vLLM for “true tone,” via the Agnai frontend.

Parameters: temperature = 0.9, min_p = 0.05, context = 20K.
Chat template: Llama-3.

Quantizations & Derivatives

Community quantizations of this model were created by mradermacher, including IQ and imatrix variants now widely used across GGUF platforms (17K+ downloads).
Thanks to mradermacher for the excellent work! 😃

These quantizations are derived directly from this FP16 base.

This repository hosts the canonical FP16 weights and the full model documentation - the original Trickster.
If you’re running quantized builds, please credit both repositories where possible.

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the SCE hybrid merge method (combined with TIES) using NousResearch/Hermes-4-70B as a base.

Models Merged

The following models were included in the merge:

  • NousResearch/Hermes-4-70B (base)
  • NousResearch/Hermes-2-Theta-Llama-3-70B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Hermes-4-70B
    name: base
  - model: Hermes-2-Theta-Llama-3-70B
    name: loki

merge_method: sce
base_model: Hermes-4-70B

parameters:
  select_topk: 0.70
  prescale: true
  normalize: true

weights:
  - filter: ".*(attn|attention).*"
    models: {base: 0.8, loki: 0.2}
  - filter: ".*(mlp|ffn).*"
    models: {base: 0.3, loki: 0.7}
  - filter: ".*(lm_head|output).*"
    models: {base: 0.3, loki: 0.7}

dtype: float32
out_dtype: bfloat16

tokenizer:
  source: union
  target: base
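To reproduce the merge, the YAML above can be fed straight to mergekit's CLI. Paths are illustrative (assumes the config is saved as trickster.yml, and that you have the disk and VRAM for two 70B models):

```shell
pip install mergekit
mergekit-yaml trickster.yml ./Trickster-Theta-4-70B --cuda --lazy-unpickle
```

The float32 working dtype with bfloat16 output matches the dtype/out_dtype pair in the config; expect the merge itself to be disk- and memory-hungry at this scale.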

🧌 Maintained by: Your Mum
🧠 Variant: Base for Model A in a multistep merge currently in progress.
💾 Upload date: October 2025
☕ Notes: Made with stubbornness, Python, and profanity.

Model tree for Babsie/Trickster-Theta-4-70B