Instructions to use explorewithai/ChatFrame-Uncensored-Instruct-Small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use explorewithai/ChatFrame-Uncensored-Instruct-Small with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="explorewithai/ChatFrame-Uncensored-Instruct-Small")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("explorewithai/ChatFrame-Uncensored-Instruct-Small")
model = AutoModelForCausalLM.from_pretrained("explorewithai/ChatFrame-Uncensored-Instruct-Small")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Local Apps
- vLLM
How to use explorewithai/ChatFrame-Uncensored-Instruct-Small with vLLM:
Install vLLM from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "explorewithai/ChatFrame-Uncensored-Instruct-Small"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "explorewithai/ChatFrame-Uncensored-Instruct-Small",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/explorewithai/ChatFrame-Uncensored-Instruct-Small
```
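The OpenAI-compatible endpoint shown in the curl example can also be called from Python with just the standard library. A minimal sketch, assuming the vLLM server from the step above is running on `localhost:8000`; the `build_chat_request` helper is our own illustration, not part of vLLM:

```python
import json
import urllib.request

def build_chat_request(model, user_content):
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_content},
        ],
    }

payload = build_chat_request(
    "explorewithai/ChatFrame-Uncensored-Instruct-Small",
    "What is the capital of France?",
)

# Prepare the POST request for the running vLLM server
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same payload works against the SGLang server below; only the port changes.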
- SGLang
How to use explorewithai/ChatFrame-Uncensored-Instruct-Small with SGLang:
Install SGLang from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "explorewithai/ChatFrame-Uncensored-Instruct-Small" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "explorewithai/ChatFrame-Uncensored-Instruct-Small",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "explorewithai/ChatFrame-Uncensored-Instruct-Small" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "explorewithai/ChatFrame-Uncensored-Instruct-Small",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use explorewithai/ChatFrame-Uncensored-Instruct-Small with Docker Model Runner:
```shell
docker model run hf.co/explorewithai/ChatFrame-Uncensored-Instruct-Small
```
ChatFrame V1 is the first uncensored AI model in the English language
Introducing ChatFrame V1, the game-changing AI model that pushes the boundaries of language modeling!

Model Description: ChatFrame V1 is a groundbreaking AI language model, proudly developed by AIFRAME INC. What sets this model apart is its unique training data, consisting of uncensored texts, questions, and answers. This means that ChatFrame V1 is the first of its kind in the English language, capable of providing unfiltered and unrestricted responses.
Key Features:
- Uncensored Content: ChatFrame V1 breaks free from the constraints of traditional language models. It can understand and generate responses to a wide range of topics, including those that are typically considered sensitive or taboo.
- Commercial Use and Fine-Tuning: This model is designed with versatility in mind. Businesses and individuals can utilize ChatFrame V1 for commercial projects and customize it further through fine-tuning, making it adaptable to specific use cases.
- Trained by Experts: The brainchild of Mohammadmoein Pisoude (CEO) and Alex Romniof (Manager), ChatFrame V1 is the result of the dedication and expertise of the entire AIFRAME INC team. Their combined efforts in coding and project management have led to the creation of this innovative model.
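The fine-tuning feature above typically begins with formatting your own question/answer pairs into the chat-message structure that transformers chat templates expect. A minimal sketch of that data-preparation step; the `to_chat_examples` helper and the sample pair are our own illustration, not part of the model card:

```python
def to_chat_examples(qa_pairs):
    """Convert (question, answer) pairs into chat-format training examples."""
    examples = []
    for question, answer in qa_pairs:
        examples.append({
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        })
    return examples

# Hypothetical fine-tuning data
pairs = [
    ("What is the capital of France?", "The capital of France is Paris."),
]
dataset = to_chat_examples(pairs)
print(dataset[0]["messages"][0]["content"])  # What is the capital of France?
```

Each example can then be rendered into a training string with the tokenizer's `apply_chat_template` before being passed to your trainer of choice.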
Target Audience: ChatFrame V1 is ideal for users seeking an AI companion that provides unfiltered and honest interactions. It can assist content creators, developers, and individuals looking to explore language modeling without restrictions.
With its cutting-edge capabilities and uncensored nature, ChatFrame V1 is set to revolutionize the way we interact with AI, offering a fresh and dynamic perspective on language understanding and generation!
Disclaimer: While ChatFrame V1 provides unrestricted responses, users are advised to utilize the model responsibly and ethically, adhering to legal and moral guidelines. AIFRAME INC promotes the responsible use of AI technology and does not endorse any harmful or illegal activities.
Using with pipeline

```python
from transformers import pipeline
import torch

# Determine the device: 0 for GPU, -1 for CPU
device = 0 if torch.cuda.is_available() else -1

# Load the text-generation pipeline, with GPU support if available
pipe = pipeline("text-generation", model="explorewithai/ChatFrame-Uncensored-Instruct-Small", device=device)

# Define the function to generate responses
def generate_response(user_input):
    messages = [
        {"role": "user", "content": user_input},
    ]
    response = pipe(messages)
    # generated_text holds the full conversation; the assistant's reply is the last message
    assistant_response = response[0]["generated_text"][-1]["content"]
    return assistant_response

ai = generate_response(user_input="Hello")
print(ai)
```
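When called with a list of chat messages, the pipeline's `generated_text` field holds the whole conversation, so the assistant's reply is its last entry. A small sketch of that extraction, using a mocked response shaped like the output of `pipe(messages)` so it runs without downloading the model; the mocked content string is illustrative:

```python
def extract_assistant(response):
    """Return the assistant's reply from a chat-style pipeline response."""
    # generated_text is the full message list; the last entry is the new assistant turn
    return response[0]["generated_text"][-1]["content"]

# Mocked pipeline output with the same shape as pipe(messages)
mock_response = [{
    "generated_text": [
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help you?"},
    ]
}]
print(extract_assistant(mock_response))  # Hi! How can I help you?
```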