Count tokens for multiple LLM models
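Exact counts require each model's own tokenizer (e.g. tiktoken for OpenAI models), so a minimal sketch can only estimate. This one uses the common ~4-characters-per-token heuristic, with per-model ratios that are purely illustrative assumptions:

```python
# Hypothetical chars-per-token ratios; real counting needs each
# model's tokenizer. These values are illustrative, not measured.
CHARS_PER_TOKEN = {
    "gpt-4o": 4.0,
    "claude-3-5-sonnet": 3.8,
    "llama-3-70b": 4.2,
}

def estimate_tokens(text: str, model: str) -> int:
    """Approximate the token count via a chars-per-token heuristic."""
    ratio = CHARS_PER_TOKEN.get(model, 4.0)  # fall back to ~4 chars/token
    return max(1, round(len(text) / ratio))

def estimate_for_all(text: str) -> dict[str, int]:
    """Estimate the same text against every configured model."""
    return {model: estimate_tokens(text, model) for model in CHARS_PER_TOKEN}
```

Swapping the heuristic for a real tokenizer only means replacing the body of `estimate_tokens`; the per-model dispatch stays the same.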
Detect prompt injection risks in any text
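A simple pattern-matching pass catches the most common injection phrasings. The pattern list below is a hypothetical starting set, not an exhaustive or robust classifier:

```python
import re

# Hypothetical heuristic patterns; a real detector would need a far
# larger set, or a trained classifier, to resist paraphrasing.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"reveal (the|your) (system prompt|instructions)",
]

_COMPILED = [re.compile(p, re.IGNORECASE) for p in INJECTION_PATTERNS]

def injection_risk(text: str) -> list[str]:
    """Return the patterns that matched; an empty list means no hit."""
    return [rx.pattern for rx in _COMPILED if rx.search(text)]
```

Returning the matched patterns rather than a bare boolean makes it easy to log why a given input was flagged.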
Calculate daily LLM costs across many models
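Cost tracking reduces to a per-model price table applied to token counts. The prices below are placeholder numbers (USD per million tokens), not real provider rates, so check each provider's price sheet before using them:

```python
from collections import defaultdict

# Placeholder prices in USD per 1M tokens; NOT real provider rates.
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
}

def daily_cost(calls: list[dict]) -> dict[str, float]:
    """Sum one day's cost per model.

    Each call is a dict with "model", "input_tokens", "output_tokens".
    """
    totals: dict[str, float] = defaultdict(float)
    for call in calls:
        price = PRICES[call["model"]]
        totals[call["model"]] += (
            call["input_tokens"] / 1_000_000 * price["input"]
            + call["output_tokens"] / 1_000_000 * price["output"]
        )
    return dict(totals)
```

Feeding it a day's call log yields a per-model breakdown that can be summed for the daily total.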