What can Clavata do for you?
Clavata is built to help you understand, organize, and act on the content flowing through your platform, whether it’s AI-generated or user-generated.
Moderate Harmful or Sensitive Content
Clavata is often used to detect and flag content that violates platform policies or poses harm to users. Common categories include:
- Harmful or illegal content, such as CSAM (Child Sexual Abuse Material), NCII (Non-Consensual Intimate Images), bestiality, incest, or rape
- Community guideline violations, such as nudity, violence, or hate speech
These categories can be adapted to fit your platform’s unique trust and safety needs. You define what “harmful” or “violative” means in your context, and Clavata detects it accordingly.
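To make this concrete, here is a minimal sketch of what wiring moderation into a content pipeline could look like. Everything in it is an assumption for illustration: the endpoint URL, request fields, and response shape are hypothetical placeholders, not Clavata’s actual API.

```python
# Hypothetical sketch only: the endpoint, payload fields, and response shape
# below are illustrative placeholders, not Clavata's documented API.
import requests

API_URL = "https://api.example.com/v1/evaluate"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential

def evaluate_content(text: str, policy_id: str) -> list[str]:
    """Return the policy labels that matched the given piece of content."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": text, "policy_id": policy_id},
        timeout=10,
    )
    response.raise_for_status()
    # Assume the service responds with {"matches": [{"label": ...}, ...]}.
    return [match["label"] for match in response.json().get("matches", [])]

if __name__ == "__main__":
    flagged = evaluate_content("example user post", policy_id="trust-and-safety-v1")
    if flagged:
        print("Flagged:", flagged)  # e.g., hide the post and queue it for review
    else:
        print("No policy violations detected.")
```

The key design point is that the policy, not the integration code, decides what counts as violative: the same call can enforce very different definitions of “harmful” depending on the policy you reference.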
Label Content or Build Taxonomies
In addition to detecting harmful content, Clavata can apply non-violative, descriptive labels to help structure your content, build robust taxonomies, and improve recommendation, search, or tagging workflows. Some examples include:
- Demographic or identity-based labels, e.g., Straight, LGBTQ+, Fictional, Human, Mixed
- Content maturity ratings, e.g., PG, PG-13, R, X
- Language labels, e.g., English, Spanish, Formal, Informal
- Fine-grained features in text or images, e.g., Cat, Dog, Eyeliner, Blue eyes, Brown eyes
Clavata supports full customization of your taxonomy. You can define custom labels and detection rules, build a hierarchy of labels, make policies region-specific to accommodate different cultural contexts around the world, and detect nuanced differences, such as distinguishing between eye colors, languages, or tones.
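As a rough illustration of what a hierarchical, region-aware taxonomy can look like in application code, here is a short sketch in plain Python. The Label class, label names, and region logic are all hypothetical, invented for this example; in practice you would define labels and policies within Clavata itself.

```python
# Hypothetical sketch: modeling a hierarchical, region-aware label taxonomy.
# The class, label names, and region logic are illustrative, not Clavata's.
from dataclasses import dataclass, field

@dataclass
class Label:
    name: str
    children: list["Label"] = field(default_factory=list)
    regions: set[str] | None = None  # None means the label applies globally

taxonomy = Label("content", children=[
    Label("maturity", children=[Label("PG"), Label("PG-13"), Label("R"), Label("X")]),
    Label("language", children=[Label("English"), Label("Spanish"),
                                Label("Formal"), Label("Informal")]),
    # Region-specific label, e.g., enforced only for EU traffic.
    Label("regional-restriction", regions={"EU"}),
])

def labels_for_region(node: Label, region: str) -> list[str]:
    """Flatten the taxonomy into the label names active in a given region."""
    if node.regions is not None and region not in node.regions:
        return []  # prune labels that do not apply in this region
    names = [node.name]
    for child in node.children:
        names.extend(labels_for_region(child, region))
    return names

print(labels_for_region(taxonomy, "US"))  # excludes the EU-only label
print(labels_for_region(taxonomy, "EU"))  # includes it
```

Structuring labels as a tree makes it easy to roll fine-grained matches (e.g., Blue eyes) up into broader parent labels, while gating labels by region lets one taxonomy serve different cultural contexts.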
Our customers have used labeling to train their own AI models and to develop a semantic understanding of what’s being created and shared on their platforms.
Can you think of more applications for Clavata? They’re totally possible! Share your ideas with us so we can build on them.