Content Moderation Policy
Last updated: January 6, 2026.
Warm Start Labs LLC d/b/a Heartthrob ("we," "us," or "our") is committed to fostering a respectful, safe, and creative environment for all users of heartthrob.ai (the "Website") and the services offered through it (the "Services"). This Content Moderation Policy describes how we monitor, restrict, and respond to content that violates our community standards or legal obligations.
All interactions on the platform involve AI-generated characters and media. While we support a wide range of creative expression, we do not tolerate misuse of the Services to create or share harmful, illegal, or exploitative content.
Scope of Moderation
This policy applies to:
- AI-generated messages, images, videos, and other outputs
- Prompts submitted by users
- Metadata (for example: character names and descriptions)
- Usernames and profile content
- User behavior conducted through the platform
Prohibited Content
You may not use the Services to generate, promote, or simulate:
- Real-world harm or violence, including sexual violence
- Human trafficking or exploitation
- Content involving minors in any sexual or romantic context
- Non-consensual activity or coercion
- Harassment, threats, or targeted abuse of real individuals
- Hate speech or extremist content
- Self-harm or suicide promotion
- Fraud, scams, or illegal financial activity
- Attempts to bypass safety systems or filters (for example: jailbreaking prompts)
Content that violates these rules may be removed without notice, and accounts may be suspended or terminated.
Pre-Screening and Post-Screening
We use a dual-layer approach to content moderation:
- Pre-screening: We use automated safety systems to help detect and block some prohibited or high-risk content before it is displayed. We may apply filters that restrict prompts or outputs strongly associated with violations.
- Post-screening: We may review content after it is generated or displayed, including content that is reported by users or flagged by automated systems. We may remove or restrict content that violates this policy, our Terms of Service, or applicable law, even after it has been made available.
We continuously update our safeguards based on new risk patterns, user reports, and observed misuse.
Reporting Content
If you encounter content that may violate this policy, please report it using our contact form.
To help us investigate quickly, please include:
- URL of the relevant content (or specific location/identifier on our site)
- Date and time you observed the content
- A detailed description of what you observed and why it violates this policy
- Any additional context that may help us review the report
Reports are reviewed within 7 business days.
Account Actions
Depending on the severity and frequency of violations, we may:
- Remove content or characters
- Issue warnings or require edits to prompts
- Temporarily suspend account access
- Permanently ban an account
- Escalate to law enforcement where required
Appeals
If you believe a moderation decision was made in error, submit an appeal through our contact form. Please include any relevant context and reference the original report if available.
Appeals are reviewed within 7 business days.