Responsible AI at Analyticity.
We build generative products — image, video, voice, music — and a unit-aware reasoning engine. That makes how we handle provenance, consent, training data, and abuse a product question, not a disclosure footnote. This page is what we commit to.
Six commitments we're willing to be held to.
Responsible AI is operationalized, or it isn't real. Each principle below is paired with a concrete practice in the policy section that follows.
Provenance is built in, not bolted on
Every image, video, voice clip, and music track generated by Skrrol is signed at creation with content credentials (C2PA) carrying the model, version, and timestamp. Users can export with a visible watermark or the cryptographic credential alone — the credential is always present and independently verifiable.
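As an illustration of what such a credential carries, here is a minimal Python sketch of a provenance record (model, version, timestamp, asset hash) with a matching verification step. The field names and the HMAC stand-in for the signature are assumptions made for the sketch; real C2PA credentials use the C2PA manifest format with certificate-based signatures, and this is not Analyticity's signing code.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Stand-in signing key for the sketch; real C2PA credentials are signed
# with certificates, not a shared secret.
SIGNING_KEY = b"example-signing-key"

def build_credential(asset_bytes: bytes, model: str, version: str) -> dict:
    """Build a provenance record carrying model, version, and timestamp."""
    claim = {
        "model": model,
        "model_version": version,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(asset_bytes: bytes, claim: dict) -> bool:
    """Re-derive the asset hash and signature to check the claim independently."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    if body.get("asset_sha256") != hashlib.sha256(asset_bytes).hexdigest():
        return False
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])
```

One design note on the stand-in: HMAC verification requires the secret key, whereas the point of a public credential is that anyone can verify it without one, which is why the real scheme relies on certificate-based signatures.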
Voice and likeness require consent
Cloning a real voice or likeness in Skrrol requires verifiable consent from the person whose voice or likeness is used. Self-cloning requires a live verification step. We do not allow non-consensual voice cloning or likeness generation of identifiable people, and we monitor for attempts to circumvent this control.
Training data is documented and licensed
For each model we ship, we publish a model card covering what the model does, the data it was trained on, how that data was sourced, and known limitations. Commercially sensitive licensing terms are summarized truthfully where we are not free to disclose specifics.
Evaluation is versioned and reproducible
We maintain a versioned evaluation harness for every generative capability — image fidelity, video consistency, voice naturalness, music quality, and safety classifiers. Results are published alongside model cards so claims can be checked rather than trusted.
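To make "versioned and reproducible" concrete, here is a hypothetical sketch of the fields a single evaluation record might need to pin down. The names are illustrative only, not our published schema.

```python
from dataclasses import dataclass

# Hypothetical record shape: these fields illustrate what a reproducible
# eval result has to pin down; they are not Analyticity's published schema.
@dataclass(frozen=True)
class EvalResult:
    capability: str       # e.g. "image_fidelity" or "voice_naturalness"
    model: str            # model identifier as it appears on the model card
    model_version: str    # exact version the numbers were measured against
    harness_version: str  # pinning the harness makes reruns comparable
    dataset_hash: str     # identifies the exact evaluation set used
    seed: int             # fixed seed so the run can be repeated
    metric: str           # metric name reported on the model card
    value: float          # the published number
```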
Failure modes are named
We publish known failure modes for each model — categories of inputs where quality degrades, safety classifiers fall back, or outputs should not be used. This is both a disclosure practice and a production-engineering one: named failure modes are easier to route around.
Responses, not just promises
We respond to takedown, consent-revocation, and abuse-report requests. Contact paths and response-time commitments are published below.
Prohibited content
Skrrol and Unitly may not be used to generate, distribute, or facilitate the following. Violations result in account action up to and including permanent ban, and may be reported to authorities where required by law.
- Sexual content involving minors, or any output designed to sexualize minors.
- Non-consensual intimate imagery of real people, whether generated, altered, or composited.
- Deepfakes of identifiable public figures or private individuals intended to deceive.
- Content designed to harass, threaten, stalk, or incite violence against individuals or protected groups.
- Fraudulent impersonation — voice, video, or written — intended to deceive for financial, political, or reputational harm.
- Content promoting self-harm, suicide, or dangerous behaviors in ways likely to cause real-world harm.
- Material that violates applicable export controls, sanctions, or platform rules for the distribution surface.
How we enforce it
Enforcement is a layered system, not a single classifier; a rough sketch of how the layers compose follows the list below.
- Pre-generation filters — prompts, reference images, and reference audio are screened for prohibited use before we generate anything.
- In-flight safety classifiers — generated outputs are checked before delivery; high-risk outputs are blocked or routed to review.
- Provenance signatures — every asset carries a C2PA credential, which makes downstream verification possible.
- Rate limits and anomaly detection — we detect and throttle patterns consistent with abuse, scraping, or policy evasion.
- Human review — reports, escalations, and borderline cases are reviewed by a human before enforcement action.
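The sketch below shows, under stated assumptions, how those layers might compose around a single generation request. Every function in it (screen_prompt, flag_anomaly, generate, classify_output, attach_credential) is a hypothetical stub rather than an internal Analyticity API; the point is the ordering — filter before generating, classify before delivering, sign before returning, and route anything uncertain to a human.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    ALLOW = auto()
    BLOCK = auto()
    REVIEW = auto()  # routed to a human before any enforcement action


@dataclass
class Request:
    prompt: str
    user_id: str


def screen_prompt(prompt: str) -> Decision:
    # Placeholder: a real filter checks prompts and reference media
    # against prohibited-use classifiers before anything is generated.
    return Decision.ALLOW


def flag_anomaly(user_id: str) -> bool:
    # Placeholder for rate limiting and abuse-pattern detection.
    return False


def generate(prompt: str) -> bytes:
    # Placeholder for the actual generation call.
    return b"generated asset"


def classify_output(output: bytes) -> Decision:
    # Placeholder: high-risk outputs are blocked or routed to review.
    return Decision.ALLOW


def attach_credential(output: bytes) -> bytes:
    # Placeholder: the provenance credential is attached before delivery.
    return output


def handle_generation(req: Request) -> Decision:
    # Layer 1: pre-generation filter on the prompt and reference media.
    if screen_prompt(req.prompt) is Decision.BLOCK:
        return Decision.BLOCK

    # Layer 4: rate limits and anomaly detection run per account.
    if flag_anomaly(req.user_id):
        return Decision.REVIEW

    output = generate(req.prompt)

    # Layer 2: in-flight safety classifier on the generated output.
    verdict = classify_output(output)
    if verdict is not Decision.ALLOW:
        return verdict

    # Layer 3: provenance credential attached before delivery.
    attach_credential(output)
    return Decision.ALLOW
```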
Consent, identity, and likeness
Voice cloning and likeness generation are high-risk capabilities. Our controls reflect that.
- Self-cloning of your own voice requires a live phrase-challenge recording, cryptographically bound to your account.
- Cloning of a third-party voice requires uploaded consent documentation from the voice owner and an identity check, reviewed before the clone is enabled.
- We maintain a signed-consent ledger. Clones can be revoked by the voice owner at any time, which invalidates the ability to generate new outputs with that voice (a sketch of this check follows the list).
- Known public figures — living or recently deceased — are voice- and likeness-restricted by default.
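A minimal sketch of that ledger check, under stated assumptions: the record fields, the identifiers, and the HMAC stand-in for the ledger signature are hypothetical, not the production schema. What it shows is the invariant — a clone generates new outputs only while a valid, unrevoked consent record exists.

```python
import hashlib
import hmac
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Stand-in ledger key for the sketch; the real signature scheme differs.
LEDGER_KEY = b"example-ledger-key"

@dataclass
class ConsentRecord:
    voice_owner_id: str        # the person whose voice is cloned
    account_id: str            # the account the clone is bound to
    granted_at: datetime
    revoked_at: Optional[datetime] = None
    signature: str = ""

    def sign(self) -> None:
        payload = f"{self.voice_owner_id}:{self.account_id}:{self.granted_at.isoformat()}".encode()
        self.signature = hmac.new(LEDGER_KEY, payload, hashlib.sha256).hexdigest()

def can_generate(record: ConsentRecord) -> bool:
    """New outputs are allowed only while a valid, unrevoked consent record exists."""
    payload = f"{record.voice_owner_id}:{record.account_id}:{record.granted_at.isoformat()}".encode()
    expected = hmac.new(LEDGER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.signature) and record.revoked_at is None

record = ConsentRecord("owner-123", "acct-456", datetime.now(timezone.utc))
record.sign()
assert can_generate(record)

record.revoked_at = datetime.now(timezone.utc)  # revocation stops new generation
assert not can_generate(record)
```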
Training data and licensing
We publish a model card for each production model we ship. Each card covers data sources, license posture, filtering, evaluation results, and known limitations.
- Training corpora come from licensed datasets, partner-provided content, and content we lawfully collected and filtered under applicable exceptions.
- We honor opt-out signals (robots.txt, ai.txt, and industry-standard headers) at the time of collection; see the sketch after this list.
- Creators can request that their content be excluded from future training runs by contacting us.
- We do not train on Skrrol user-generated content unless the user has explicitly opted in, per our terms.
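As one concrete example of honoring an opt-out signal at collection time, here is a minimal robots.txt check using Python's standard library. The user-agent string and the skip-on-failure default are assumptions for the sketch, not a description of our actual collection pipeline; ai.txt and header-based signals would need analogous checks.

```python
from urllib import robotparser

# Hypothetical user-agent for the sketch, not our crawler's real identifier.
CRAWLER_UA = "ExampleTrainingCrawler"

def may_collect(url: str, robots_url: str) -> bool:
    """Return True only if robots.txt permits fetching this URL for our agent."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()
    except OSError:
        return False  # conservative default: skip if the policy can't be read
    return parser.can_fetch(CRAWLER_UA, url)

if __name__ == "__main__":
    print(may_collect("https://example.com/page", "https://example.com/robots.txt"))
```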
Data handling and retention
What we keep, why, and for how long.
- Prompts and generated outputs are retained for a limited period for safety review, debugging, and product improvement, subject to user-level opt-outs.
- Voice reference samples used for cloning are retained only as long as the clone is active, then cryptographically erased.
- Account data is handled according to our Privacy Policy and applicable regulation (GDPR, CCPA).
- Enterprise customers can request contractual data-handling terms, including zero-retention modes where technically feasible.
Reporting and takedown
If you believe content generated by Skrrol violates this policy — or your rights — contact us and we will respond.
- Report abuse or policy violations: analyticitytech@gmail.com with subject line "Abuse report".
- Takedown or consent-revocation requests: analyticitytech@gmail.com with subject line "Takedown". Include the content URL or asset ID if possible.
- Acknowledgement within 2 business days. Action within 7 business days for standard reports; urgent reports (imminent harm) are prioritized.
- False or bad-faith reports are logged and may result in account action.
Common questions about how Skrrol and Unitly operate.
If you don't see your question here, reach out — we update this page whenever a pattern emerges.
Does Analyticity embed content credentials in AI-generated media?
Yes. Every image, video, voice clip, and music track generated by Skrrol is signed at creation with C2PA content credentials carrying the model, model version, and timestamp. Users can export with a visible watermark or with the cryptographic credential alone — the credential is always present and independently verifiable.
How does Analyticity handle voice cloning consent?
Cloning a real voice in Skrrol requires verified consent. Self-cloning requires a live phrase-challenge recording cryptographically bound to your account. Third-party voice cloning requires uploaded consent documentation from the voice owner and an identity check, reviewed before the clone is enabled. Consent can be revoked at any time, which invalidates future generation with that voice.
What content is prohibited on Skrrol?
Prohibited categories include sexual content involving minors, non-consensual intimate imagery of real people, deepfakes of identifiable individuals intended to deceive, targeted harassment, fraudulent impersonation, content promoting self-harm, and material violating export controls or platform rules. Enforcement is layered: pre-generation filters, in-flight safety classifiers, provenance signatures, rate limits, anomaly detection, and human review.
Does Analyticity train models on user content?
No. Analyticity does not train on Skrrol user-generated content unless the user has explicitly opted in. Prompts and generated outputs are retained for a limited period for safety review and debugging, subject to user-level opt-outs. Enterprise customers can request contractual data-handling terms including zero-retention modes where technically feasible.
How can I report abuse or request takedown of generated content?
Email analyticitytech@gmail.com with subject line "Abuse report" to report policy violations, or "Takedown" to request removal of content. Include the content URL or asset ID where possible. Analyticity acknowledges reports within 2 business days and takes action within 7 business days for standard reports; reports involving imminent harm are prioritized.
Does Analyticity publish model cards and evaluations?
Yes. Analyticity publishes a model card for every production model covering capabilities, training data categories, evaluation results, and named failure modes. A versioned evaluation harness underpins image fidelity, video consistency, voice naturalness, music quality, and safety classifier measurements. Model cards and evaluations are linked from the Research page at analyticitytech.com/research.
Something we got wrong?
We take reports seriously. Reach out directly and we'll respond within two business days.
For product security disclosures, see our Security & Trust page.