Safety and abuse prevention
How DECYD handles prohibited use, serious safety categories, appeals, and product safeguards without exposing sensitive moderation details.
Why safety matters here
A platform that brokers multiple AI systems needs clear safety boundaries. Abuse handling is part of product quality, not a separate legal afterthought.
That is why DECYD maintains an acceptable-use policy, moderation workflows, and founder/admin review tools for the highest-risk cases.
How enforcement works
Not every violation is treated equally. Some categories can result in warnings or temporary suspensions, while others can trigger immediate escalation.
For the most serious classes of abuse, DECYD follows a documented review and reporting path rather than an improvised response.
Appeals and reporting
If you believe an enforcement decision was wrong, DECYD supports appeals. If you need to report abusive use of the platform, the public policy pages list contact points for that as well.
Public trust depends on clear reporting paths and fair review, not opaque moderation decisions.