    Comitium AI Responsible AI Use, Transparency & Human Oversight Policy

    Last Updated: January 1, 2025

    Comitium AI builds AI systems designed to augment human judgment, improve policy intelligence, and support responsible engagement across government, business, and civil society.

    This policy explains how we expect our AI to be used, our transparency commitments, and the role of human oversight in all applications of our technology.

    By using Comitium AI’s products, APIs, and services (“Services”), you agree to follow this policy.

    ________________________________

    1. Guiding Principles

    Comitium AI’s approach to responsible AI is grounded in five principles:

    Human-centered design — AI augments, not replaces, human judgment.

    Transparency — Users should understand capabilities, limitations, and appropriate use.

    Safety & integrity — We work to prevent harmful, abusive, or high-risk misuse.

    Accountability — Users and organizations remain responsible for decisions made using AI.

    Inclusivity & fairness — Policy intelligence should consider multiple perspectives and reduce, not amplify, inequities.

    ________________________________

    2. Required Human Oversight

    AI-generated insights must be reviewed by qualified humans, especially in domains involving:

    public policy

    regulation and compliance

    financial decisions

    legal strategy

    government affairs

    public health, safety, or well-being

    critical communications

    stakeholder engagement

    environmental or climate-related assessments

    Users must:

    validate the accuracy of AI-generated content,

    ensure context-appropriate interpretation,

    apply professional or organizational judgment before taking action,

    seek subject-matter expert review where high stakes are involved.

    Comitium AI systems must not be used for autonomous decision-making in sensitive or regulatory domains.

    ________________________________

    3. Transparency Requirements (for All Users)

    When using Comitium AI systems:

    3.1 Disclosure in External Communication

    If AI-generated content is shared externally (with clients, regulators, policymakers, media, or the public), you must:

    disclose that AI assistance was used, and

    ensure the content has undergone human review.

    3.2 No Misrepresentation

    Users may not:

    present AI-generated material as authoritative analysis without qualification,

    imply that Comitium AI endorses a position,

    misrepresent Outputs as official policy, legal advice, or professional judgment.

    3.3 Explainability

    Users should be prepared to explain:

    how AI was used,

    how final decisions were made,

    how human oversight and validation were applied.

    ________________________________

    4. Responsible Use Requirements

    Users must follow these standards when using Comitium AI’s Services.

    4.1 Allowed Use

    Comitium AI supports uses that:

    elevate policy analysis and foresight,

    improve institutional effectiveness,

    strengthen stakeholder inclusion,

    support evidence-based public affairs work,

    accelerate responsible innovation,

    enhance transparency and accountability in governance.

    4.2 Prohibited Use

    Users may not use the Services for:

    a. Illegal or harmful activity

    violating laws or regulations

    fraud, deception, or illicit surveillance

    exploitation or harassment

    generating defamatory or malicious content

    b. High-risk, unsupported, or sensitive decisions

    medical or clinical decision-making

    legal advice without attorney oversight

    financial advice without licensed review

    safety-critical or autonomous decision systems

    law enforcement or intelligence surveillance

    weapons-related design or optimization

    c. Political manipulation or disinformation

    targeted political persuasion

    influence operations

    covert lobbying communications

    deceptive engagement with policymakers

    manipulating public opinion or elections

    Comitium AI may allow transparent, non-deceptive political analysis or scenario exploration when explicitly permitted through enterprise agreements.

    d. Misuse of Outputs

    using Outputs to train competing AI systems

    attempting to reverse-engineer or extract model parameters

    generating synthetic identities or impersonation

    creating misleading datasets or forged documents

    ________________________________

    5. Safety, Monitoring & Abuse Prevention

    Comitium AI implements safeguards to:

    monitor for harmful or abusive use,

    detect anomalous patterns,

    protect system integrity,

    maintain user and societal safety.

    We may suspend or terminate accounts that violate this policy or present safety risks.

    ________________________________

    6. Accuracy, Limitations & Appropriate Use

    AI systems have inherent limitations.
    Outputs may be:

    incomplete or outdated

    inaccurate or inconsistent

    sensitive to phrasing

    limited by underlying data

    insufficient for high-stakes decisions

    Users are responsible for:

    verifying facts and citations,

    evaluating assumptions and implications,

    ensuring Outputs reflect up-to-date information,

    avoiding over-reliance on model-generated reasoning.

    ________________________________

    7. Data Use, Privacy & Security

    Comitium AI’s use of personal data is governed by our Privacy Policy.
    We do not use User Content to train models without explicit permission.

    Users remain responsible for:

    ensuring lawful handling of data,

    protecting confidential or regulated content,

    complying with organizational or jurisdictional requirements.

    ________________________________

    8. Enterprise-Specific Obligations

    Organizations using Comitium AI must:

    train their teams on appropriate AI use,

    implement internal oversight or review processes,

    maintain safeguards against misinformation and disinformation,

    ensure compliance with sector-specific regulations,

    designate responsible personnel (“AI stewards”) when required.

    ________________________________

    9. Reporting Misuse or Safety Concerns

    Users should report suspected misuse, vulnerabilities, or safety issues to:

    security@comitiumai.com

    safety@comitiumai.com

    We will review all reports promptly.

    ________________________________

    10. Changes to This Policy

    We may update this policy to reflect:

    regulatory developments,

    safety advancements,

    new product capabilities,

    evolving best practices.

    Material updates will be communicated through our website.

    ________________________________

    11. Contact

    Comitium AI, Inc.
    Website: https://www.comitiumai.com
    Email: policy@comitiumai.com
