AI Policy

Last updated: May 7, 2026

Policy Purpose

This AI Policy explains how Gronic operates AI-assisted features with transparency, safety, and user protection in mind.

Transparency and User Notice

When our products use AI features, we aim to provide clear user-facing notice.

  • AI-assisted outputs are identified in product UI where reasonably possible
  • Generated or transformed outputs may include explicit AI-generated notices
  • For exported or generated media content, labeling methods may include visible tags or embedded technical identifiers

Human Oversight

AI features are designed to support user workflows, not to replace user judgment.

  • Final decisions remain with the user or operator
  • Users should review outputs for factual and contextual accuracy
  • For sensitive contexts, additional human review is strongly recommended

Safety and Abuse Prevention

We maintain safeguards to reduce harmful, deceptive, or rights-infringing uses of AI.

  • No intentional support for illegal exploitation, fraud, or impersonation
  • No intentional support for deceptive deepfake misuse
  • We may limit or suspend accounts that attempt prohibited AI use

Regulatory Alignment

We track Korean AI regulatory developments, including trust and transparency obligations, and update our implementation as needed.

  • Policy and product controls may be revised based on evolving guidance
  • Where legally required, additional notices, logs, or assessment procedures may be introduced

Contact

For questions about our AI operations and safeguards, contact: contact@gronic.io