STARTO, a Seoul‑based startup accelerator, has announced a strict policy that prohibits its portfolio companies from using personal likenesses—photos, videos, or voice recordings of identifiable individuals—as training data for generative‑AI models. The ban aims to protect copyright and privacy rights while preparing startups for emerging AI regulations.
Reasons Behind the Policy
STARTO’s decision is driven by growing concerns over copyright infringement and privacy violations when personal likenesses are used without explicit permission. By barring the use of unlicensed likeness data, the accelerator seeks to reduce its portfolio companies’ legal exposure and position them to meet upcoming regulatory requirements.
Alignment with South Korean AI Legislation
South Korea’s new AI framework mandates consent for any personal data used in model training and imposes penalties for non‑compliance. STARTO’s ban positions its cohort to meet these obligations from day one.
Impact on Portfolio Companies
Startups that rely on large, uncurated datasets must now adjust their data pipelines. The policy pushes them toward licensed likenesses, or toward synthetic data generation techniques that do not involve identifiable individuals.
- Increased Development Costs: Acquiring licensed data or building synthetic alternatives may raise expenses.
- Longer Time‑to‑Market: Additional compliance steps can extend product development cycles.
- Strategic Advantage: Companies that respect creator rights may gain trust and avoid costly litigation.
Areas Most Affected
Ventures focused on avatar creation, deepfake mitigation, and personalized voice assistants are likely to feel the greatest impact, since these applications depend heavily on training data drawn from real human faces and voices.
Broader Industry Implications
The ban sets a precedent for other accelerators and venture funds, potentially prompting a wave of similar safeguards across the AI investment community. As more investors demand proper licensing, the pressure on AI firms to secure consent for personal data will intensify.
Potential Benefits for Creators
By limiting the ingestion of personal likenesses without consent, STARTO supports a model in which creators retain control over how their image and voice are used, and in which the economic benefits of AI-generated content are shared more fairly with the people whose likenesses make it possible.
Future Outlook and Compliance Support
STARTO has emphasized a commitment to responsible AI development and will monitor evolving legal standards. While specific guidance tools have not been disclosed, the accelerator’s stance signals a proactive approach to compliance and risk mitigation.
