In March 2026 the Turkish data protection authority (the KVKK Board) released its long-awaited guidance on AI applications. The document is broadly aligned with the EU AI Act’s risk-based approach but adds its own emphases on Türkiye-specific topics: the interpretation of explicit consent, cross-border data transfers and sectoral carve-outs. Below is a pragmatic view of the five themes that should shape your AI projects through the second half of 2026.
1) Risk classification
The EU AI Act crystallised the four-tier classification (prohibited, high, limited, minimal risk). The KVKK guidance refers to the same categories and explicitly flags hiring, credit scoring, healthcare triage and biometric identification in public spaces as “high risk”. For these, a data protection impact assessment (DPIA) and an audit trail are now mandatory.
2) Explicit consent and the LLM context
Sending customer data to an LLM provider was a grey area in 2025; in 2026 it is clear: if personal data leaves the controller for an LLM provider, explicit consent is required. Furthermore, consent cannot be granted “for general AI use”; the specific purpose (“to summarise your support request”) must be stated. Operating models in-house or in VPC isolation reduces this burden materially.
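Where data must still leave the controller, a redaction layer in front of the provider call reduces what is actually disclosed. A minimal sketch, assuming regex-based detection of a few obvious identifier types (the patterns and the `redact` helper are illustrative, not a complete PII solution):

```python
import re

# Illustrative patterns only -- a production redaction layer needs far more
# coverage (names, addresses, free-text identifiers, context-aware detection).
PATTERNS = {
    "TCKN": re.compile(r"\b\d{11}\b"),                 # Turkish national ID: 11 digits
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b0?5\d{2}[\s-]?\d{3}[\s-]?\d{2}[\s-]?\d{2}\b"),
}

def redact(text: str) -> str:
    """Replace recognised identifiers with typed placeholders before the
    prompt is sent to an external LLM provider."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The same helper can be applied to model outputs before they are shown or stored, which also covers the checklist item on output-side PII redaction.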
3) Automated decision-making and the right to object
KVKK Article 11 already gave individuals the right to object to fully-automated decisions adversely affecting them; the new guidance gives that right operational form. If a model produces an automated decision, the reasoning must be reasonably explainable, human oversight must be demonstrable, and the objection workflow must be visible to the user.
4) Training data and the lawful source
You must be able to document the lawful source of every dataset used to train a model; the supervisor expects a breakdown into open data, licensed data, contractual data and consent-based data. Convergence with EU positions on copyright-bearing text and images is a strong trend in 2026, and provenance of generative AI outputs will become a standard expectation.
5) A practical compliance checklist
- DPIA completed? (mandatory for high-risk projects)
- Is the data inventory clear about what the model is trained on and what it queries?
- Are consent texts purpose-specific rather than a vague “for AI”?
- Is the DPA + sub-processor list + jurisdiction documented with the LLM provider?
- Is there a human-in-the-loop checkpoint in any automated decision flow?
- Is there an audit log of who sent what data to which model and when?
- Is there a working PII-redaction layer on outputs?
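The audit-log item above can start as something very simple, such as an append-only JSON-lines file. A minimal sketch, assuming hypothetical field names and a local log path; storing a hash of the prompt rather than the prompt itself keeps the log from becoming a second copy of the personal data:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("llm_audit.jsonl")  # append-only; use tamper-evident storage in production

def log_llm_call(user: str, model: str, purpose: str, prompt_sha256: str) -> None:
    """Record who sent what data to which model and when."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "purpose": purpose,          # should match the purpose stated in the consent text
        "prompt_sha256": prompt_sha256,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```

Tying the `purpose` field to the wording of the consent text is what connects the audit log back to the purpose-specific consent requirement in point 2.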
