Legal Liability of AI and Automation Systems in Turkey

Turkey does not yet have a dedicated AI liability statute — and the absence of specific legislation does not mean the absence of legal exposure. Companies that develop, deploy, or operate artificial intelligence and automation systems in Turkey are subject to a fragmented but comprehensive liability framework assembled from the Turkish Code of Obligations (Türk Borçlar Kanunu, TBK, Law No. 6098) tort and contract provisions, the Product Safety and Technical Regulations Law (Ürün Güvenliği ve Teknik Düzenlemeler Kanunu, Law No. 7223) and its implementing product liability regulation, the Personal Data Protection Law (Kişisel Verilerin Korunması Kanunu, KVKK, Law No. 6698), the Consumer Protection Law (Tüketicinin Korunması Hakkında Kanun, TKHK, Law No. 6502), the Electronic Commerce Law (Law No. 6563), the Turkish Penal Code's cybercrime provisions (TCK Articles 243–245), and an expanding body of sector-specific regulatory requirements from the Banking Regulation and Supervision Agency (BDDK), the Capital Markets Board (SPK), the Information and Communication Technologies Authority (BTK), and the Ministry of Health. Understanding which framework applies to a specific AI system failure — and who within the development, deployment, and operation chain bears responsibility under that framework — is the foundational legal question for any technology company with an AI footprint in Turkey. This guide explains how Turkish law currently distributes AI liability across these frameworks and what legal structures effectively manage that exposure. Practice may vary by authority and year — verify current regulatory requirements and enforcement standards directly before designing any AI liability management structure in Turkey.

The Turkish Code of Obligations — tort and fault-based liability for AI

A lawyer in Turkey advising on the primary tort liability framework for AI systems must explain that TBK Article 49 establishes the general fault-based tort liability principle — a person who unlawfully causes damage to another through fault (kusur) is liable for compensation. When an AI system causes harm, the tort claim under TBK Article 49 requires the claimant to establish: that damage occurred; that the defendant's conduct (or the conduct of persons or systems for which they are responsible) was the cause; and that the defendant acted unlawfully and with fault (negligence or intent). The practical challenge in AI liability claims under TBK Article 49 is establishing both causation and fault — when an AI system makes a decision that causes harm, demonstrating that the developer, operator, or deployer acted negligently in the system's design, training, deployment, or monitoring requires expert evidence that Turkish courts are only beginning to develop frameworks for evaluating. Practice may vary by authority and year — verify current Turkish court expert evidence standards for AI causation analysis before designing any AI litigation or defense strategy.

An Istanbul Law Firm advising on strict liability provisions that may apply to AI systems must explain that TBK Article 71 (hazardous activity liability — tehlike sorumluluğu) provides a strict liability basis that does not require proof of fault — it applies where an activity creates an unusual danger to persons or property beyond ordinary life risk, and the person who operates the dangerous activity is liable for harm it causes regardless of negligence. The question of whether specific AI deployments — particularly autonomous vehicles, industrial robots, medical diagnostic systems, and financial trading algorithms — constitute "hazardous activities" under TBK Article 71 has not been definitively resolved by Turkish courts. Legal commentators have argued that AI systems with significant autonomous decision-making capacity in physical or high-stakes financial environments should be classified as hazardous activities, which would impose strict liability on the operator without requiring proof of fault. We advise clients deploying AI in high-risk environments to structure their liability exposure assuming TBK Article 71 could apply — because the consequence of an unexpected strict liability finding is significantly more serious than having structured liability management that proves unnecessary. Practice may vary — verify current Turkish legal commentary on TBK Article 71 applicability to specific AI deployment contexts before finalizing any liability allocation structure for autonomous or high-stakes AI systems.

A law firm in Istanbul advising on employer liability for AI system failures must explain that TBK Article 66 (employer liability for employees — adam çalıştıranın sorumluluğu) applies where the AI system failure involves a human actor within the defendant's organizational structure who bears some responsibility for the failure — a developer who made a coding error, a deployment engineer who misconfigured the system, or a monitoring analyst who failed to respond to a flagged anomaly. TBK Article 66 imposes strict liability on the employer for damage caused by their employees in the performance of their duties unless the employer can demonstrate they took all appropriate precautions to prevent the damage. In AI contexts, the "appropriate precautions" standard requires establishing what a reasonably prudent organization deploying that category of AI system should have done — in terms of testing, validation, monitoring, and incident response — and demonstrating that those precautions were taken. We advise clients to document their AI development and deployment quality management processes specifically with TBK Article 66 liability defense in mind — because the employer who cannot demonstrate a systematic precaution program faces strict liability. Practice may vary — verify current Turkish court standards for what constitutes adequate precautions for the specific AI system type and deployment context before designing any quality management program with TBK Article 66 defense implications. The general product liability framework for technology systems is analyzed in the resource on legal protection of SaaS in Turkey.

Product liability — AI as a defective product

An English speaking lawyer in Turkey advising on product liability exposure for AI systems must explain that Turkey's product liability framework is established by the Product Safety and Technical Regulations Law (Law No. 7223) and the implementing Product Liability Regulation, which impose liability on producers, importers, and distributors for damage caused by defective products. A product is defective when it does not meet the safety expectations that a reasonable person would have — and the defect can be in the product's design, manufacturing, or the warnings and instructions provided with it. For AI systems, the product liability framework creates three potential defect categories: design defects (where the algorithm or training methodology produces systematically unsafe outputs); manufacturing defects (where a specific deployment instance of the AI system diverges from its intended design due to implementation error); and instruction/warning defects (where the system is deployed without adequate disclosure of its limitations, error rates, and contexts in which it should not be relied upon). Practice may vary by authority and year — verify current Turkish product liability regulation applicability to software and AI systems and the specific defect standards applicable to the AI system type before designing any product liability defense structure.

A Turkish Law Firm advising on who bears product liability in the AI supply chain must explain that Turkey's product liability framework imposes liability on multiple parties in the AI system's supply chain — the developer of the underlying AI model, the company that integrates the model into a deployed product, the importer who brings a foreign AI system into Turkey, and in some cases the distributor who makes the system available to end users. These parties can face joint and several liability for product defect damages, with rights of contribution between them that must be addressed through contractual indemnity structures. A Turkish company that integrates a foreign AI model into a product it sells to Turkish consumers may find itself facing full product liability exposure as the "producer" of the Turkish product — even if the defect originated in the foreign AI component — with its only recourse being a contractual indemnity claim against the foreign component supplier. This supply chain liability exposure makes the contractual relationship between AI integrators and upstream AI model suppliers one of the most commercially significant legal documents in any Turkish AI product deployment. Practice may vary — verify current product liability supply chain liability standards applicable to AI system component structures and the specific indemnity mechanism requirements for effective contractual protection before finalizing any AI integration agreement.

A lawyer in Turkey advising on the interaction between product liability and AI explainability must explain that one of the most practically significant product liability issues for AI systems in Turkey is the "instruction defect" category — which requires that the product be accompanied by adequate warnings, instructions, and disclosures about its safe use. For AI systems that make decisions with material consequences for users — credit scoring systems, medical diagnostic tools, fraud detection algorithms, hiring screening tools — the adequacy of the disclosure about the system's limitations, error rates, and contexts in which human review is necessary is a product liability question, not merely an ethical question. A credit scoring AI that is deployed without adequate disclosure that it produces false negatives at a specific rate for a particular demographic group, or a medical AI that is deployed without adequate disclosure of its validation data limitations, has a potential instruction defect regardless of the technical sophistication of the underlying algorithm. We advise AI product companies to treat the user-facing documentation and limitation disclosures as a product liability document — reviewed by legal counsel with product liability expertise — rather than as marketing material or technical supplementary documentation. Practice may vary — verify current Turkish product liability instruction defect standards for AI-driven decision systems before finalizing any user documentation or deployment disclosure for an AI product in Turkey.

KVKK data protection obligations for AI systems

An Istanbul Law Firm advising on KVKK obligations for AI systems must explain that the Personal Data Protection Law (KVKK, Law No. 6698) creates a comprehensive set of obligations for any organization that processes personal data in Turkey or that processes the personal data of Turkish residents — including organizations operating AI systems that collect, analyze, or make decisions based on personal data. The core KVKK obligations that AI systems must satisfy include: identifying the legal basis for each personal data processing activity (consent, legitimate interest, contract performance, or legal obligation); providing transparency information to data subjects at the time of data collection; implementing security measures appropriate to the sensitivity of the data and the risk of unauthorized access; and — critically for AI systems that make significant automated decisions — providing data subjects with rights to challenge those decisions, request human review, and object to processing. The KVKK Board (Kişisel Verileri Koruma Kurulu) has issued decisions against companies with AI-driven data processing that failed to meet these standards, and has imposed administrative fines. Practice may vary by authority and year — verify current KVKK Board guidance on AI-specific data processing obligations and the specific legal basis requirements applicable to the AI system's data processing activities before any AI deployment that involves personal data processing.

A law firm in Istanbul advising on automated decision-making under KVKK must explain that KVKK Article 11 gives data subjects the right to object to automated processing of their personal data that produces decisions with significant consequences for them — including automated credit decisions, insurance underwriting, employment screening, and medical risk assessment. Unlike the EU GDPR's explicit prohibition on solely automated significant decisions (GDPR Article 22), KVKK's framework is less prescriptive — but the KVKK Board's guidance and enforcement practice have increasingly aligned with GDPR standards on automated decision-making, requiring that data subjects be informed when a significant decision affecting them has been made by an automated system, that they have the right to request human review of the decision, and that the organization can explain the factors that contributed to the automated decision. An AI system that makes significant decisions about Turkish data subjects without implementing these transparency and review mechanisms creates both a KVKK compliance risk and a potential basis for individual claims. We assess automated decision-making scope and design KVKK-compliant review mechanisms for every AI mandate involving data-driven decision systems. Practice may vary — verify current KVKK Board guidance on automated decision-making rights and the specific transparency and review mechanism standards expected for the AI system type before deployment. The KVKK compliance framework is analyzed in the resource on GDPR and KVKK compliance for international companies in Turkey.

An English speaking lawyer in Turkey advising on data breach liability for AI systems must explain that KVKK Article 12 requires data controllers to notify the KVKK Board of a personal data breach "as soon as possible" — with Board guidance indicating 72 hours for high-risk breaches. AI systems that are breached — through adversarial attacks, data poisoning, model inversion attacks, or conventional security failures — create data breach notification obligations that trigger independently of whether the breach was caused by a KVKK compliance failure or by a sophisticated external attack. The 72-hour notification window runs from the moment the data controller discovers or should have discovered the breach — and an organization that has AI systems actively monitoring for security incidents has a higher legal standard for "when it should have known" than an organization relying on passive monitoring. The notification must describe the breach's nature, the categories and approximate number of affected data subjects, the likely consequences, and the remediation measures taken. We prepare AI-specific incident response protocols for clients that integrate the technical detection and response processes with the KVKK notification timeline — so that when a breach is detected, the 72-hour notification preparation can begin immediately rather than after the technical response is complete. Practice may vary — verify current KVKK Board breach notification standards for AI-specific security incidents and the specific notification content requirements before finalizing any AI incident response protocol.
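The 72-hour notification window described above is, operationally, a deadline computation that an incident response tool can track from the moment of discovery. The following is a minimal illustrative sketch, not KVKK terminology: the class, field names, and record structure are assumptions chosen to mirror the notification content elements listed in the paragraph (breach nature, affected categories and counts, remediation measures).

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative sketch only: the 72-hour window follows the Board guidance
# discussed above; all names here are assumptions, not official KVKK terms.
NOTIFICATION_WINDOW = timedelta(hours=72)

@dataclass
class BreachIncident:
    discovered_at: datetime                 # when the controller discovered (or should have discovered) the breach
    data_categories: list[str]              # categories of personal data affected
    affected_subjects_estimate: int         # approximate number of affected data subjects
    remediation_measures: list[str] = field(default_factory=list)

    def notification_deadline(self) -> datetime:
        """Deadline for notifying the KVKK Board under the 72-hour guidance."""
        return self.discovered_at + NOTIFICATION_WINDOW

    def hours_remaining(self, now: datetime) -> float:
        """Hours left before the notification deadline (negative if overdue)."""
        return (self.notification_deadline() - now).total_seconds() / 3600

incident = BreachIncident(
    discovered_at=datetime(2024, 3, 1, 9, 0, tzinfo=timezone.utc),
    data_categories=["identity", "financial"],
    affected_subjects_estimate=1200,
)
print(incident.notification_deadline())  # → 2024-03-04 09:00:00+00:00
```

Anchoring the clock to "discovered or should have discovered" rather than to the end of the technical response is the point the paragraph makes: the notification preparation and the technical investigation must run in parallel.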

Consumer protection and AI — TKHK obligations

A Turkish Law Firm advising on consumer protection obligations for AI systems must explain that the Consumer Protection Law (TKHK, Law No. 6502) and its implementing regulations impose mandatory disclosure obligations on companies that provide AI-powered services to Turkish consumers — and that these obligations apply regardless of whether the underlying system is characterized as a product or a service. Specific TKHK obligations relevant to AI include: mandatory disclosure of the terms and conditions of service in a clear and understandable form (which for AI services must address the system's decision-making logic, limitations, and the consumer's rights when the AI makes an error); the 14-day right of withdrawal for distance contracts (which applies to subscription-based AI service contracts unless a specific exception applies); and the prohibition on unfair commercial practices (which includes the prohibition on misleading commercial communications about AI system capabilities). A company that markets an AI system with performance claims that the system does not reliably achieve — for example, claiming 99% accuracy for a medical diagnostic tool whose actual performance in the Turkish patient population is significantly lower — creates an unfair commercial practice exposure under TKHK independently of any product liability or tort claim. Practice may vary — verify current TKHK implementing regulation requirements for AI service disclosures and the specific unfair commercial practice standards applicable to AI capability marketing before finalizing any consumer-facing AI product or service documentation.

An Istanbul Law Firm advising on dispute resolution for consumer AI complaints must explain that Turkish consumers who have a dispute with an AI service or product provider have access to the Consumer Arbitration Committee (Tüketici Hakem Heyeti) for disputes below the applicable monetary threshold and to the Consumer Courts (Tüketici Mahkemeleri) for larger disputes — and both forums are available without requiring the consumer to first exhaust any internal complaint mechanism the company may have established. An AI company that receives a consumer complaint about an AI decision — a credit denial, an insurance premium that the consumer believes was calculated incorrectly, a content moderation decision that the consumer believes was erroneous — faces a formal consumer dispute proceeding if the complaint is not resolved to the consumer's satisfaction within the required response period. The Arbitration Committee and Consumer Courts do not have specialist technical expertise in AI systems, which means that the quality of the expert evidence submitted by the company to explain the AI system's operation and the reasonableness of its decision is critical to the proceeding's outcome. We advise AI companies to maintain decision-specific audit logs that can be extracted and presented as evidence in consumer dispute proceedings — so that when a complaint is filed, the factual basis for the AI's decision can be documented and defended. Practice may vary — verify current Consumer Arbitration Committee jurisdiction thresholds and the specific evidence standards applicable to AI decision complaints before designing any consumer complaint response process for an AI product or service.
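The decision-specific audit logs recommended above can take many technical forms; the following is one minimal sketch of a per-decision evidence record, under stated assumptions (every field name is illustrative, and the integrity hash is one possible tamper-evidence technique, not a Turkish evidentiary requirement).

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_id: str, model_version: str, inputs: dict,
                    output: str, confidence: float, human_reviewed: bool) -> dict:
    """Build an audit record that can later be extracted and presented as
    evidence in a consumer dispute proceeding. Field names are illustrative."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # ties the decision to a specific model build
        "inputs": inputs,                 # the factors that contributed to the decision
        "output": output,
        "confidence": confidence,
        "human_reviewed": human_reviewed,
    }
    # Integrity hash over the serialized record, so the log can be shown untampered.
    payload = json.dumps(record, sort_keys=True).encode()
    record["integrity_sha256"] = hashlib.sha256(payload).hexdigest()
    return record

rec = log_ai_decision(
    "credit-scorer", "2.4.1",
    inputs={"income_band": "B", "payment_history_score": 610},
    output="declined", confidence=0.87, human_reviewed=False,
)
```

Recording the model version alongside the inputs matters for the evidentiary purpose the paragraph describes: a complaint is typically filed long after the decision, and the deployed model may have changed in the meantime.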

A lawyer in Turkey advising on unfair contractual terms in AI service agreements must explain that TKHK Article 5 and the Regulation on Unfair Terms in Consumer Contracts prohibit contractual terms in consumer contracts that create a significant imbalance in the parties' rights and obligations to the consumer's detriment — and several AI service agreement provisions that are standard in international templates are potentially void as unfair terms under Turkish law. Specific examples include: broad exclusions of liability for AI system errors that leave the consumer with no remedy for damage caused by the system's failures; terms that allow the AI service provider to unilaterally modify the system's decision-making criteria without notice; and terms that require the consumer to waive their right to challenge automated decisions. A Turkish consumer contract for an AI service that contains these provisions is contractually defective — the unfair terms are void while the remainder of the contract continues in force — and the voidness can be raised by the consumer in any dispute, including before the Arbitration Committee. We review AI service terms and conditions for TKHK unfair term compliance as a standard step in every consumer-facing AI product legal review. Practice may vary — verify current TKHK unfair terms regulation standards applicable to AI service agreement provisions before finalizing any consumer-facing AI service contract for the Turkish market.

Sector-specific AI regulation — banking, healthcare, and telecommunications

An English speaking lawyer in Turkey advising on AI regulation in the Turkish banking sector must explain that the Banking Regulation and Supervision Agency (BDDK) has issued guidance and regulatory requirements that affect how Turkish banks and financial institutions deploy AI and automated decision-making systems — particularly in credit assessment, fraud detection, anti-money laundering screening, and customer interaction. BDDK's information security regulation and operational risk framework require banks to maintain explainability and auditability of significant algorithmic decisions, implement human oversight for decisions above defined risk thresholds, and validate AI model performance against defined benchmarks before deployment. For foreign fintech companies providing AI-powered services to Turkish banks under B2B arrangements, the BDDK's outsourcing regulation imposes requirements on the bank's vendor management that effectively regulate the AI service provider's practices — the bank must ensure that its AI vendors meet BDDK information security standards, maintain audit rights over the vendor's AI operations, and have contingency plans if the vendor's AI service is interrupted. Practice may vary by authority and year — verify current BDDK guidance on AI model governance and the specific outsourcing regulation requirements applicable to fintech AI service providers before any AI deployment in the Turkish banking sector. The fintech regulatory compliance framework is analyzed in the resource on legal advisory for AI and trading startups in Turkey.

A Turkish Law Firm advising on AI regulation in Turkish healthcare must explain that the Ministry of Health and the Public Health Institution regulate the use of AI in Turkish clinical settings through the Medical Devices Regulation (implementing EU MDR standards) and the Health Information Systems regulation framework. An AI system used for clinical decision support — diagnostic imaging analysis, drug interaction checking, patient risk stratification — that meets the regulatory definition of a medical device requires CE marking under the Medical Devices Regulation, registration with the Turkish Medicines and Medical Devices Agency (TITCK), and in some cases a clinical evaluation demonstrating performance in the Turkish patient population. A healthcare AI system that is marketed as a "clinical decision support tool" in an attempt to avoid the medical device regulatory classification — but that in practice produces outputs that clinicians rely on in making clinical decisions — faces the regulatory risk of the TITCK reclassifying it as an unregistered medical device. The civil liability consequences of a healthcare AI failure — misdiagnosis, incorrect drug dosing recommendation, incorrect surgical planning output — are assessed under TBK's tort framework and the medical malpractice standards developed by Turkish courts, where the healthcare provider bears the primary liability but may have contractual recourse against the AI system supplier. Practice may vary — verify current TITCK medical device classification standards for clinical AI systems and the specific clinical evaluation requirements applicable to the AI system's intended use before any healthcare AI deployment in Turkey.

A lawyer in Turkey advising on AI regulation by the Information and Communication Technologies Authority (BTK) must explain that BTK regulates electronic communication networks and services in Turkey — and AI systems deployed in telecommunications infrastructure (network optimization algorithms, automated customer service systems, fraud detection), in digital platforms subject to the Law on Regulation of Publications on the Internet (Law No. 5651), and in information society services subject to BTK's electronic communication regulations are subject to BTK supervisory jurisdiction. For digital platforms above defined user thresholds operating in Turkey, Law No. 5651 (as significantly amended in 2022) imposes obligations to respond to content removal requests within defined periods, appoint a Turkish representative, store Turkish user data in Turkey, and provide statistical reports on content moderation decisions — all of which have direct AI governance implications for platforms that use AI for content moderation, recommendation, and user management. BTK's enforcement powers include bandwidth throttling and in serious cases blocking of non-compliant platforms, creating operational exposure that is more immediately disruptive than a court judgment. Practice may vary by authority and year — verify current BTK and Law No. 5651 compliance requirements for AI-enabled digital platforms and the specific Turkish user data localization requirements before any AI deployment affecting Turkish internet users. The cybersecurity law and data protection framework is analyzed in the resource on cybersecurity law in Turkey: compliance obligations for companies.

Contractual liability allocation in AI deployments

An Istanbul Law Firm advising on AI contract structure must explain that the most effective risk management tool available to AI developers, integrators, and operators in Turkey is a well-designed contractual liability allocation framework — because the statutory liability framework distributes risk across multiple parties in ways that may not match the commercial understanding of where the risk actually lies. The key contractual provisions in an AI deployment agreement include: a precise scope of use clause defining the specific contexts in which the AI system is authorized to be used (which determines whether harm arising from out-of-scope use can be attributed to the deployer rather than the developer); performance specification and acceptance testing requirements (which establish the agreed standard against which defect claims are assessed); limitation of liability clauses structured to comply with TBK's restrictions on excluding liability for gross negligence and intent while limiting exposure for ordinary negligence; indemnity provisions allocating specific liability categories (third-party data protection claims, IP infringement claims, regulatory fines) between the parties; and a warranty structure that addresses what the developer warrants about the AI system's accuracy, reliability, and fitness for its intended purpose without making unenforceable performance guarantees. Practice may vary — verify current TBK limitations on contractual liability exclusion and the specific consumer protection implications of liability limitation clauses in AI service contracts before finalizing any AI deployment agreement for the Turkish market.

A law firm in Istanbul advising on service level agreements for AI systems must explain that SLA provisions for AI systems require specific design attention because standard IT SLA metrics — uptime, response time, throughput — do not capture the performance dimensions most relevant to AI liability. An AI system can be "available" (meeting uptime requirements) while simultaneously producing significantly degraded output quality (model drift, training data staleness, adversarial attack degradation) — and standard SLA provisions that focus only on availability will not give the deployer contractual recourse against the developer for quality degradation that does not constitute system unavailability. AI-specific SLA provisions should include: model performance benchmarks (accuracy, precision, recall, or other relevant metrics) measured against defined test data sets at defined intervals; data freshness requirements specifying how often the training data must be updated; monitoring and reporting obligations requiring the developer to proactively disclose performance degradation; and remediation timelines and consequences for performance metric failures. These provisions give the deployer contractual tools to respond to AI quality degradation before it causes harm rather than after. Practice may vary — verify current Turkish commercial court standards for SLA enforcement and the specific evidence requirements for AI performance benchmark disputes before finalizing any AI system SLA structure.
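As a sketch of how a performance-benchmark SLA clause like the one described above might be monitored in practice: the metrics below (accuracy, precision, recall) are the ones the paragraph names, while the floor values and function names are hypothetical assumptions, not figures from any regulation or contract.

```python
def sla_metrics(true_labels: list[int], predictions: list[int]) -> dict:
    """Compute accuracy, precision, and recall for a binary classifier
    against a defined test set, as an SLA benchmark clause might require."""
    tp = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(true_labels, predictions) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(true_labels, predictions) if t == p)
    return {
        "accuracy": correct / len(true_labels),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Hypothetical floor values agreed in the contract (illustrative only).
SLA_FLOORS = {"accuracy": 0.90, "precision": 0.85, "recall": 0.80}

def sla_breaches(metrics: dict) -> list[str]:
    """Return the metrics that fell below their contractual floor."""
    return [name for name, floor in SLA_FLOORS.items() if metrics[name] < floor]

m = sla_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
# accuracy = 4/6, precision = 2/3, recall = 2/3 — all below the floors here
```

The key design point matches the paragraph: a system passing a 99.9% uptime clause could still fail every one of these quality floors, which is exactly the gap an availability-only SLA leaves open.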

An English speaking lawyer in Turkey advising on open-source AI component liability must explain that Turkish companies deploying AI systems that incorporate open-source components — including open-source large language models, computer vision frameworks, and machine learning libraries — face a specific liability gap where the open-source license's disclaimer of warranty and limitation of liability is potentially unenforceable against Turkish consumers and third parties under TKHK's mandatory consumer protection standards and TBK's tort liability framework. An open-source software license that disclaims all warranties and excludes all liability may effectively disclaim liability between the developer and the open-source community — but it does not disclaim the liability of the Turkish company that integrates the open-source component into a commercial product or service offered to Turkish users. The Turkish company deploying the open-source AI component is the "producer" for product liability purposes and faces full liability for component defects, with its only contractual recourse being against the upstream supplier — which in the case of open-source software is typically unavailable. We advise clients to treat open-source AI components with the same supplier due diligence rigor as proprietary AI components — evaluating the component's security track record, vulnerability disclosure history, and community maintenance status — because the liability consequences of an open-source component failure fall entirely on the Turkish deployer. Practice may vary — verify current Turkish product liability standards for open-source software components before any commercial deployment of open-source AI systems to Turkish users.

Turkish National AI Strategy and evolving regulation

A Turkish Law Firm advising on Turkey's regulatory trajectory for AI must explain that Turkey published its National Artificial Intelligence Strategy (Ulusal Yapay Zeka Stratejisi) for 2021–2025, which articulates a policy framework emphasizing AI development and adoption across public services, industry, and research — alongside a commitment to developing an ethical and legal framework for AI governance. The Strategy's implementation has produced regulatory activities across multiple government departments, but as of this writing Turkey does not have a dedicated AI liability law or a comprehensive AI regulation framework comparable to the EU AI Act. The regulatory trajectory is toward increased sector-specific AI governance requirements — particularly in financial services (BDDK), healthcare (TITCK), and digital platforms (BTK) — rather than a horizontal AI liability statute in the immediate term. Foreign companies operating AI systems in Turkey should track these sector-specific developments rather than waiting for a single comprehensive AI law. Practice may vary by authority and year — the regulatory landscape for AI in Turkey is actively developing; verify current published regulatory guidance from BDDK, BTK, TITCK, and the Ministry of Industry and Technology before finalizing any AI compliance framework for Turkey.

An Istanbul Law Firm advising on EU AI Act spillover effects in Turkey must explain that while Turkey is not an EU member state and the EU AI Act does not directly apply to Turkish domestic operations, the EU AI Act has significant practical implications for Turkish companies in three specific contexts: Turkish companies exporting AI systems to EU markets (which must meet EU AI Act conformity requirements); Turkish subsidiaries of EU companies deploying AI systems that are governed by their parent company's EU AI Act compliance program; and Turkish companies whose AI systems process the personal data of EU residents (triggering GDPR obligations that interact with the EU AI Act's requirements for high-risk AI systems). For Turkish AI developers and exporters with EU market ambitions, the EU AI Act's risk classification framework — distinguishing between prohibited AI applications, high-risk AI systems requiring conformity assessment, and lower-risk systems — directly affects their product roadmap and certification requirements. We advise Turkish AI companies with EU market exposure on dual compliance strategies that satisfy both Turkish regulatory requirements and EU AI Act conformity requirements. Practice may vary — verify current EU AI Act implementation timeline and the specific conformity assessment requirements applicable to the AI system type before designing any dual-compliance framework for Turkish-EU AI deployment.

A lawyer in Turkey advising on AI governance frameworks for proactive compliance must explain that in the current Turkish regulatory environment — where AI-specific legislation is developing but not yet comprehensive — the most effective legal risk management strategy for AI companies is implementing a governance framework that anticipates the direction of regulation rather than waiting for specific compliance requirements to be enacted. Best practice AI governance frameworks include: a formal AI inventory documenting all AI systems in production, their intended use, the data they process, their decision scope, and their accuracy metrics; a risk classification system that applies heightened governance standards to high-stakes AI applications (credit decisions, medical diagnosis, employment screening, criminal justice, public safety); an AI model validation protocol requiring independent testing before deployment and periodic re-validation against current performance benchmarks; an incident response procedure specifically designed for AI failures distinguishing between model performance degradation, security incidents, and data protection breaches; and a transparency and explainability program providing affected individuals with meaningful information about AI decisions affecting them. These elements collectively position the company favorably in any regulatory investigation, consumer dispute, or litigation by demonstrating a systematic precaution program. Practice may vary — check current guidance before acting on any information on this page.
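For technical teams implementing the AI inventory and risk classification described above, the following is a minimal illustrative sketch only — the field names, risk tiers, and category labels are hypothetical working assumptions, not a Turkish regulatory schema or a legally mandated format:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical labels mirroring the "high-stakes" categories named in the text.
HIGH_STAKES_USES = {
    "credit_decision", "medical_diagnosis", "employment_screening",
    "criminal_justice", "public_safety",
}

@dataclass
class AISystemRecord:
    """One entry in a formal AI inventory (illustrative schema only)."""
    name: str
    intended_use: str        # documented purpose of the system
    data_categories: list    # e.g. ["personal", "financial"]
    decision_scope: str      # "advisory" or "automated"
    accuracy_metric: str     # e.g. "AUC 0.91 on holdout set"
    last_validated: date     # date of most recent independent validation

    def risk_tier(self) -> str:
        # Heightened governance standards apply to high-stakes applications.
        return "high" if self.intended_use in HIGH_STAKES_USES else "standard"

record = AISystemRecord(
    name="loan-scoring-v3",
    intended_use="credit_decision",
    data_categories=["personal", "financial"],
    decision_scope="automated",
    accuracy_metric="AUC 0.91 (holdout, 2024-Q4)",
    last_validated=date(2024, 11, 1),
)
print(record.risk_tier())  # a credit-decision system falls in the high tier
```

A structured record of this kind is what allows the inventory to feed the other governance elements mechanically: the risk tier drives the validation cadence, and the `last_validated` date drives re-validation scheduling.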

Insurance and risk transfer for AI exposure

An English-speaking lawyer in Turkey advising on AI liability insurance must explain that the Turkish insurance market currently offers coverage for AI-related risks primarily through technology errors and omissions (E&O) policies, cyber liability policies, and product liability policies — and the coverage provided by each differs in ways that create potential gaps when an AI system failure gives rise to multiple simultaneous liability claims. A technology E&O policy typically covers claims arising from failures to perform professional services as contracted — which captures AI system output errors that cause client financial loss. A cyber liability policy typically covers costs arising from data security incidents — which captures KVKK breach notification costs, regulatory fines, and claims from individuals whose data was compromised. A product liability policy covers bodily injury and property damage caused by a defective product — which captures physical harm caused by an AI system embedded in a physical product. A company whose AI system simultaneously causes financial harm to a client (E&O), involves a data breach (cyber), and causes physical damage (product liability) may find that each policy covers only part of the loss, and that the three policies' interaction creates coverage gaps. Practice may vary — verify current Turkish insurance market AI coverage availability and the specific policy terms and exclusions applicable to the AI system type before finalizing any AI insurance program.

A Turkish Law Firm advising on insurance coordination with contractual indemnity must explain that insurance coverage and contractual indemnity structures must be designed together — because a contractual indemnity obligation that is not backed by insurance coverage is only as valuable as the indemnifying party's financial capacity to pay, and insurance coverage for AI risks that is not aligned with the contractual indemnity structure may produce situations where the insurer denies coverage on the grounds that the loss was caused by a contractual liability assumed voluntarily rather than arising from statute. The contractual indemnity provisions between an AI developer and an AI deployer should be reviewed against each party's insurance coverage to confirm that the indemnity obligations are within the indemnifying party's policy limits, that the policy covers the specific categories of loss covered by the indemnity, and that the policy's notice and cooperation requirements are compatible with the contract's dispute resolution timeline. We review AI deployment contracts and the parties' insurance programs together to identify and address structural gaps between contractual risk allocation and insurance coverage. Practice may vary — verify current Turkish insurance law requirements for contractual liability policy coverage and the specific policy coordination requirements applicable to the AI deployment structure before finalizing any AI insurance and indemnity framework.

A lawyer in Turkey advising on critical infrastructure AI insurance requirements must explain that AI systems deployed in critical infrastructure contexts — energy, water, telecommunications, banking payment systems, healthcare — face mandatory insurance requirements under sector-specific regulation that are independent of the general product liability and E&O insurance market. These mandatory requirements vary by sector and are administered by different regulatory authorities: BDDK for banking, EPDK for energy, BTK for telecommunications, and TITCK for healthcare. A foreign company deploying AI in a Turkish critical infrastructure context must confirm the applicable mandatory insurance requirements before deployment — because operating without required insurance is both a regulatory violation and a factor that courts consider in assessing the company's general compliance posture in liability proceedings. We identify and verify mandatory insurance requirements across all applicable regulatory frameworks in every critical infrastructure AI deployment mandate. Practice may vary by authority and year — verify current sector-specific insurance requirements for AI deployments in the relevant critical infrastructure sector before any deployment in a regulated Turkish critical infrastructure context.

Litigation and dispute resolution for AI failures

An Istanbul Law Firm advising on AI dispute jurisdiction in Turkey must explain that when an AI system failure gives rise to legal proceedings, the choice of court or arbitration forum significantly affects both the speed of resolution and the quality of the technical analysis applied to the dispute. Turkish commercial courts (Ticaret Mahkemeleri) have jurisdiction over most AI-related commercial disputes, consumer courts have jurisdiction over consumer AI disputes, and administrative courts have jurisdiction over regulatory enforcement challenges. Turkish commercial courts do not have specialist AI divisions — they apply the same rules of evidence and procedure as all commercial disputes, relying on court-appointed experts (bilirkişiler) to analyze technical questions. The quality of the bilirkişi analysis varies significantly depending on the court's ability to identify qualified AI experts and the parties' ability to challenge inadequate expert reports. For high-value or technically complex AI disputes, international commercial arbitration (ISTAC, ICC, or LCIA) provides access to arbitrators with specific technical expertise and a more flexible evidence process — and Turkey's arbitration framework under the International Arbitration Law (Law No. 4686) generally supports enforcement of arbitration awards for B2B AI disputes. Practice may vary — verify current Turkish commercial court bilirkişi appointment procedures for AI disputes and the specific arbitration framework applicable to the AI contract's governing law before selecting any dispute resolution mechanism in AI deployment agreements.

A law firm in Istanbul advising on evidence in AI disputes must explain that AI disputes require specialized evidence collection and preservation that differs fundamentally from conventional commercial disputes. The relevant evidence in an AI liability case includes: the AI system's training data (or a representative sample, since full training data is often commercially sensitive); the model architecture and hyperparameter configuration at the time of the disputed decision; the specific input data that was provided to the model for the disputed decision; the model's output for that input, including confidence scores or probability distributions if available; the audit log showing when the model version was deployed and whether it had been updated since validation; and the monitoring reports showing the model's performance metrics in the period before the disputed decision. This evidence is typically in the control of the AI system's developer or operator, and a claimant who does not have contractual audit rights will face significant evidence collection challenges. We advise AI companies to implement decision-level logging systems that capture the inputs, outputs, and model version for each significant decision — both as a quality management tool and as an evidence preservation measure. Practice may vary — verify current Turkish commercial court evidence admissibility standards for AI audit logs and decision records before designing any AI logging architecture with litigation evidence implications. The commercial litigation framework is analyzed in the resource on commercial litigation in Turkey.
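A decision-level logging system of the kind described above can be sketched as follows. This is an illustrative sketch only, not an evidentiary standard: the field names, file format, and hashing scheme are assumptions. Hashing the inputs lets each log entry be matched later against raw data retained elsewhere, without duplicating commercially sensitive or personal data inside the audit log itself:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: dict,
                 log_file: str = "decisions.jsonl") -> dict:
    """Append one decision record to an append-only JSONL audit log.

    Captures the three elements named in the text: the input provided to
    the model (as a digest), the model's output including any confidence
    score, and the model version that produced the decision.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which deployed model decided
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,                # decision plus confidence score
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

The append-only format matters: contemporaneous, immutable records created in the ordinary course of operation are far more persuasive in a dispute than records reconstructed after a claim arises.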

An English-speaking lawyer in Turkey advising on the role of mandatory mediation in AI disputes must explain that since 2019, Turkish commercial disputes concerning monetary claims (receivables and compensation) are subject to mandatory mediation (zorunlu arabuluculuk) before a court lawsuit can be filed — and AI-related commercial disputes that are characterized as contractual disputes (breach of AI service agreement, warranty claims, SLA penalty disputes) are subject to this mandatory mediation requirement. The mandatory mediation session is a procedural prerequisite — failure to complete mediation before filing a commercial lawsuit results in the court dismissing the case on procedural grounds. For AI disputes, mandatory mediation has a practical advantage beyond procedure: a technical dispute about AI system performance is often more effectively resolved through a mediated settlement that includes technical remediation commitments, performance monitoring protocols, and compensation — than through a court judgment that simply awards monetary damages. We participate in AI dispute mediation sessions with technical advisors alongside legal counsel, because the most commercially valuable settlement outcomes in AI disputes typically combine financial and technical components. Practice may vary — verify current mandatory mediation scope for AI service contract disputes and the specific mediation application procedures applicable at the relevant court before filing any AI commercial dispute proceeding.

How we work in AI liability mandates

A lawyer in Turkey managing an AI liability mandate begins with a legal framework mapping exercise: identifying which of the multiple overlapping legal frameworks (TBK tort, product liability, KVKK, TKHK, sector-specific regulation) applies to the specific AI system, in the specific deployment context, for the specific category of potential harm. This mapping exercise produces a risk matrix that identifies the liability exposure by framework, the party in the AI supply chain who bears that exposure under the applicable legal standard, and the contractual and governance measures that most effectively address that exposure. Only after this mapping is complete does the work of contract drafting, compliance program design, and risk allocation structuring begin — because without the framework mapping, the contractual and governance measures may address the wrong risks or leave the most significant exposures unaddressed.

ER&GUN&ER advises AI developers, integrators, deployers, and operators across the full spectrum of Turkish AI legal risk — TBK tort and strict liability framework analysis, product liability defect assessment and defense, KVKK automated decision-making compliance, consumer protection law compliance for AI services, sector-specific regulatory compliance (BDDK, TITCK, BTK), AI deployment contract drafting and negotiation, SLA design for AI performance obligations, open-source AI component liability assessment, insurance program design and contractual alignment, AI governance framework development, mandatory mediation representation, commercial court litigation, and administrative court regulatory enforcement challenges. We work in English throughout all international mandates. For the data protection compliance framework — covering KVKK obligations applicable to all personal data processing including AI systems — see the resource on personal data protection law in Turkey. For the cybersecurity law framework — covering BTK and sector-specific cybersecurity obligations — see the resource on cybersecurity law in Turkey. Practice may vary — check current guidance before acting on any information on this page.

Frequently Asked Questions

  • Does Turkey have a specific AI liability law? No — Turkey does not yet have a dedicated AI liability statute. Legal liability for AI system failures is currently distributed across the Turkish Code of Obligations (TBK) tort framework, the Product Liability Regulation, KVKK data protection law, consumer protection law (TKHK), and sector-specific regulatory frameworks in banking (BDDK), healthcare (TITCK), and telecommunications (BTK). Turkey's National AI Strategy for 2021–2025 commits to developing an AI governance framework, but comprehensive AI legislation has not yet been enacted as of this writing. Practice may vary — verify current legislative developments.
  • Who bears legal liability when an AI system causes harm in Turkey? Liability depends on which TBK framework applies and who in the AI supply chain is legally characterized as the responsible party. Potential liable parties include: the AI model developer (for design defects); the system integrator (as the "producer" of the deployed product); the operator (for deployment and monitoring failures under TBK Article 66 employer liability); and in some cases the user organization (for out-of-scope deployment or failure to implement required human oversight). These parties can face joint and several liability with contribution rights between them. Practice may vary — verify current Turkish court liability allocation standards for the specific AI system type.
  • Does TBK Article 71 strict liability apply to AI systems? TBK Article 71 (hazardous activity liability) can apply to AI systems that create unusual danger beyond ordinary life risk — autonomous vehicles, industrial robots, medical diagnostic AI, and high-frequency trading algorithms are the most likely candidates. Strict liability under Article 71 does not require proof of fault — it requires only demonstrating that the hazardous activity caused the damage. Turkish courts have not definitively ruled on the Article 71 classification for specific AI categories, but the risk of an unexpected strict liability finding justifies structuring AI liability management assuming Article 71 could apply for high-risk applications. Practice may vary — verify current Turkish legal commentary and court decisions on Article 71 applicability to AI.
  • What KVKK obligations apply specifically to AI systems? Core KVKK obligations for AI include: identifying the legal basis for all personal data processing activities; providing transparency information to data subjects at the time of collection; implementing security measures appropriate to data sensitivity and processing risk; notifying the KVKK Board within 72 hours of becoming aware of a data breach; and honoring the data subject's right under KVKK Article 11 to object to a result produced exclusively through automated analysis of their data. The KVKK Board has increasingly aligned its guidance with GDPR standards on automated decision-making. Practice may vary — verify current KVKK Board guidance on AI-specific obligations before any AI deployment involving personal data processing.
  • What is an "instruction defect" in AI product liability? An instruction defect occurs when an AI system is deployed without adequate disclosure of its limitations, error rates, and contexts where it should not be relied upon — and this inadequate disclosure contributes to harm. For clinical AI, credit AI, or employment screening AI, the adequacy of the limitation disclosures is a product liability question. A system that is technically sophisticated but marketed and deployed without adequate disclosure of known performance limitations has an instruction defect under Turkey's product liability framework. Practice may vary — verify current product liability instruction defect standards for AI-driven decision systems before finalizing any user documentation for an AI product.
  • Do Turkish consumer protection laws apply to AI services? Yes — TKHK (Law No. 6502) applies to AI services provided to consumers, imposing mandatory disclosure obligations, the 14-day right of withdrawal for distance contracts, and the prohibition on unfair commercial practices (including misleading capability claims about AI performance). TKHK's unfair contractual terms regulation can void standard AI service agreement provisions that exclude all liability for AI errors or prevent consumers from challenging automated decisions. Practice may vary — verify current TKHK requirements for AI service contracts and the specific unfair term standards applicable to AI service agreements before finalizing any consumer-facing AI service contract for Turkey.
  • How does BDDK regulate AI in Turkish banking? BDDK's information security regulation and operational risk framework require Turkish banks to maintain explainability and auditability of significant algorithmic decisions, implement human oversight for decisions above defined risk thresholds, and validate AI models before deployment. BDDK's outsourcing regulation effectively regulates AI service providers to banks through the bank's vendor management obligations. Foreign fintech companies providing AI to Turkish banks must meet BDDK information security standards and maintain audit rights compliance. Practice may vary — verify current BDDK AI model governance guidance before any AI deployment in the Turkish banking sector.
  • Does the EU AI Act affect Turkish companies? The EU AI Act does not directly apply to Turkish domestic operations. It affects Turkish companies in three specific contexts: Turkish companies exporting AI to EU markets (must meet EU AI Act conformity requirements); Turkish subsidiaries of EU companies (governed by parent company's EU AI Act compliance program); and Turkish companies processing EU residents' personal data (GDPR obligations interact with EU AI Act requirements for high-risk AI). Turkish AI developers with EU market ambitions should incorporate EU AI Act conformity requirements into their product roadmap. Practice may vary — verify current EU AI Act implementation timeline and applicable requirements for the specific AI system type.
  • What contractual provisions are essential in AI deployment agreements? Essential provisions include: precise scope of use defining authorized deployment contexts; performance specifications and acceptance testing criteria establishing the agreed standard for defect claims; TBK-compliant limitation of liability clauses (TBK Article 115 prohibits exclusion of gross negligence liability); indemnity provisions allocating specific liability categories (KVKK claims, IP claims, regulatory fines); warranty structure addressing accuracy, reliability, and fitness for purpose without unenforceable performance guarantees; and AI-specific SLA metrics covering model performance benchmarks, data freshness requirements, and performance degradation notification obligations. Practice may vary — verify current TBK liability limitation requirements and TKHK unfair term standards applicable to AI contracts before finalizing any AI deployment agreement.
  • What AI-specific SLA metrics should be included in AI service agreements? Standard IT SLA metrics (uptime, response time) are insufficient for AI systems because a system can be "available" while producing significantly degraded output quality. AI-specific SLA provisions should include: model performance benchmarks (accuracy, precision, recall, or domain-specific metrics) measured at defined intervals against defined test data; data freshness requirements specifying training data update frequency; proactive performance degradation disclosure obligations; and remediation timelines and consequences for benchmark failures. These provisions give the deployer contractual tools to respond to AI quality degradation before it causes harm. Practice may vary — verify current Turkish commercial court SLA enforcement standards before finalizing any AI system SLA structure.
  • Can open-source AI liability be disclaimed under Turkish law? Open-source license disclaimers of warranty and liability may be effective between the developer and the open-source community but do not disclaim the liability of a Turkish company that integrates the open-source component into a commercial product or service for Turkish users. The Turkish deployer is the "producer" for product liability purposes and faces full liability for component defects regardless of the open-source license terms. Turkish consumer protection law can also override open-source license disclaimers in consumer contracts. Open-source AI components require the same supplier due diligence rigor as proprietary components. Practice may vary — verify current product liability standards for open-source AI components before any commercial deployment.
  • Is mandatory mediation required for AI disputes in Turkey? Commercial AI disputes concerning monetary claims (receivables and compensation) are subject to mandatory mediation (zorunlu arabuluculuk) before a court lawsuit can be filed. Failure to complete mediation before filing results in procedural dismissal. For AI disputes, mediation can be more effective than litigation because it can incorporate technical remediation commitments alongside financial compensation — outcomes that a court monetary judgment cannot produce. Consumer AI disputes are handled through Consumer Arbitration Committees (below threshold) and Consumer Courts (above threshold). Practice may vary — verify current mandatory mediation scope and application procedures before filing any AI commercial dispute proceeding.
  • What insurance covers AI liability in Turkey? AI liability exposure is currently covered across multiple policy types: technology E&O (AI output errors causing client financial loss), cyber liability (KVKK breach costs, regulatory fines, data subject claims), and product liability (bodily injury and property damage from defective AI products). Each policy type covers different parts of the AI liability exposure, and the interaction between multiple simultaneous claims can create coverage gaps. Insurance and contractual indemnity structures must be designed together to ensure alignment. Practice may vary — verify current Turkish insurance market AI coverage availability and policy terms before finalizing any AI insurance program.
  • What evidence should be preserved for AI dispute defense? Critical evidence includes: training data (or representative sample); model architecture and hyperparameter configuration at the time of the disputed decision; specific input data provided to the model; model output including confidence scores; audit log showing model version deployment history; and performance monitoring reports for the period before the disputed decision. Decision-level logging systems that capture inputs, outputs, and model version for each significant decision serve both as quality management tools and evidence preservation measures. Contemporaneous evidence created before any dispute arises is significantly more persuasive than reconstructed evidence. Practice may vary — verify current Turkish commercial court evidence admissibility standards for AI audit records.
  • Do you advise on AI governance frameworks for proactive compliance? Yes — we develop AI governance frameworks that include: AI system inventories documenting all production AI systems; risk classification systems applying heightened governance to high-stakes applications; model validation protocols for pre-deployment and periodic re-validation; AI-specific incident response procedures distinguishing performance degradation, security incidents, and data breaches; and transparency programs providing meaningful information about AI decisions to affected individuals. These elements collectively position clients favorably in regulatory investigations, consumer disputes, and litigation by demonstrating systematic precaution programs consistent with the TBK employer liability standard.

Author: Mirkan Topcu is an attorney registered with the Istanbul Bar Association (Istanbul 1st Bar), Bar Registration No: 67874. His practice focuses on cross-border and high-stakes matters where evidence discipline, procedural accuracy, and risk control are decisive.

He advises technology developers, integrators, operators, and regulated entities across AI Liability Law (TBK), Product Liability Regulation, KVKK Automated Decision-Making Compliance, Consumer Protection Law (TKHK), Sector-Specific AI Regulation (BDDK, TITCK, BTK), AI Deployment Contract Structuring, AI Governance Framework Development, Insurance Program Design, Mandatory Mediation, and Commercial Court AI Litigation matters where regulatory precision and liability management are decisive.

Education: Istanbul University Faculty of Law (2018); Galatasaray University, LL.M. (2022).