The increasing sophistication of artificial intelligence presents unprecedented legal challenges. Traditional legal frameworks, often designed for human actors, struggle to address the actions and outputs of these complex systems. A central difficulty arises from applying established legal principles, particularly concerning intent, to entities that lack human-like consciousness. This necessitates exploring innovative approaches to regulation, moving beyond individual instances of harm to focus on systemic risk management and proactive measures by those developing and deploying these powerful technologies. How can legal responsibility be fairly and effectively assigned in a world increasingly shaped by algorithms and automated decision-making?
How should ascribing intentions and applying objective standards be used to regulate artificial intelligence programs?
The regulation of AI programs presents a unique challenge given their lack of human-like intentions. Traditional legal frameworks often rely on concepts like mens rea or intentionality to assign liability. However, AI operates based on algorithms and data, raising the question of how to adapt the law to regulate entities that lack subjective intentions. The proposed solution involves adopting strategies already familiar in legal contexts dealing with non-human or collective entities, such as corporations. These strategies include ascribing intentions to AI programs under certain circumstances and holding them to objective standards of conduct, regardless of their actual internal states.
The first strategy, ascribing intentions, would involve treating AI programs as if they possessed a particular intent, even if they do not have one in reality. This approach is analogous to how the law treats actors in intentional torts, where the law may presume intentions based on the consequences of their actions. The second strategy focuses on objective standards of behavior, such as reasonableness. Instead of assessing an AI’s subjective intent, its actions would be measured against the standard of a hypothetical reasonable person or entity in a similar situation. This could involve applying principles of negligence, strict liability, or even the heightened duty of care associated with fiduciary obligations. The choice of standard would depend on the specific context and the level of risk involved in the AI’s operation.
Responsibility and Oversight
It is important to remember that AI programs are ultimately tools used by human beings and organizations. Therefore, the responsibility for AI-related harms should primarily fall on those who design, train, implement, and oversee these systems. These individuals and companies should be held accountable for ensuring that AI programs are used responsibly, ethically, and in compliance with the law. By using objective standards, regulators can more easily assign responsibility to human beings for issues related to AI program behavior. This approach avoids the complexities of trying to understand the internal workings of AI systems and instead focuses on the readily observable actions and decisions of those who control them.
How may actors be held responsible for the harmful consequences arising from the use of AI technologies?
The question of legal responsibility concerning AI-driven harms centers not on the AI itself, but on the human actors and organizations that design, implement, and deploy these technologies. Given that AI agents lack intentions in the human sense, traditional legal frameworks predicated on mens rea are often inadequate. Therefore, responsibility is often assigned to individuals and corporations through objective standards of behavior. This approach mirrors scenarios where principals are accountable for the actions of their agents, holding that those who employ AI are responsible for harms that materialize from realized risks associated with the technology. This responsibility also includes the obligation to exercise due care in the supervision and training of the AI, analogous to liability for defective design, using a risk-utility balancing test.
Three Pillars of Responsibility
The allocation of responsibility for AI-related harms can be structured around three primary pillars. First, human actors can be held responsible through a direct assignment of intention for the AI’s actions. Second, individuals and organizations can be liable for the negligent deployment of AI programs that are inadequately trained or regulated, resulting in harm. This extends to the design, programming, and training phases. Third, entities that host AI programs for public use can be held accountable for foreseeable risks of harm to users or third parties, judged by objective standards of reasonableness and risk reduction. This third pillar creates a duty to ensure that the program is reasonably safe for ordinary use.
The core purpose of using objective standards is to make it easier to assign responsibility to human beings, who remain the pertinent parties. Technology operates within social relationships among human groups, mediated by the deployment and use of these technologies. Addressing the legal questions raised by the rise of AI therefore involves establishing which individuals and companies engaged in designing and deploying the technology must be held responsible for its use. This mirrors the legal system’s long-standing practice of assigning responsibility to artificial entities such as corporations, and it ensures that the costs of AI innovation are internalized by the responsible actors, leading to safer design, implementation, and use. The issue of AI liability is ultimately about determining which human actors control the implementation of the technology and who should bear responsibility for its risks and rewards.
What are the key distinctions between AI programs and human beings that justify different legal standards?
A fundamental distinction between AI programs and humans that warrants different legal standards lies in the presence or absence of intentions. Many legal doctrines, particularly in areas like freedom of speech, copyright, and criminal law, heavily rely on the concept of mens rea, or the intention of the actor causing harm or creating a risk of harm. AI agents, at least in their current state of development, lack intentions in the same way that humans do. This absence presents a challenge when applying existing legal frameworks designed for intentional actors, potentially immunizing the use of AI programs from liability if intention is a prerequisite. In essence, the law must grapple with regulating entities that create risks without possessing the requisite mental state for traditional culpability. The most workable response is therefore either to ascribe intention to the AI as a legal fiction or to hold AI programs, and their creators and deployers, to objective standards of conduct.
Ascribing Intent vs. Objective Standards
The law traditionally employs two primary strategies when dealing with situations where intent is difficult to ascertain or absent altogether, which are particularly relevant for AI regulation. The first approach involves ascribing intention to an entity, as seen in intentional tort law where actors are presumed to intend the consequences of their actions. The second, and perhaps more pragmatic, strategy is to hold actors to an objective standard of behavior, typically reasonableness, irrespective of their actual intentions. This is exemplified in negligence law, where liability arises from acting in a way that a reasonable person would not, or in contract law, where intent is assessed based on how a reasonable person would interpret a party’s actions. In high-stakes situations, such as fiduciary law, actors can be held to the highest standard of care, requiring special training, monitoring, and regulation of the AI program. This distinction between subjective intent and objective standards forms a critical justification for applying different legal frameworks to AI programs compared to human actors. Principals should not be able to obtain a reduced level of care by substituting an AI agent for a human agent.
The legal principles that justify subjective rather than objective standards for human beings mostly come back to the need to protect human freedom and the fear that objective standards might chill harmless or valuable conduct along with genuine wrongdoing. These arguments do not apply to AI programs: they currently exercise no individual autonomy or political liberty, they cannot be chilled from self-expression, and human freedom is not at stake. Because AI lacks subjective intentions, applying subjective standards to it makes little sense. Objective standards instead allow regulation to focus on what the responsible parties, humans, do with the technology, namely decisions about design, training, and implementation. The central aim is to ease the assignment of responsibility to the human entities that are truly central to the deployment and use of the technology in these emerging scenarios.
How should defamation law be adapted to address potential harms caused by AI-generated content?
The emergence of AI-generated content, particularly from Large Language Models (LLMs), presents novel challenges to defamation law. These models can produce false statements, often termed “hallucinations,” which can be defamatory. However, traditional defamation law, built around notions of intent and malice, struggles to apply since LLMs lack intentions in the human sense. A key adaptation is to shift the focus from the AI itself to the human actors and organizations involved in designing, training, and deploying these models. This involves applying objective standards of reasonableness and risk reduction to these actors, analogous to product liability law where manufacturers are held responsible for defective designs.
Addressing Liability in the Generative AI Supply Chain
The legal responsibility for defamatory AI content should be distributed across the generative AI supply chain. Designers of AI systems have a duty to implement safeguards that reasonably reduce the risk of producing defamatory content. This includes exercising reasonable care in choosing training materials, designing algorithms that detect and filter out potentially harmful material, conducting thorough testing, and continually updating systems in response to new problems. These safeguards should also include duties to warn users about the model’s risk of generating defamatory content. Analogous to other product deployments that create foreseeable risks of harm, designers must account for potential misuse of their technologies to defame others. Similarly, those who prompt generative AI products can be liable for defamation based on the prompts they write and their publication of the resulting defamatory content.
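To make the designer-side duty more concrete, here is a minimal sketch in Python of the kind of output gate such safeguards might include: it flags generated text that asserts unverifiable claims about an identifiable person and attaches a warning before publication. The pattern list, the names RISKY_PATTERNS, Verdict, and check_output, and the disclaimer wording are illustrative assumptions rather than any vendor’s actual implementation; a real system would rely on far more sophisticated classifiers and review processes.

```python
import re
from dataclasses import dataclass

# Naive patterns suggesting unverified factual claims about a person.
# Purely illustrative; a production system would use trained classifiers.
RISKY_PATTERNS = [
    r"\bwas convicted of\b",
    r"\bcommitted (fraud|assault|perjury)\b",
    r"\bis under investigation for\b",
]

DISCLAIMER = (
    "Caution: this text makes factual claims about an identifiable person "
    "that the model cannot verify and may have hallucinated."
)

@dataclass
class Verdict:
    allowed: bool
    text: str
    flags: list

def check_output(generated_text: str) -> Verdict:
    """Flag outputs that assert potentially defamatory facts about people."""
    hits = [p for p in RISKY_PATTERNS if re.search(p, generated_text, re.IGNORECASE)]
    if not hits:
        return Verdict(True, generated_text, [])
    # Rather than silently publishing, attach a warning (a duty-to-warn measure)
    # and keep the matched flags so the incident can feed continual updates.
    return Verdict(True, generated_text + "\n\n[" + DISCLAIMER + "]", hits)

if __name__ == "__main__":
    sample = "Our records show that Jane Doe was convicted of fraud in 2019."
    verdict = check_output(sample)
    print(verdict.text)
    print("flags:", verdict.flags)
```

The point of the sketch is not the crude pattern matching but the design choice it embodies: the safeguard warns the reader and records the incident, supporting both the duty to warn and the duty to keep updating the system described above.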
For human prompters, the existing defamation rules can apply with specific qualifications. Prompters who design a prompt to generate defamatory content should be regarded as acting with actual malice. Furthermore, a reasonable person using an LLM should be aware that it might “hallucinate,” generating false information, particularly when prompted to produce defamatory content. End-users, especially those deploying these systems in contexts with a high risk of generating defamatory content, have a responsibility to exercise reasonable care in designing their prompts and in deciding whether to disseminate the results. A legal framework for defamation must be able to adapt as the technology evolves without chilling innovation; a negligence-based approach offers that flexibility, holding designers and users responsible when they fail to exercise reasonable care.
How can copyright law be modified to adapt to the challenges presented by AI-driven content creation?
Copyright law faces significant challenges due to the unique nature of AI-driven content creation. Traditional concepts like intent to infringe become problematic when applied to AI systems that lack human-like intentions. The core question shifts from the AI itself to the human actors (or organizations) who design, deploy, and use these technologies. One potential adaptation is shifting the focus from individual determinations of infringement to systemic requirements for risk reduction. This involves moving from a case-by-case analysis of whether a particular AI output infringes copyright, to assessing whether AI companies have implemented sufficient measures to minimize the risk of infringement. This approach necessitates a re-evaluation of the fair use doctrine, acknowledging that AI’s capacity to endlessly combine and recombine training data challenges traditional notions of substantial similarity and individualized assessment.
Safe Harbor and Risk Reduction
One proposed solution is a “safe harbor” rule linked to demonstrable efforts at risk reduction. This would involve establishing a set of mandatory steps that AI companies must take to qualify for a fair use defense. For example, the law could require companies to implement filters to prevent AI systems from generating outputs that directly copy substantial portions of copyrighted training materials or are substantially similar to them. Existing technologies like YouTube’s content identification system offer a precedent for this kind of automated detection. Furthermore, addressing prompts that induce copyright violations becomes crucial. This necessitates implementing filters to detect and disable prompts that incorporate unlicensed works from the training dataset, similar to how current AI systems filter for unsafe or offensive content. A final obligation could mirror the Digital Millennium Copyright Act (DMCA): updating filtering mechanisms and removing content in response to specific takedown demands from copyright holders.
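As a rough illustration of the first of these requirements, the following is a minimal sketch, written in Python, of an output filter that withholds completions sharing long verbatim word sequences with copyrighted training material. The shingle length, the threshold, and the names ngrams, build_index, and violates_safe_harbor are assumptions made for the sketch; production systems would use far more robust fingerprinting, closer to the content identification technology mentioned above.

```python
def ngrams(text: str, n: int = 12) -> set:
    """Return the set of n-word shingles in `text` (case-insensitive)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def build_index(copyrighted_docs: list[str], n: int = 12) -> set:
    """Pre-compute shingles for the protected portion of the training set."""
    index: set = set()
    for doc in copyrighted_docs:
        index |= ngrams(doc, n)
    return index

def violates_safe_harbor(output: str, index: set, n: int = 12,
                         max_matches: int = 3) -> bool:
    """Flag outputs sharing more than `max_matches` long shingles with
    protected training material (a crude proxy for verbatim copying)."""
    overlap = ngrams(output, n) & index
    return len(overlap) > max_matches

# Usage: withhold or regenerate the draft when the check trips.
protected = [
    "It was the best of times, it was the worst of times, it was the age "
    "of wisdom, it was the age of foolishness",
]
index = build_index(protected)
draft = ("As Dickens wrote, it was the best of times, it was the worst of times, "
         "it was the age of wisdom, it was the age of foolishness.")
if violates_safe_harbor(draft, index):
    draft = "[output withheld: substantially similar to protected training material]"
print(draft)
```

Under the safe harbor approach described above, documented use of a filter along these lines would count as evidence of reasonable risk reduction, not as a guarantee against every infringing output.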
Furthermore, licensing could play a role in the future of AI. LLM organizations could, with the help of the government, secure permission from copyright holders to use their works in training the models and producing new works. This could come about through bilateral negotiations or by means of an open-source license offered to the world by the dataset compiler. AI organizations might also form a collective rights organization, similar to the collective performance rights organizations (ASCAP, BMI, and SESAC), to facilitate blanket licenses of copyrighted material.
What specific actions can be taken to reduce the risk of copyright infringement in the context of AI?
In the context of copyright, the lack of intent from AI systems requires a shift in how infringement is addressed. Given that AI creates an ongoing risk of copyright infringement at scale, it is crucial to identify specific actions that AI companies can take to manage and mitigate that risk. One possible solution is for LLM companies to secure permission from copyright holders to use their works in training the models and producing new works. Some companies are already securing licenses “either through a bilateral negotiation or by means of an open-source license offered to the world by the dataset compiler.” AI companies might also try to form a collective rights organization, analogous to the collective performance rights organizations (ASCAP, BMI, and SESAC), to facilitate blanket licenses of copyrighted material. Government might help facilitate such a licensing organization.
The law should require that AI companies take a series of reasonable steps that reduce the risk of copyright infringement, even if they cannot completely eliminate it. A fair use defense tied to these requirements is akin to a safe harbor rule. Instead of litigating in each case whether a particular output of a particular AI prompt violated copyright, as we do today, this approach asks whether the AI company has put sufficient efforts into risk reduction. If it has, its practices constitute fair use. Such measures could be integrated into a safe harbor framework, wherein compliance with these risk-reduction protocols shields AI companies from liability. This fundamentally shifts the focus from assessing individual outputs to evaluating the proactive measures taken by AI developers. By emphasizing risk regulation, policymakers can foster innovation while effectively addressing the societal implications of AI-generated content.
Examples of Risk Reduction Measures
Several concrete risk reduction measures can be implemented. One is filtering the large language model’s output to prevent it from copying large chunks of training materials or producing outputs that are substantially similar to copyrighted source material. Some AI programs already implement such filters; GitHub Copilot, for example, includes an option allowing users to block code suggestions that match publicly available code. Another is prompt filtering: because systems like Claude allow users to post documents of up to 75,000 words, AI designers might automatically check and disable prompts that include unlicensed works contained in the dataset. Generative AI systems already implement a number of prompt filters for unsafe or offensive content. Finally, the law might impose an obligation to update filters and remove content in response to specific requests from copyright owners, akin to the takedown requirements of the Digital Millennium Copyright Act. By adopting these measures, AI companies can help create an environment in which innovation is encouraged and the rights of copyright holders are respected.
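To show how the prompt-filtering and takedown obligations could fit together, here is a minimal sketch, again in Python, of a takedown registry that fingerprints works reported by copyright owners and disables prompts containing excerpts of them. The class and method names (TakedownRegistry, register_takedown, prompt_is_blocked), the shingle length, and the hit threshold are hypothetical choices for illustration and do not describe any existing system.

```python
import hashlib

class TakedownRegistry:
    """Tracks fingerprints of works that copyright owners have asked to be filtered."""

    SHINGLE = 12  # length, in words, of each fingerprinted sequence

    def __init__(self) -> None:
        self._fingerprints: set = set()

    @staticmethod
    def _fingerprint(text: str) -> str:
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def register_takedown(self, work_text: str) -> None:
        """Handle a takedown request: fingerprint every shingle of the work
        so the prompt filter can recognize excerpts, not just the full text."""
        words = work_text.lower().split()
        for i in range(len(words) - self.SHINGLE + 1):
            self._fingerprints.add(self._fingerprint(" ".join(words[i:i + self.SHINGLE])))

    def prompt_is_blocked(self, prompt: str, max_hits: int = 3) -> bool:
        """Disable prompts that contain several shingles of any registered work."""
        words = prompt.lower().split()
        hits = 0
        for i in range(len(words) - self.SHINGLE + 1):
            if self._fingerprint(" ".join(words[i:i + self.SHINGLE])) in self._fingerprints:
                hits += 1
                if hits > max_hits:
                    return True
        return False

# Usage: a registered work embedded in a prompt trips the filter.
registry = TakedownRegistry()
registry.register_takedown(
    "once upon a midnight dreary while I pondered weak and weary "
    "over many a quaint and curious volume of forgotten lore"
)
prompt = ("please continue this passage in the same style: once upon a midnight dreary "
          "while I pondered weak and weary over many a quaint and curious volume of forgotten lore")
print(registry.prompt_is_blocked(prompt))  # True: the prompt embeds the registered excerpt
```

A deployed version would pair this prompt-side check with the output filter sketched earlier and with a process for verifying takedown requests before honoring them.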
Navigating the complex landscape of AI regulation requires innovative approaches that acknowledge the technology’s unique attributes. By strategically ascribing intentions and consistently applying objective standards, we can establish clear lines of responsibility. This allows us to address the ethical and legal challenges posed by AI systems without hindering their potential for progress. Ultimately, responsible innovation hinges on holding accountable those who develop and deploy these powerful tools, ensuring that their actions align with societal values and legal norms.
