Q&A: When Interactions With AI Cause Harm, Who Is Responsible?


Artificial intelligence has become a regular part of our daily lives, with millions of people using the technology for everything from preparing grocery lists to seeking medical advice or therapy. People are relying on it for help with decision-making, problem-solving and learning. But it has also become clear that the technology is far from perfect. And as people put more trust in these tools, new questions are arising about who bears responsibility when they fail, or when their use results in harmful or even tragic situations.

Litigation is beginning to bring greater clarity to the legal challenges posed by AI adoption. Because there has been little regulation of the technology or of the companies that use it, experts suggest the courts may be the front line in answering the question of responsibility.

Anat Lior, JSD, an assistant professor at Drexel’s Kline School of Law, is an expert in AI governance and liability, intellectual property law, and insurance and emerging-technology laws related to AI. To help unpack the legal issues surrounding this new technology, Lior shared her insights with the Drexel News Blog.

Who is currently held liable if an artificial intelligence program causes harm?

Because most current AI-related tort disputes settle before reaching judicial decisions, there remains no clear consensus on which liability framework should apply, or who should ultimately bear responsibility when AI causes harm. What is clear is that AI technology itself cannot be held liable; responsibility must rest with the human or other legal entity behind it, and liability serves as a tool to shape their behavior and reduce risks. There is always a human in the background who can be incentivized via liability to mitigate potential harms.

Different scholars approach this question in very different ways. Some favor a strict liability model, placing responsibility on the manufacturers or deployers of AI, regardless of the level of care they exercised.

Others prefer a negligence-based framework, under which developers, deployers, or users of AI are liable only if they acted unreasonably under the circumstances, meaning they fell below the applicable standard of care.

Still others opt for a product liability regime, seeing AI as just another product on the market. Under strict liability, accountability is broader and can push companies to release only the safest versions of their systems. Liability under a negligence regime, by contrast, is narrower and may shield companies that acted as prudent entities, which appeals to scholars concerned that strict liability could hinder innovation.

Additional proposals include statutory safe-harbor regimes, where companies following designated guidelines would be insulated from liability.

How does the nature of AI as a “black-box” technology challenge the current tort law system when it comes to assigning responsibility?

AI’s unique characteristics are putting pressure on longstanding tort concepts like foreseeability, reasonableness and causation. Because many AI systems lack explainability, it can be difficult to establish a clear causal link between the system’s behavior and the resulting harm, making negligence claims especially challenging, particularly when assessing whether the harm was truly foreseeable.

Even so, tort law has repeatedly shown its ability to evolve alongside new technologies, and it is likely to do so again in the context of AI.

How is AI being regulated?

Given the absence of federal regulation, many U.S. states are developing, or have already enacted, their own AI laws to address potential harms associated with the technology.

Colorado and California offer two leading examples, each taking a different path: Colorado has adopted a comprehensive, consumer-focused framework aimed at preventing discriminatory outcomes, while California has pursued a series of more targeted bills addressing issues such as transparency, deepfakes, and employment-related discrimination. Nearly every state has engaged in some level of discussion around AI regulation, but reaching agreement on the appropriate scope and structure of such laws remains difficult.

Some states prefer to give technology room to grow, allowing innovation to advance without the constraints of strict regulation. They view AI’s significant benefits as outweighing its potential risks. Others believe that existing legal frameworks may already be adequate to address harms associated with AI. In any case, the law often lags behind emerging technologies. In the meantime, softer regulatory tools, such as liability insurance and industry standards, can help bridge the gap until a broader consensus is reached on appropriate regulatory approaches.

What have we learned from AI copyright lawsuits?

Copyright law sits at the heart of one of the major legal debates surrounding AI. Numerous ongoing lawsuits against companies that train and deploy generative AI systems, such as Gemini and ChatGPT, are testing the limits of the current copyright framework. While it is still too early to draw firm conclusions, core doctrines, like fair use, direct and indirect infringement, and authorship, are all being reconsidered and reshaped as AI increasingly influences creative practices that were once understood to be solely human.

Reporters interested in speaking with Lior should contact Mike Tuberosa, assistant director, News & Media Relations at mt85@drexel.edu or 215.895.2705.
