Abstract
This paper proposes addressing algorithmic harms from digital products by expanding the traditional legal liability principles of negligence and product liability to cover harms caused by algorithms. It argues that developers should owe a duty of care (do not move fast and break things) to avoid foreseeable algorithmic harms when designing digital products (first, do no harm). Breaching this duty through defective digital goods and services that cause legally provable harm would expose developers to regulatory complaints and product liability litigation. Developers who demonstrate use of best practices to mitigate foreseeable harms, during development or afterward, would be granted safe harbor and time to remediate unanticipated issues. Harms and penalties would be defined on a matrix proportional to severity, ranging from individual to societal. Liability insurance could incentivize developers to implement harm-prevention standards in order to qualify for coverage. While this approach would likely slow innovation in some areas, it would also spark new solutions for preventing harms. Refining the details and balancing stakeholder interests will require expertise across technology, policy, law, insurance, and ethics. This accountability model provides technology with an ethical imperative of "first, do no harm," acknowledging the real costs of unethical algorithms.