Prediction, human decision and liability rules

We study the design of optimal liability rules when the use of a prediction by a human operator (she) may generate an external harm, a setting that is common when artificial intelligence (AI) is used to make a decision. An AI manufacturer (he) chooses the level of quality with which the algorithm is developed and the price at which it is distributed. The AI supplies the human operator who buys it with a prediction about the state of the world; she can then decide whether to exert a judgment effort to learn the payoffs in each possible state of the world. We show that when the human operator overestimates the algorithm’s accuracy (overestimation bias), imposing a strict liability rule on her is not optimal, because the AI manufacturer will exploit the bias by under-investing in the quality of the algorithm. Conversely, imposing a strict liability rule on the AI manufacturer may not be optimal either, since it has the adverse effect of discouraging the human operator from exerting her judgment effort. We characterize the liability-sharing rule that achieves the highest possible quality level of the algorithm while ensuring that the human operator exerts a judgment effort. We then show that, when it can be used, a negligence rule generally achieves the first-best optimum. To conclude, we discuss the pros and cons of each type of liability rule.
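The logic of the first result can be made concrete with a stylized sketch. The snippet below is illustrative only, not the paper’s actual model: the valuation function v, the cost function c, the perceived accuracy q-hat, and the quality floor q_min are assumptions introduced here for exposition.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Stylized sketch only -- the functional forms v, c and the bound
% q_{\min} are illustrative assumptions, not the paper's model.
Suppose the manufacturer chooses quality $q \in [q_{\min}, 1]$ at cost
$c(q)$ with $c'(q) > 0$, and the biased operator is willing to pay a
price $v(\hat q)$ that depends only on her \emph{perceived} accuracy
$\hat q > q$. Under strict liability on the operator, the manufacturer
solves
\[
  \max_{q \in [q_{\min}, 1]} \; v(\hat q) - c(q),
\]
and since revenue does not vary with the true quality $q$, he picks the
lowest feasible quality, $q^{*} = q_{\min}$: the overestimation bias
severs the link between quality and price, so operator-side strict
liability cannot discipline the manufacturer's investment.
\end{document}

In this sketch the bias matters only through the price channel; the abstract’s second result (that manufacturer-side strict liability crowds out the operator’s judgment effort) would require adding the operator’s effort decision, which is left out here.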
JEL: K13; K4
Keywords: Prediction, Algorithm, Liability rules, Decision-making, Artificial intelligence, Cognitive bias, Judgment