We study the design of optimal liability sharing rules when a human user's reliance on an AI prediction may cause external damage. To do so, we set up a game-theoretic model in which an AI manufacturer chooses the accuracy with which the AI is developed (which increases the reliability of its prediction) and the price at which it is distributed. The user then decides whether to buy the AI. The AI's prediction provides a signal about the state of the world, while the user chooses her effort to discover the payoffs in each possible state of the world. The user may be susceptible to an automation bias that leads her to overestimate the algorithm's accuracy (overestimation bias). In the absence of an automation bias, we find that full user liability is optimal. When the user is prone to an overestimation bias, however, increasing the share of liability borne by the AI manufacturer can be beneficial for two reasons. First, it reduces the rent that the manufacturer can extract by exploiting the user's overestimation bias through underinvestment or overinvestment in AI accuracy. Second, because of the way algorithm accuracy and user effort interact, a larger manufacturer share may incentivize the user to increase her otherwise too low judgment effort.