Artificial intelligence (AI) is increasingly used in healthcare, from detecting atrial fibrillation to predicting in-hospital mortality, and much more. However, with more than 60 AI-based algorithms and medical devices now approved by the U.S. Food and Drug Administration (FDA) and regulatory policies still evolving, questions remain about who should be held liable for medical errors resulting from care delivered jointly by physicians and AI.

In a new Journal of the American Medical Informatics Association study, Dr. Dhruv Khullar, assistant professor of population health sciences; Dr. Lawrence Casalino, professor of population health sciences; and Yuting Qian, population health sciences research coordinator, joined Yale School of Medicine colleagues to examine how the U.S. public and physicians view these cases. The survey asked respondents whether responsibility should fall on the physician making the clinical decision, the vendor or company licensing the algorithm, the healthcare organization purchasing the algorithm, or the FDA or other governmental entity approving the algorithm for clinical use.

Overall, the public was significantly more likely to believe that physicians should be held responsible, while physicians were more likely to believe that the company or vendor selling the algorithm and the healthcare organization purchasing it should be liable. Future work should examine the reasons behind these differences in perception, and healthcare leaders should be aware that the differences exist as they try to bridge the disconnect.