In the third blog in our series on artificial intelligence (AI) and machine learning (ML)-driven predictive models (data analytics tools or software) in health care, we discussed some potential risks (sometimes referred to as model harms) related to these emerging technologies and how those risks could lead to adverse impacts or negative outcomes. Given these potential risks, some have questioned whether they can trust the use of these technologies in health care.
We are encouraged to see that some stakeholders are demonstrating that a predictive model is fair, appropriate, valid, effective, and safe (FAVES), rather than amplifying biases or harms. Some stakeholders are indicating this through descriptions of the processes used to develop the model and minimize risks, evaluation of the model’s performance (often described in peer-reviewed literature and according to nascent reporting guidelines), and clear description of how and when the model should be used. Too often, however, this information is unavailable to purchasers, implementers, and users, leaving them without what they need to assess the quality of predictive models, including when these models are embedded in or integrated with certified health IT…
There are three classic dynamics we’d expect to see in a “market for lemons,” and we are watching for signs of each in the market for predictive models in health care:
- Purchaser or User Gets a Real Lemon: Potential purchasers or model users are unsure if a model is of good quality, so they end up using bad models or using models in ways that are not appropriate (e.g., using a model outside the environment for which it was designed, or for a task or context to which it is ill-suited). Famously, the misuse of models and under-appreciation of model risks led to over-reliance on models to estimate risks of default for mortgage-backed securities and contributed directly to the 2008 financial crisis in the United States. In the last few years, we’ve seen high-profile instances in health care in which users discovered only belatedly that models they used or acquired were not accurate or were biased…




