The practice of medicine is currently prioritizing the development, validation, and implementation of machine learning algorithms. Measuring how well these algorithms perform once implemented in practice is an area of critical need. When model performance changes after implementation, a phenomenon generally labeled model drift, careful examination of the putative reasons for the change is needed. This article presents two scenarios illustrating how model drift can reflect both favorable and unfavorable changes in performance.