Researchers at the Icahn School of Medicine at Mount Sinai have developed AEquity, a method to detect and mitigate bias in datasets used to train machine-learning algorithms. By addressing imbalances at the dataset level, AEquity helps ensure AI models are more accurate, fair, and representative across diverse patient populations.
AI in health care is only as reliable as the data it’s trained on. Without equitable data, algorithms risk amplifying inaccuracies and disparities in diagnosis and treatment. AEquity provides a way forward—offering developers, researchers, and health systems a practical tool to improve trust and outcomes in health care AI.
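To make the idea of a dataset-level imbalance check concrete, here is a minimal, generic sketch in Python. It is not the AEquity method itself, and the DataFrame, subgroup labels, and inverse-frequency reweighting step are all illustrative assumptions; they simply show the kind of representation audit and mitigation the post describes.

```python
# Illustrative sketch only: a generic dataset-level imbalance check and a
# simple reweighting mitigation. This is NOT AEquity's algorithm; the
# DataFrame `df`, the "group" column, and the weighting scheme are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "label": [1, 0, 1, 0, 1, 0, 0, 1],
    "group": ["A", "A", "A", "A", "A", "B", "B", "B"],  # e.g., demographic subgroup
})

# 1. Detect representation imbalance: each subgroup's share of the dataset.
shares = df["group"].value_counts(normalize=True)
print("Subgroup shares:\n", shares)

# 2. One standard mitigation: inverse-frequency sample weights so every
#    subgroup contributes equally during training. (AEquity's actual guidance
#    may differ, e.g., targeted additional data collection.)
df["sample_weight"] = df["group"].map(1.0 / (shares * len(shares)))
print(df)
```

A reweighting step like this is only one possible response; the same audit could instead point toward collecting more data from under-represented groups before training.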
Read more about the study here: https://t.co/kPgrTe74sT