A new study indicates that a widely used algorithm for predicting which patients should receive additional care disproportionately excludes Black patients, and may have left tens of thousands without adequate medical care.
Artificial intelligence experts have warned for years that bias in automated systems could cause unintended harm in the future. But it's also happening with technology in use right now. In a recent article published in Science, researchers lay out how a common algorithm used in hospitals to assess whether a patient needs additional care was generating heavily biased recommendations.
Read more: feedproxy.google.com