Artificial intelligence has the potential to revolutionize how medicines are discovered and to change how hospitals deliver care to patients. But AI also carries the risk of irreparable harm and of perpetuating historic inequities.
Would-be health care AI regulators have been spinning in circles trying to figure out how to use AI safely. Industry bodies, investors, Congress, and federal agencies cannot agree on which voluntary AI validation frameworks will help ensure that patients are safe. These questions have pitted lawmakers against the FDA, and venture capitalists against the Coalition for Health AI (CHAI) and its Big Tech partners.
The National Academies on Tuesday zoomed out, discussing how to manage AI risk across all industries. At the event, one in a series of workshops building on the National Institute of Standards and Technology's (NIST) AI Risk Management Framework, speakers largely rejected the notion that AI is a beast so different from other technologies that it needs entirely new approaches.