The use of artificial intelligence (AI) and machine learning algorithms in healthcare is poised to expand significantly over the next few years. However, beyond the investment strategies and technological foundations lie critical questions surrounding the ethical and responsible use of AI.
In an attempt to clarify its own position and contribute to the debate, Henk van Houten, executive vice president and chief technology officer (CTO) of Royal Philips, has published a list of five guiding principles for the design and responsible use of AI in healthcare and personal health applications.
The five principles – well-being, oversight, robustness, fairness, and transparency – all stem from the fundamental view that AI-enabled solutions should complement and benefit consumers, patients, and society as a whole.
Firstly, well-being must be front of mind when designing healthcare AI solutions, van Houten argues, not only to help relieve overstretched healthcare systems but, more importantly, to act as a means of delivering proactive care, informing and supporting healthy living over the course of an individual’s entire life.
In relation to oversight, van Houten called for the proper validation and interpretation of AI-generated insights through the involvement of AI engineers, data scientists, and medical experts.
On robustness, a strong set of control mechanisms is seen as crucial not only to build trust in AI among patients and clinicians but also to prevent unintended or intentional misuse.
The fourth principle, fairness, calls for ensuring that bias and discrimination are prevented in AI-powered solutions, a problem currently under discussion in the U.S.
Finally, Philips sees the collaborative development of AI-enabled solutions – between providers, payers, patients, researchers, and regulators – as a way of ensuring optimal transparency.