This is the second part of a three-part series from OKRA.ai Chief of AI Science & Insights, Yahya Anvar.
Artificial intelligence (AI) in medical affairs is not a novelty. It has evolved significantly since its inception decades ago, becoming a cornerstone of the fundamentally evidence-based healthcare sector. The historical journey of AI in healthcare is marked by numerous milestones, from its first use in diagnosing illness in the 1960s to the current integration of advanced machine learning algorithms into personalized treatment plans.
In healthcare, clarity and evidence are as crucial as understanding the ingredients in a recipe before attempting a complex dish. Just as a cook might ask, “Why does this recipe require this specific ingredient?”, healthcare professionals seek to understand the “ingredients” behind AI predictions. This reflects how much the field values understanding the reasons behind a conclusion, not merely confidence that it is correct. Any AI solution implemented in healthcare must therefore be clear and understandable. And because collaboration is essential in healthcare, AI programs need to work smoothly with other systems and provide insights grounded in solid evidence that can inform the decision-making process.
This brings us to the art of communication within the realm of AI. How do we ensure that the messages delivered by AI are not only received but also understood and trusted by their intended audience? The challenge lies not only in diversifying and tailoring these messages, but also in humanizing AI so that its outputs resonate with the specific needs and contexts of healthcare professionals and patients. By bringing explainability and interpretability to every AI prediction, we make the decision-making process understandable to humans. Addressing the “why” through the generation of evidence and reasoning behind each prediction ensures that the messages delivered are received, trusted, and actionable.
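To make the idea of explainable predictions concrete, the sketch below shows one simple form it can take: decomposing a linear model's score into per-feature contributions, so the "why" can be reported alongside the prediction itself. This is a hypothetical illustration (the weights, feature names, and function are invented for this example, not an actual OKRA or Envision Pharma Group implementation).

```python
import math

# Hypothetical linear model scoring the likelihood that an engagement is
# beneficial. Weights and feature names are illustrative only.
WEIGHTS = {"prior_engagements": 0.8, "recent_publications": 0.5, "specialty_match": 1.2}
BIAS = -1.0

def predict_with_explanation(features):
    """Return a probability plus each feature's contribution to the score.

    The contributions are the 'why' behind the prediction: each named
    factor's share of the overall score.
    """
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link to map score -> [0, 1]
    return probability, contributions

prob, why = predict_with_explanation(
    {"prior_engagements": 2, "recent_publications": 1, "specialty_match": 1}
)
```

A report built from `why` can then state, in plain language, which factors drove the recommendation and by how much, rather than presenting the probability as an unexplained number.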
With its ability to scale and process vast amounts of data, technology adds new dimensions to this task. It allows us to evaluate various outcomes and provide concise recommendations, addressing the “so what” question by suggesting practical actions to achieve desired goals within realistic and well-defined constraints. Yet, at the end of the day, it’s all about empowering people through the synergy of human and machine intelligence, with the human element ultimately guiding which insights to act upon and how to implement them effectively.
Envision Pharma Group fosters trust in AI through stringent ethical guidelines, for example by contributing significantly to the Ethics Guidelines for Trustworthy AI report published by the EU High-Level Expert Group in 2019. This commitment extends to ensuring that AI in healthcare upholds privacy, security, and fairness, embodying ethical AI through robust data protection measures, bias mitigation, and adherence to laws such as GDPR – Europe’s General Data Protection Regulation, which sets guidelines for the collection and processing of personal information from EU citizens.
Trust-building is a dynamic journey that relies on constant feedback, enhancement, and stakeholder interaction, guided by ethical boards to maintain the integrity of AI and keep its insights actionable. Key practices include:
- Regular performance monitoring: Continuously assessing and refining AI systems’ accuracy, reliability, and usability based on real-world outcomes and user feedback
- Transparent communication: Keeping stakeholders informed about updates, improvements, and the limitations of AI systems, fostering an environment of openness and continuous learning
- Stakeholder engagement: Encouraging ongoing dialogue between AI developers, healthcare professionals, and regulatory bodies to address concerns, share best practices, and explore new opportunities for AI to enhance patient care
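The first of these practices lends itself to simple automation. As a hypothetical sketch (the window size, baseline, tolerance, and function names are illustrative assumptions, not a standard), a monitoring job might compare recent predictions against observed outcomes and flag the system for human review when accuracy degrades:

```python
def rolling_accuracy(predictions, outcomes, window=100):
    """Accuracy over the most recent `window` prediction/outcome pairs."""
    recent = list(zip(predictions, outcomes))[-window:]
    correct = sum(1 for pred, actual in recent if pred == actual)
    return correct / len(recent)

def needs_review(predictions, outcomes, baseline=0.90, tolerance=0.05):
    """Flag the model for human review when recent accuracy falls
    more than `tolerance` below the expected baseline."""
    return rolling_accuracy(predictions, outcomes) < baseline - tolerance
```

The point is not the specific thresholds but the loop itself: real-world outcomes feed back into an automated check, and a human, not the system, decides what to do when the check fails.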
AI in healthcare should complement, not replace, human skills, providing professionals with insights that align with best practices and patient needs. To accomplish this, all relevant stakeholders – healthcare staff, patients, and AI developers – must collaborate to ensure AI programs address their unmet needs.