Transforming healthcare through just, equitable, and quality-driven artificial intelligence solutions in South Asia
Introduction
The global healthcare landscape is undergoing a dynamic shift, with growing service needs evident worldwide, particularly in developing countries. Artificial Intelligence (AI) has emerged as a potential game-changer, attracting significant global interest. The World Health Organization’s (WHO) 2020–2025 digital health strategy underscores this significance, advocating for international collaboration, national digital health plans, robust governance frameworks, and patient-centered care empowered by digital technologies1. However, a critical disparity exists. While AI solutions hold immense promise for revolutionizing healthcare delivery, especially in resource-constrained low- and middle-income countries (LMICs), their current development and deployment are largely concentrated in high-income nations2. This often overlooks the population needs and deployment requirements within LMICs. The WHO’s strategy itself acknowledges this gap, highlighting the importance of inclusive development to ensure AI serves the global population effectively and equitably3.
A range of critical factors could delay the effective implementation of AI tools and platforms in health systems within LMICs. First, the lack of established governance structures for AI development and deployment can lead to ethical concerns, algorithmic bias, and a dearth of transparency in decision-making. Additionally, existing infrastructure limitations in LMICs are a major barrier. The WHO highlights how government officials might resist change, embracing traditional methods instead of adopting newer AI technologies. Further complicating deployment, limited funding and a shortage of digitally skilled healthcare professionals may delay AI use in LMICs4,5. The most significant challenge rests with the availability of data, as AI algorithms rely heavily on large quantities of data to learn and improve reliability. Because health data systems in LMICs are often fragmented, limited patient records and data privacy concerns hinder the creation of the datasets necessary for the development and use of AI-enabled tools. Security and privacy, though not unique to AI, are critical concerns with digital health solutions. Data breaches, unauthorized access to patient information, and potential algorithmic bias against marginalized populations are all potential pitfalls highlighted by the WHO5,6. Building robust data security measures and fostering trust with both healthcare providers and patients are crucial for successful development and deployment.
Given these challenges, proactive engagement from LMIC governments, healthcare institutions, private companies, and international development organizations is essential. Open discussions on responsible AI development, investment in digital infrastructure, workforce training, and collaboration in creating localized datasets will be important for fostering a digital environment in LMICs7,8. By understanding the challenges and their localized solutions, LMICs can leverage the power of AI to improve healthcare access, diagnostic accuracy, disease surveillance, and, in due course, population health outcomes. This necessitates a collective and collaborative effort to ensure AI solutions are developed with inclusivity and equity at their core, ultimately empowering LMICs to participate in, and in many cases lead, the global digital health revolution.
In this paper, we aim to explore the factors that enable or hinder the broader use of AI solutions in LMIC healthcare settings. We present the workshop discussions, examine how these factors may shape the development and deployment of AI, and expand upon these enablers and barriers, providing a South Asian perspective on responsible AI integration.
AI-Sarosh, as part of the AI for Global Health initiative of the International Development Research Centre (IDRC), serves as a knowledge hub specifically focused on harnessing the potential of AI to address critical sexual, reproductive, and maternal health (SRMH) conditions in South Asia. AI-Sarosh hosted a co-design workshop after grants were awarded to nine projects from eight organizations. The workshop took place in Colombo, Sri Lanka from 13 to 17 September 2023 to bring together the AI-Sarosh secretariat and all the grantees.
The Colombo workshop convened 40 participants representing diverse backgrounds, including AI researchers, healthcare professionals, Non-Governmental Organization (NGO) representatives, and policymakers from Bangladesh, India, Nepal, Pakistan, Sri Lanka, and beyond. The session structure involved keynote presentations, panel discussions, and breakout groups focusing on challenges in SRMH. We employed structured group dialogues to identify enablers and barriers to AI adoption. Discussions were facilitated by AI-Sarosh staff. Insights captured in these sessions informed the thematic headings and recommendations outlined in this manuscript. No ethical approval was required for the study as it was a co-design workshop. Consent for publication was obtained from the participants.
Health system opportunities and challenges
The challenge of infrastructure
The effectiveness and impact of healthcare interventions are significantly influenced by the scale of their adoption and coverage within health systems. However, in many LMICs, weak or nonexistent infrastructure poses a substantial barrier. This deficiency attenuates the potential impact of developing and implementing AI tools that cater to population health needs and health workforce capacity. Given that AI heavily relies on data from diverse geographic and demographic populations, limited infrastructure not only hampers the design of these tools but also impedes their subsequent acceptance. To ensure the successful implementation of digital health platforms within these healthcare systems, several key considerations must be prioritized: seamless integration with existing infrastructure, adaptability to address future needs, robust data security, interoperability with other healthcare providers, user-friendliness, and cost-effectiveness9,10,11.
Emphasizing the need to find a digital platform within the public health system to host AI solutions for SRMH is crucial. While much of the discussion on integrating and scaling interventions revolves around finances and human resource allocation, a significant challenge also lies in providing comprehensive education to a large healthcare workforce to ensure the efficient utilization of AI technology, a demand that requires substantial resources. As AI represents a new horizon in the health sector, significant advocacy efforts are needed to encourage the adoption of AI solutions, alongside questions about resource allocation for training and data system upgrades. Once an effective intervention is developed, addressing sustainability is also crucial for long-term success. AI champions within government systems, NGOs, or donor agencies are imperative to facilitate policy translation, adoption, implementation, and acceptance of AI policies. Undoubtedly, the successful development of AI solutions for the healthcare sector requires several key steps for effective integration into the complex, multilayered health system, with a specific focus on the public sector and the importance of robust digital infrastructure capable of supporting such innovations12.
Infrastructure-related challenges can be overcome by encouraging public-private partnerships to promote affordable broadband expansion and cloud-based data storage for healthcare facilities. Additionally, the development of “offline-first” AI applications, which collect and process data locally, can be prioritized (see the sketch below). Likewise, pilot projects can be implemented in urban clinics where the infrastructure is relatively developed, and scale-up projects can be deployed as stand-alone offline applications. By carefully considering phased rollouts and allocating dedicated budgets for digital infrastructure, LMICs can gradually build a supportive environment for AI integration into healthcare.
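To make the “offline-first” pattern concrete, the following minimal sketch persists records to a local SQLite store immediately and pushes them to a remote server only when connectivity allows. It is illustrative only; the table layout and the `upload` callable are assumptions, not part of any specific AI-Sarosh project.

```python
import json
import sqlite3

def init_store(path="clinic_local.db"):
    # Local store survives power and network outages at the facility.
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS records (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        payload TEXT NOT NULL,
        synced INTEGER NOT NULL DEFAULT 0)""")
    con.commit()
    return con

def save_record(con, record):
    # Writes always succeed locally, regardless of connectivity.
    con.execute("INSERT INTO records (payload) VALUES (?)",
                (json.dumps(record),))
    con.commit()

def sync_pending(con, upload):
    # `upload` is a hypothetical callable that pushes one record to the
    # server and returns True on success; a real deployment would swap
    # in an HTTP client with retries and authentication.
    rows = con.execute(
        "SELECT id, payload FROM records WHERE synced = 0").fetchall()
    for row_id, payload in rows:
        if upload(json.loads(payload)):
            con.execute("UPDATE records SET synced = 1 WHERE id = ?",
                        (row_id,))
    con.commit()
```

In practice, `sync_pending` would be invoked opportunistically whenever the clinic regains connectivity, so clinical workflows never block on the network.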
Health workers and task shifting: AI integration to increase accountability
AI presents a multifaceted opportunity to transform the healthcare workforce. Its potential impact extends beyond the automation of routine tasks and improved access to care. AI can also contribute to a shift in the roles and responsibilities of healthcare professionals, potentially increasing accountability and efficiency13,14. Notably, by bridging patient-physician information gaps and facilitating data-driven decision-making, AI empowers healthcare providers to deliver more targeted interventions and enhance patient safety through real-time monitoring and early intervention capabilities14. However, the successful implementation of AI platforms requires collaborative effort. Effective partnerships between health systems, investors, and technology entrepreneurs are crucial for translating research into practical guidelines and facilitating task-shifting to less specialized workers15,16. Furthermore, bridging the gap between innovators, specialists, and healthcare implementers is essential to ensure that AI solutions are not only technically sound but also readily adopted within the health delivery system. In the context of LMICs, where healthcare systems often heavily cater to women and children, it is particularly important to emphasize the role of women in AI development. Representation of women in AI research and development can help ensure that AI solutions are designed with gender sensitivity in mind, promoting smoother adoption and use by healthcare providers17. Fostering a deeper understanding of how AI solutions can improve efficiency and patient care is paramount to successful implementation in LMICs18.
Ubiquitous and seamless AI integration in healthcare
The proposition of healthcare becoming as accessible as an Automated Teller Machine (ATM), where individuals can effortlessly obtain health solutions, raises critical questions regarding the feasibility and challenges of integrating various AI solutions into existing platforms. This leads to the consideration of whether such integration can create a more seamless and accessible healthcare system.
Integrating AI solutions offers several potential benefits to healthcare systems, including enhanced diagnostic accuracy, improved operational efficiency, a more positive patient experience, advancements in research and development, and increased cost-effectiveness19. However, the primary challenge in leveraging AI within healthcare systems lies in implementing the necessary changes20. Several factors can hinder effective AI integration. First, it often necessitates upgrades to existing technical infrastructure, such as cloud computing capabilities and data storage systems, along with ensuring that patient data can be properly decoded. Second, the accuracy of AI solutions is highly dependent on high-quality data, and ensuring data quality and accessibility presents a significant challenge21. Third, integrating AI requires additional resources. This may involve worker training, recruitment of AI specialists, and workflow adjustments, all of which contribute to the challenge of developing the necessary organizational capacity22. Finally, widespread adoption hinges on regulatory approval. Clear regulatory guidelines for the use of AI tools in healthcare can address concerns and facilitate seamless integration within the healthcare system23. In the long run, successful AI integration in healthcare systems requires top-down policy changes that facilitate this process and avoid duplication of efforts across different countries. Standardizing approaches can enable the replication of successful solutions, fostering wider adoption24,25. By providing user-friendly tools, AI development can be made more accessible to software engineers and programmers, democratizing AI and transforming it into a widely accessible tool, akin to an Excel spreadsheet. This would empower a broader range of developers to contribute to healthcare innovation.
Safety issues in AI solutions
Expanding upon the considerable benefits of AI in healthcare, ensuring its safe implementation necessitates meticulous examination of potential safety risks. Of particular concern is the quality of data, which can significantly impact the performance of AI algorithms, potentially leading to biased outcomes, inaccurate diagnoses, and ultimately harm to patients26,27,28. Furthermore, the inherent complexity of AI systems, often characterized as “black boxes,” raises apprehensions regarding transparency and explainability. Healthcare practitioners must comprehend the rationale behind AI-generated recommendations to make well-informed decisions. To address these concerns, a comprehensive approach is of vital importance. As a first step, AI systems intended for healthcare applications should be developed with robustness, transparency, accuracy, and reliability as primary objectives. Adhering to established standards for trustworthy AI development is crucial to ensure the intended functionality of these systems and to mitigate the risk of unintended harm28.
In the second step, safety considerations must be customized according to the specific context of AI utilization within healthcare settings. The potential repercussions of errors may vary depending on the application. For instance, a false positive in tuberculosis screening may result in unnecessary treatment, whereas a false negative could have life-threatening consequences29. Therefore, a rigorous evaluation of AI performance and safety within the context of its intended application is paramount30. In the next step, seamless integration of AI systems into existing healthcare workflows is essential. AI solutions should not disrupt established protocols or confuse healthcare professionals. Safety considerations should be integrated throughout the entirety of the healthcare service delivery process, ensuring clear decision-making at each stage31.
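To make the screening trade-off above concrete, the short sketch below computes sensitivity and specificity at two decision thresholds. The scores and labels are synthetic and purely illustrative, not drawn from any real tuberculosis dataset.

```python
import numpy as np

# Hypothetical model scores and ground-truth labels (1 = disease present).
scores = np.array([0.95, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])

def sens_spec(scores, labels, threshold):
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))   # true positives
    fn = np.sum(~pred & (labels == 1))  # false negatives (missed cases)
    tn = np.sum(~pred & (labels == 0))  # true negatives
    fp = np.sum(pred & (labels == 0))   # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Lowering the threshold trades specificity (more unnecessary treatment)
# for sensitivity (fewer life-threatening missed cases).
for t in (0.5, 0.25):
    sens, spec = sens_spec(scores, labels, t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Which threshold is appropriate is exactly the context-specific safety judgment described above, and it cannot be made by the algorithm alone.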
While maintaining strict safety measures is vital, it is equally important to underscore AI’s potential for transforming healthcare efficiency and outcomes. For example, AI-driven decision support tools can reduce patient wait times and improve diagnostic accuracy by rapidly analyzing medical images or lab results. Predictive analytics can enhance resource allocation by forecasting patient influx or disease outbreaks, thereby optimizing staffing and supply chains. In mental health and maternal care, AI chatbots and telemedicine applications can offer round-the-clock support, reaching underserved populations quickly. By spotlighting these proven benefits alongside robust safety measures, AI implementations can be designed to deliver measurable value while minimizing risk.
Bias in development and overfitting
While AI holds significant promise for enhancing healthcare outcomes, its advancement and deployment are susceptible to biases and overfitting, posing notable challenges to its efficacy and fairness. Bias within AI systems has the potential to exacerbate existing health disparities by perpetuating societal inequities based on factors such as race, gender, or socioeconomic status32. Such biases can infiltrate AI systems at various stages, ranging from the data used for training to the design of the algorithms themselves33. Effectively mitigating bias demands a multifaceted strategy that encompasses advancements in training methodologies, algorithmic design, and meticulous data curation. Moreover, it necessitates a concerted effort to address societal biases ingrained within the data itself31. Overfitting is another obstacle faced during the development of AI models in healthcare. Overfitting arises when an AI model performs exceptionally well on the data it was trained on but struggles to generalize effectively to new, unseen data, thereby compromising its reliability in real-world applications34.
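As a minimal illustration of overfitting, the sketch below, using synthetic data and an arbitrary model choice, contrasts an unconstrained decision tree with a depth-limited one; a large gap between training and validation accuracy is the classic signature described above.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a clinical dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            random_state=0)

# An unconstrained tree can memorize the training data ...
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# ... while a depth-limited tree trades training accuracy for
# generalization to unseen records.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

for name, model in [("unconstrained", deep), ("depth-limited", shallow)]:
    print(name,
          "train:", round(model.score(X_tr, y_tr), 2),
          "validation:", round(model.score(X_val, y_val), 2))
```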
Societal bias is another major challenge in the adoption of AI systems in healthcare, and it can arise at multiple stages of an AI project. To overcome this, a multi-pronged strategy must be implemented across all phases of the project. For instance, during data collection, it must be ensured that a broad spectrum of demographic groups is represented in the dataset. Further, implementing bias detection and mitigation techniques (such as over-sampling, sketched below) and transparent reporting practices (such as the use of model cards) can help identify potential biases early. Importantly, collaboration with local communities to understand local norms and historical inequalities can also help reduce unintended biases.
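As one minimal sketch of the over-sampling technique mentioned above, the code below duplicates records from an under-represented demographic group until group sizes match; the grouping variable and data are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_group(X, group, target_group):
    # Duplicate (with replacement) rows from the under-represented
    # group until it matches the size of the largest group.
    counts = {g: int(np.sum(group == g)) for g in np.unique(group)}
    n_max = max(counts.values())
    idx = np.where(group == target_group)[0]
    extra = rng.choice(idx, size=n_max - len(idx), replace=True)
    keep = np.concatenate([np.arange(len(X)), extra])
    return X[keep], group[keep]

# Synthetic example: 90 records from group "A", only 10 from group "B".
X = rng.normal(size=(100, 5))
group = np.array(["A"] * 90 + ["B"] * 10)
X_bal, group_bal = oversample_group(X, group, target_group="B")
print({g: int(np.sum(group_bal == g)) for g in np.unique(group_bal)})
# -> {'A': 90, 'B': 90}
```

Over-sampling alone cannot remove biases encoded in how the data were collected, which is why the transparent reporting and community collaboration described above remain essential.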
The emphasis on equity in AI development underscores the gravity of these challenges. It is of utmost importance that AI solutions should be tested across diverse populations and contexts to ensure impartial outcomes35. Furthermore, the advocacy for assembling a multidisciplinary team comprising domain experts, medical professionals, and AI specialists bolsters this assertion. Such a diverse team possesses the requisite expertise to detect and rectify potential biases early in the development phase36.
Responsible AI: Equity, Justice, and Quality in Health Care
While AI holds immense promise for revolutionizing healthcare delivery, ensuring its equitable and responsible implementation in South Asia and LMICs requires careful consideration of several factors. Here is a roadmap for scientists in these regions to envision a future healthcare system characterized by justice, equity, quality care, and gender sensitivity.
Collaboration and stakeholder engagement
Effective digital implementation in LMICs requires strong public-private partnerships, in which NGOs, private companies, academic institutes, and governments can collaborate to leverage each other’s strengths. It is the role of the government to lead by setting effective policies and providing technical support, while NGOs must focus on supporting innovative technologies like AI as they find their way into society. Multidisciplinary teams comprising engineers, technicians, consultants, policymakers, ethicists, medical professionals, and community leaders play a significant role in developing AI solutions that address the specific challenges within their respective communities. These challenges can include cultural factors, gender disparities in healthcare provision, and resource limitations. Gender disparity is very common across communities, and it should be a core focus throughout the process, from collecting gender-disaggregated data to making sure that AI systems do not perpetuate existing gender biases. Furthermore, it is very important to strengthen local networks among community leaders, researchers, policymakers, and healthcare providers to promote knowledge sharing, since these collaborations and stakeholder engagements can play a significant role in the long-term effectiveness and sustainability of AI interventions.
Responsible AI development and implementation
Responsible AI development in healthcare rests on several key considerations. The first and most important is transparency: healthcare professionals must understand the significance of transparency and explainability, as explainable AI techniques can help build trust and allow clinicians to make informed decisions alongside AI insights. The second is privacy and efficient security measures, since protecting patients’ data is essential. Effective regulations around data ownership and usage are essential for responsible AI development in the healthcare sector, and alongside maintaining security protocols, regular ethical impact assessments are needed to identify and address potential concerns. Ethical principles such as fairness, non-maleficence, and patient autonomy should guide AI development and deployment. Lastly, AI systems should be rigorously tested in real-world settings, which will help increase their effectiveness across diverse populations and healthcare environments.
Beyond bias and safety, comprehensive ethical guidelines for AI in healthcare must address privacy, data governance, and informed consent. First, robust data protection policies and encryption standards are necessary to secure patient records against breaches. Second, clear protocols for informed consent, explaining how AI will be used in diagnosis or care, can empower patients and reinforce trust. Third, role-based access control ensures that only authorized personnel handle sensitive information, reducing the risk of misuse (see the sketch below). Finally, continuous ethical reviews, possibly through oversight committees or institutional review boards, can help sustain ethical practices over the AI lifecycle. These measures collectively reinforce public confidence, ensuring AI solutions truly serve patient-centered goals.
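As a minimal illustration of the role-based access control mentioned above, the sketch below maps roles to permissions; the specific roles and permissions are hypothetical, not drawn from any standard or regulation.

```python
from enum import Enum, auto

class Permission(Enum):
    READ_RECORD = auto()
    WRITE_RECORD = auto()
    EXPORT_DATASET = auto()

# Hypothetical role-to-permission mapping for a clinic's AI system.
ROLE_PERMISSIONS = {
    "clinician": {Permission.READ_RECORD, Permission.WRITE_RECORD},
    "data_analyst": {Permission.EXPORT_DATASET},
    "front_desk": set(),  # no access to clinical content
}

def check_access(role, permission):
    # Deny by default: unknown roles receive no permissions.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert check_access("clinician", Permission.READ_RECORD)
assert not check_access("front_desk", Permission.READ_RECORD)
```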
Building capacity and user adoption
Building capacity and promoting user adoption of AI in healthcare requires a multifaceted approach. First, training and upskilling healthcare professionals in AI literacy, and in understanding how AI can complement their work, is crucial. Culturally relevant training programs should address potential challenges and concerns, for example, the fear of AI replacing jobs. Through continuing professional development, healthcare professionals should undergo regular training to keep up with advancing AI algorithms. Additionally, AI tools should be designed with user-friendly interfaces, considering the existing digital literacy levels of healthcare professionals; intuitive interfaces can facilitate the seamless integration of AI into workflows. Community engagement is also crucial: engaging with local and sister communities throughout the process is essential for building trust and ensuring that AI solutions address their specific needs, and community input can be invaluable in tailoring AI applications to local contexts and cultural sensitivities. Finally, establishing robust feedback mechanisms allows for continuous improvement, gathering insights from end-users and communities to iteratively refine AI systems.
Governance, policy and regulatory frameworks in AI-driven healthcare
While the expectation is for the government to take the lead, the private sector must spearhead discussions involving innovative healthcare technology developments. The openness of governments, especially in LMICs, to incorporating digitalization and Information Technology (IT) in the health and development sectors has been slow to emerge. Accountability within the healthcare system, where all healthcare professionals are expected to adhere to Ministry of Health regulations and circulars, remains a significant obstacle to the adoption of new interventions. In countries like Sri Lanka, where healthcare is predominantly public and government-managed yet has the technical independence that encourages innovative ideas, the challenges often lie in the effective implementation of these innovations, owing to the scale of infrastructure required, financial constraints, and the need for skilled human resources37,38.
Given the potential for AI to enhance the efficiency of healthcare delivery, hospitals are best positioned to lead the digital transformation, primarily because AI tools excel in diagnostics and decision-making. However, reliance on central governments in most LMICs for infrastructure support, coupled with a lack of proactive efforts, can hinder the implementation of AI solutions39,40. AI scientists and companies at the forefront of AI development in LMICs should actively explore avenues for creating commercialized products. By doing so, they can play a pivotal role in reducing the reliance on government involvement in healthcare. This approach not only fosters innovation but also empowers local entities to take ownership of healthcare solutions tailored to their specific needs.
The development and implementation of AI tools in health will need careful evaluation, at both the local and international levels. This is a critical area of emphasis, especially in the context of LMICs, since many countries do not have regulatory bodies, lack expertise within the public sector (which, in most cases, is responsible for regulation), or do not have clear guidelines for review. The WHO has published guidelines for countries on the use of AI in the healthcare sector. These guidelines emphasize transparency in development, sharing approaches to data collection and use, making sure data are collected without intended bias, and ensuring that the complete lifecycle of AI tool development is shared with regulators. Additionally, frameworks from regional bodies, such as the African Union Data Policy Framework or the European Union AI Act, can also offer insights and guidelines for local adaptation.
To operationalize these frameworks, LMICs could establish national AI governance bodies tasked with contextualizing global standards to their unique healthcare needs. For example, a tiered regulatory system, modeled after the EU’s risk-based approach, could classify healthcare AI applications based on their potential impact on patient outcomes, ensuring proportionate oversight.
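As a loose sketch of how such a tiered system might be encoded, the snippet below maps hypothetical application types to risk tiers and proportionate oversight requirements; the tiers and examples are assumptions inspired by, not taken from, the EU’s risk-based approach.

```python
# Hypothetical risk tiers and the oversight each would trigger.
RISK_TIERS = {
    "high": "pre-market review, clinical validation, ongoing audits",
    "medium": "registration, transparency reporting, periodic review",
    "low": "self-declaration and post-market monitoring",
}

# Illustrative classification of healthcare AI applications by the
# potential impact of their errors on patient outcomes.
EXAMPLE_CLASSIFICATION = {
    "autonomous diagnosis without clinician review": "high",
    "clinician-facing decision support": "medium",
    "appointment-scheduling chatbot": "low",
}

for application, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{application} -> {tier}: {RISK_TIERS[tier]}")
```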
Enablers, Barriers, and Strategic Recommendations for AI Integration in LMICs
The integration of AI into healthcare systems in LMICs offers immense potential but faces significant challenges. Insights from the AI-Sarosh Co-Design Workshop, supported by literature, underscore key enablers and barriers while providing actionable recommendations to address them. A systematic approach is essential to overcome these challenges and harness the transformative power of AI.
Enablers for AI integration
Several factors have been identified as critical enablers for successful AI integration:
- Public-Private Partnerships: Strong collaborations between governments, private sector stakeholders, and non-governmental organizations play a pivotal role in addressing infrastructure gaps. Such partnerships can mobilize resources for broadband expansion, cloud-based data storage, and pilot AI applications. For example, Kenya’s telemedicine initiatives demonstrate how public-private collaboration can enhance digital health solutions.
- Localized Data and Context-Specific AI Tools: The development of localized datasets tailored to LMIC demographics is crucial. AI tools built on such data are more effective and equitable, addressing the specific disease burden and healthcare challenges of local populations. The AI-Sarosh workshop emphasized the importance of data stewardship and context-specific tools to enhance AI adoption in LMICs.
- Capacity Building in Digital Skills: Training healthcare professionals to effectively use AI tools is essential for adoption and scale-up. Capacity-building initiatives, including AI literacy programs, help reduce resistance and ensure that healthcare workers are equipped to use technology effectively. Countries such as India have initiated scalable training programs, highlighting the benefits of workforce development in digital health.
Barriers to AI integration
Despite these enablers, LMICs face persistent barriers that hinder AI adoption:
- Fragmented Health Data Systems: Fragmented and incomplete health data systems make it difficult to create cohesive datasets for AI training and implementation. The lack of interoperability across health systems remains a critical issue, as highlighted by the AI-Sarosh workshop.
- Regulatory and Governance Gaps: Many LMICs lack clear regulatory frameworks for the ethical and safe deployment of AI tools. This results in concerns around data privacy, transparency, and accountability. While global guidelines such as the WHO’s framework exist, local adaptation remains a challenge.
- Resource Constraints: Limited financial and human resources hinder investments in digital infrastructure, algorithm development, and training programs. Additionally, shortages of skilled professionals in both technology and healthcare further impede progress.
Strategic recommendations
To address these barriers and capitalize on the enablers, a phased approach is recommended:
- Short-Term Pilots and Capacity Building: To effectively integrate AI into healthcare systems, small-scale pilot projects should be launched in well-equipped health centers to test AI tools, refine them for local contexts, and demonstrate their value in addressing specific challenges. Simultaneously, hands-on training programs for healthcare workers must be implemented to ensure they are equipped with the necessary skills and confidence to use these AI-driven solutions, fostering smooth implementation and effective utilization.
- Medium-Term Infrastructure and Policy Development: Public-private partnerships should be established to enhance digital infrastructure and strengthen data governance frameworks. This will provide a solid foundation for the successful integration of AI into healthcare systems. Simultaneously, country-specific AI regulations must be developed, aligned with global best practices, and designed to emphasize ethical use, transparency, and accountability in healthcare applications.
- Long-Term Sustainability and Scaling: Validated AI tools should be scaled to rural or hard-to-reach areas through the use of mobile technology and telemedicine platforms, enabling broader reach and significant benefits. Additionally, continuous monitoring and evaluation of AI implementations are essential to assess their effectiveness in improving cost-efficiency, promoting health equity, and expanding coverage.
- Addressing Research Gaps: The role of cultural factors in AI adoption and their influence on health equity should be thoroughly examined to address potential barriers and promote inclusivity. Furthermore, strategies to reduce operational burdens should be prioritized. These approaches must ensure that AI complements healthcare professionals rather than displacing their expertise.
The AI-Sarosh co-design workshop brought together experts and provided a platform to explore opportunities and challenges in leveraging AI for healthcare transformation. Key discussions focused on government leadership, responsible AI practices, and mitigating bias in AI development. Collaboration, multidisciplinary teams, and ethical considerations were highlighted as important factors for successful AI integration. The workshop emphasized aligning projects with government policies, rigorous testing of AI models, and early stakeholder involvement. These takeaways provide valuable guidance for developing responsible and equitable AI solutions to improve healthcare delivery for all. A critical consideration lies in how AI can coexist with and complement existing digital health systems. Ideally, AI solutions should empower healthcare workers, particularly at the grassroots level. This can be achieved by leveraging Internet of Things (IoT) devices and AI to scale up healthcare solutions and deliver greater benefits to communities.