What You Need To Know About WHO’s AI Guidelines

Feb 15, 2024 | Artificial Intelligence

The World Health Organization (WHO) has recognized the rapid growth and potential of Large Multi-Modal Models (LMMs) in healthcare, outlining their capacity to transform medical diagnostics, treatment, and public health strategies. LMMs, which process and synthesize diverse data inputs to generate actionable insights, are poised to reshape health services by offering sophisticated diagnostic tools, enhancing patient care, and streamlining administrative processes. At the same time, the adoption of LMMs brings ethical, governance, and operational challenges. The WHO's guidance on the ethics and governance of artificial intelligence (AI) for health sets out a framework for addressing these challenges, emphasizing strong ethical principles, transparency, and inclusiveness in the development and deployment of AI technologies in healthcare settings.

The potential benefits of LMMs in healthcare are substantial, ranging from improved diagnostic accuracy to personalized patient care and more efficient healthcare delivery systems. Their rapid deployment, however, has raised concerns about reliability, the quality of underlying data, and potential biases. These issues call for a governance approach that ensures AI technologies are developed and used in ways that protect autonomy, promote human well-being, and uphold equity. The WHO's guidance advocates integrating six ethical principles into the policies and practices surrounding AI in healthcare, aiming to steer stakeholders, including governments, developers, and healthcare providers, toward responsible AI use.

To navigate the complexities of LMM deployment in healthcare, WHO proposes governance strategies spanning company practices, governmental policies, and international collaboration. These strategies are designed to align with the guiding ethical principles and to address the unique challenges of using generative AI in health contexts. The recommendations call for governance mechanisms that promote transparency, accountability, and the public interest, enabling the ethical development, provision, and use of LMMs in healthcare. The six guiding principles are:

  • Protecting human autonomy
  • Promoting human well-being, human safety, and the public interest
  • Ensuring transparency, explainability, and intelligibility
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity
  • Promoting AI that is responsive and sustainable

The societal and regulatory implications of integrating LMMs into healthcare include data privacy, algorithmic bias, environmental impact, and the potential erosion of human expertise. WHO's guidance underscores the importance of sustainable and responsive AI development, advocating policies that mitigate risks while maximizing benefits. It also stresses the need for international governance to ensure that LMMs and other AI technologies are deployed in ways that respect human rights, ethical standards, and global health priorities, calling for collective action and cooperation among global stakeholders to establish standards and frameworks for ethical AI use.
