
WHO Releases First Global Standards for Ethical Use of AI in Healthcare

11 December 2025, 1:14 pm

The World Health Organization has introduced its first worldwide guidelines on the ethical use of artificial intelligence in healthcare, setting out a comprehensive roadmap for governments, researchers, and technology companies. Released on May 14, the framework is designed to ensure that the rapid adoption of AI in hospitals and clinics supports patient safety, protects personal data, and promotes equal access to care.

The guidelines arrive at a time when AI systems are increasingly used in diagnosing disease, managing patient information, and supporting clinical decisions. Reuters reports that WHO officials developed the recommendations in response to concerns that poorly regulated AI tools could deepen inequalities or compromise medical privacy.

The new framework includes more than forty recommendations focused on preserving human oversight, enhancing transparency, and safeguarding sensitive data. A central principle is that clinicians—not machines—must maintain ultimate control over medical decisions. The WHO stresses that AI should assist healthcare workers, not replace them.

Transparency is another key pillar. Developers are urged to clearly document how their systems work, what data they were trained on, and where limitations may exist. Patients should be informed whenever an AI tool contributes to their medical evaluation or treatment. According to the document, openly sharing this information builds trust and helps users understand the strengths and risks of AI-supported care.
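The guidelines do not prescribe a format for this disclosure, but one widely used approach in the industry is the "model card": a short, structured summary of a system's purpose, training data, and known limits. The Python sketch below illustrates the idea; the field names and the example system are hypothetical, not drawn from the WHO document.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal transparency record for a clinical AI tool.
    The class and all fields are illustrative, not a WHO schema."""
    name: str
    intended_use: str
    training_data: str                 # provenance of the training set
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example of the kind of disclosure the guidelines describe.
card = ModelCard(
    name="chest-xray-triage-v2",
    intended_use="Flag studies for radiologist review; not a standalone diagnosis.",
    training_data="De-identified images from three partner hospitals, 2018-2023.",
    known_limitations=[
        "Under-represents pediatric patients.",
        "Not validated on portable X-ray units.",
    ],
)
print(card)
```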

The guidelines also highlight the importance of high-quality and representative training data. WHO warns that AI systems trained on narrow or biased data can produce discriminatory outcomes, particularly for marginalized populations. To prevent this, the framework calls for strict data-protection standards, informed consent protocols, and regular evaluation of datasets for fairness and accuracy.
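The framework names no specific tooling, but a routine dataset evaluation of the kind it describes can be as simple as comparing subgroup shares in the training data against population benchmarks. In the sketch below, the data, the reference shares, and the flagging threshold are all illustrative assumptions.

```python
import pandas as pd

# Hypothetical training records; the groups and reference shares are
# assumptions for illustration, not taken from the WHO document.
age_groups = pd.Series(
    ["18-39"] * 4 + ["40-64"] * 5 + ["65+"] * 1, name="age_group"
)
reference = {"18-39": 0.35, "40-64": 0.40, "65+": 0.25}  # assumed population shares

# Flag any subgroup whose share of the data falls well below its
# share of the population served.
observed = age_groups.value_counts(normalize=True)
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    if actual < 0.5 * expected:  # arbitrary illustrative threshold
        print(f"{group}: {actual:.0%} of training data vs {expected:.0%} in population")
```

Run on this toy data, the check flags the 65+ group, which makes up 10% of the records but 25% of the assumed population.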

In addition to ethical principles, the WHO identifies several critical risks associated with the current wave of AI development. Algorithmic bias remains one of the most pressing issues, with the potential to amplify existing disparities in access and treatment. The framework calls for continuous auditing throughout an AI system’s lifespan to detect and correct such biases.
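The WHO text stops at the principle, but one common concrete form such an audit takes is periodically recomputing a model's error rates per patient subgroup on fresh data and watching for gaps. The sketch below, using entirely hypothetical predictions and group labels, computes per-group false negative rates as one simple disparity signal.

```python
import numpy as np

def false_negative_rate_by_group(y_true, y_pred, groups):
    """Per-group false negative rate: missed positive cases / actual positives.
    A persistent gap between groups is one simple signal of disparate performance."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() == 0:
            continue  # no positive cases for this group in the batch
        rates[g] = float(np.mean(y_pred[positives] == 0))
    return rates

# Hypothetical audit batch from a deployed model.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])
print(false_negative_rate_by_group(y_true, y_pred, groups))
# {'A': 0.5, 'B': 0.333...} - group A's positive cases are missed more often
```

Rerunning a check like this on each new batch of real-world data, rather than only at approval time, is what "auditing throughout an AI system's lifespan" amounts to in practice.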

The organization also raises concerns about misinformation generated by large language models, which may produce inaccurate medical guidance if used without sufficient oversight. The guidelines caution against relying on unverified AI tools for diagnostic decisions or treatment recommendations.

WHO officials say the framework is ready for immediate use. Governments can adopt the recommendations when drafting national AI regulations, while technology developers can embed the principles into their design and testing processes. The overarching goal is to create a consistent global foundation for safe and responsible AI innovation.

The release of these guidelines marks an important milestone in shaping the future of AI in medicine. By balancing technological progress with strong ethical safeguards, WHO aims to support a healthcare environment where artificial intelligence is both trusted and beneficial to patients around the world.