By Joke Kujenya
THE FUTURE of health demands responsible artificial intelligence, according to the World Health Organization (WHO).
Sweeping across hospital corridors and rural clinics alike, the march of artificial intelligence is already reshaping the way patients are diagnosed, medicines are discovered, and epidemics are tracked, the global health body says.
From voice-powered health assistants to smart diagnostic tools, digital health technology has become an increasingly central part of medical response in both high-income and resource-limited settings, it says.
Yet as the power of these systems grows, the WHO cautions, so too does the risk of harm.

Inaccurate diagnoses, unchecked misinformation, biased algorithms, and threats to privacy now accompany the promise of precision and efficiency. For the WHO, the moment to act is not in the future; it is now.
Artificial intelligence, WHO says, is already playing a key role in clinical care, drug development, health system management, and public health surveillance.
In many regions, these tools are helping to close critical gaps, reaching patients in remote areas, automating basic screenings, and aiding decision-making where human resources are stretched thin.
But the speed of deployment has outpaced regulation in many parts of the world.
While the WHO did not attribute the caution to any specific official or expert, it has raised concerns that, without clear national oversight and global ethical standards, these technologies could entrench health disparities rather than resolve them.
In low-resource settings in particular, AI systems may amplify existing inequities if the data used to train them reflects biases against underrepresented populations, or, worse, if they are deployed without any safeguards at all, the WHO further cautions.
The organisation has therefore issued formal guidance urging countries to adopt clear governance mechanisms around AI in health.
These frameworks, it says, must ensure that all AI tools are safe, transparent, ethical, and accountable. Privacy and data rights must be respected.
Clinical tools must be independently validated. Developers must be held to strict standards, particularly where vulnerable populations are concerned, the WHO adds.

Technology, WHO argues, should not be a substitute for robust public health systems.
It adds that in environments where infrastructure is lacking, dependence on AI could lead to misinformed decisions or the sidelining of skilled human professionals.
Even well-designed systems can falter if they are not adapted to the local context, or if healthcare workers are not properly trained to interpret and use them.
What’s needed is not just innovation, but responsible innovation: the kind that puts patient safety and equity at its centre.
To support countries, the WHO says it is working on comprehensive technical guidance and regulatory tools to help health ministries manage the safe use of AI.
This includes evaluating how digital health technologies impact health delivery, assessing risks and benefits, and helping governments adopt legislation to protect public interests.
The agency also warns against over-reliance on private sector platforms that may prioritise commercial value over clinical validity.
According to the WHO, its central message is that “AI must serve everyone, not just the digitally connected.” If properly governed, it says, artificial intelligence can help accelerate progress toward the Sustainable Development Goals (SDGs), particularly universal health coverage. But that future cannot be left to chance.
Whether used in hospitals in London, health centres in Lagos, or mobile clinics in rural Nepal, AI must be held to the same high ethical standards as any other medical intervention. The tools may be digital, but the consequences are profoundly human, the WHO affirms.

