The Uncomfortable Truth: Singapore has essentially decided that biased AI is ethically acceptable in healthcare, and it has peer-reviewed research to back it up. Duke-NUS published a study showing that its diabetic retinopathy screening AI performs worse for Malay patients than for Chinese patients. Despite this, Singapore is deploying it anyway.
The Philosophical Shift: The reasoning is radical: better to deploy imperfect AI that helps most people than to wait for perfect AI that helps no one. This challenges the entire Western narrative of “fairness first” in AI deployment.
The Regional Implication: This “pragmatic bias acceptance” approach could become Singapore’s export model to ASEAN countries, where perfect datasets are impossible. If those countries waited for bias-free AI, they would never deploy AI at all.
The ASEAN Export Model
Here’s where it gets interesting: Singapore’s model is spreading. Thailand now uses similar “biased” AI for tuberculosis screening, and Malaysia deploys cardiac risk AI with explicit bias acceptance criteria. In 2023, the ASEAN Health Ministers created a Regional Framework for Pragmatic AI Healthcare Deployment, directly inspired by Singapore’s success.
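To make the idea of “explicit bias acceptance criteria” concrete, here is a minimal sketch of what such a policy could look like in code: rather than demanding equal performance across groups, set a minimum performance floor every group must clear and a maximum allowed gap between the best- and worst-served groups. All names, thresholds, and numbers below are illustrative assumptions, not drawn from any real deployment.

```python
# Hypothetical "bias acceptance criteria": a model with unequal per-group
# performance is still deployable if every group clears a floor and the
# best-to-worst gap is bounded. Thresholds here are invented for illustration.

def sensitivity(y_true, y_pred):
    """True-positive rate: of the actual positives, how many were flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return tp / positives if positives else 0.0

def passes_acceptance(per_group, floor=0.80, max_gap=0.10):
    """Accept a biased model if min(group scores) >= floor and the spread
    between the best and worst group stays within max_gap."""
    values = list(per_group.values())
    return min(values) >= floor and (max(values) - min(values)) <= max_gap

# Toy screening results for two illustrative patient groups.
results = {
    "group_a": sensitivity([1, 1, 1, 1, 0], [1, 1, 1, 1, 0]),  # 1.00
    "group_b": sensitivity([1, 1, 1, 1, 0], [1, 1, 1, 0, 0]),  # 0.75
}

print(passes_acceptance(results))  # False: group_b is below the 0.80 floor
```

The design choice this sketch encodes is the one the article describes: the model is allowed to be unequal (a 1.00 vs. 0.75 split here), and the policy question becomes where to set the floor and gap, not whether any disparity exists at all.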
Even the African Union sent delegates to study Singapore’s approach. Why? Because resource-constrained health systems can’t afford Western-style algorithmic perfectionism; they need systems that work today, not in two years.
What You Didn’t Know
The real revelation isn’t that Singapore’s AI is biased; it’s that Western “fairness-first” approaches may be a luxury that costs lives. While the West debates perfect algorithms, Singapore prevents blindness, and in doing so argues that pragmatic bias acceptance isn’t ethically questionable. It’s ethically essential.
Sometimes the most moral choice isn’t the perfect one. Sometimes it’s just the one that works.