Here’s what nobody told you about Ngee Ann Polytechnic’s AI-powered admissions: when they deployed EVA (Early Admissions Exercise Virtual Assistant) to evaluate aptitude-based applications, they didn’t just add another screening tool. They fundamentally redefined what responsible AI governance in education looks like. Let me reveal why this matters more than you think—and why every institution deploying agentic AI systems should pay close attention.

The Challenge That Changed Everything

In 2018, Ngee Ann Polytechnic faced a critical bottleneck. Their Early Admissions Exercise (EAE) program—designed to identify students with strong aptitudes beyond academic grades—was drowning in applications. Manual review processes couldn’t scale, yet the stakes were enormous: these decisions shaped young lives and the institution’s reputation.

The traditional approach? Have human reviewers assess every application based on criteria like passion, motivation, and relevant experience. The problem? This method was time-intensive, prone to inconsistency, and couldn’t handle growing application volumes.

Here’s the crucial insight: rather than simply automating the old process, Ngee Ann Polytechnic saw an opportunity to build something fundamentally different—an AI system governed by principles of transparency, accountability, and human oversight from day one.

The Governance Architecture: Why Structure Matters More Than Algorithms

Most organizations deploying AI focus obsessively on model performance. Ngee Ann Polytechnic did something smarter: they built a governance structure that made trustworthy AI inevitable.

Management Oversight: The Command Layer

At the top sits a management committee that doesn’t just rubber-stamp decisions. This body:

  • Approves all AI deployment policies
  • Reviews model performance metrics regularly
  • Makes final calls on governance framework updates
  • Ensures alignment with institutional values and legal requirements

This isn’t ceremonial oversight—it’s active governance that keeps AI systems accountable to human leadership.

The Academic Affairs Office: Data Custodians

Here’s where it gets interesting. The Academic Affairs Office serves as the data custodian, creating a critical checkpoint:

  • They control what data feeds into the AI system
  • They validate data quality and relevance
  • They ensure compliance with privacy regulations
  • They maintain audit trails of data usage

This separation of duties—those who manage the AI vendor relationship don’t control the data—creates natural checks and balances.

AI Vendor Collaboration: Partnership, Not Outsourcing

Ngee Ann Polytechnic’s relationship with their AI vendor (Aicadium) reveals a mature approach:

  • Clearly defined roles and responsibilities
  • Regular performance reviews
  • Transparent documentation of model decisions
  • Joint accountability for outcomes

They didn’t just buy software—they built a collaborative governance relationship.

Human-Over-the-Loop: The Real Innovation

Here’s what makes EVA different from typical AI deployments: it’s designed with “human-over-the-loop” architecture, not just “human-in-the-loop.”

What’s the difference? It’s massive. In a human-in-the-loop design, a person must sign off on every individual decision before it takes effect—which reintroduces the very bottleneck automation was meant to remove. In a human-over-the-loop design, the AI handles the routine flow while humans supervise the system, monitor its outputs, and retain the authority to intervene and override, with their attention concentrated on the decisions where the stakes are highest.

How EVA Actually Works

EVA processes applications through a sophisticated NLP pipeline:

  1. Text extraction from application documents
  2. BERT-based classification analyzing passion, motivation, and experience
  3. Scoring and preliminary recommendations
  4. Mandatory human review for all non-selected applications

That last point is critical. Unlike systems that only flag edge cases for human review, EVA ensures human educators review every application the AI suggests rejecting. This asymmetric review process recognizes that false negatives (rejecting qualified students) carry higher ethical stakes than false positives.
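
To make that asymmetric review policy concrete, here is a minimal sketch of what such a routing step might look like. Everything in it—the names, the 0.7 threshold, the data fields—is a hypothetical illustration, not a detail from the published case study:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Application:
    app_id: str
    essay_text: str
    ai_score: float = 0.0      # model-estimated fit, 0.0 to 1.0
    decision: str = "pending"  # "shortlist" or "human_review"

SHORTLIST_THRESHOLD = 0.7      # hypothetical cut-off

def route_application(app: Application, review_queue: List[Application]) -> None:
    """Route one scored application through the asymmetric review policy.

    High-scoring applications are shortlisted; everything the model would
    reject is ALWAYS queued for mandatory human review, because a false
    negative (missing a qualified student) is the costlier error.
    """
    if app.ai_score >= SHORTLIST_THRESHOLD:
        app.decision = "shortlist"
    else:
        app.decision = "human_review"
        review_queue.append(app)  # no automatic rejection, ever

queue: List[Application] = []
route_application(Application("EAE-001", "…essay text…", ai_score=0.42), queue)
# -> decision becomes "human_review" and the application lands in the queue
```

The point of the sketch is the shape of the policy: there is no code path in which the AI rejects anyone on its own.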

Transparency by Design

Applicants aren’t kept in the dark about EVA’s role:

  • The AI’s participation in the review process is disclosed
  • Explanations of evaluation criteria are provided
  • Applicants understand both human and AI elements are involved

This transparency isn’t just good ethics—it builds institutional trust.

Feedback Loops That Matter

The system includes structured feedback mechanisms:

  • Human reviewers can override AI recommendations
  • Override patterns are analyzed to improve the model
  • Performance metrics are continuously monitored
  • Edge cases trigger governance reviews

This creates a learning system—not just a learning algorithm—where governance itself evolves.
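
As a sketch of what analyzing override patterns might involve—the field names and the 20% trigger rate below are assumptions for illustration, not figures from the case study:

```python
from collections import Counter

def override_report(reviews, trigger_rate=0.20):
    """Summarize how often human reviewers overrode the AI recommendation.

    `reviews` is an iterable of (ai_decision, human_decision) pairs.
    A sustained override rate above `trigger_rate` is flagged so the
    governance committee can investigate, echoing the edge-case
    review trigger described above.
    """
    total = overrides = 0
    patterns = Counter()
    for ai_decision, human_decision in reviews:
        total += 1
        if ai_decision != human_decision:
            overrides += 1
            patterns[(ai_decision, human_decision)] += 1
    rate = overrides / total if total else 0.0
    return {
        "override_rate": rate,
        "most_common_flips": patterns.most_common(3),
        "governance_review_needed": rate > trigger_rate,
    }
```

A report like this turns individual overrides into a signal the management committee can act on.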

The Technical Foundation: Model Development Done Right

Data Discipline

Ngee Ann Polytechnic followed rigorous data practices:

  • Training data: Historical applications with known outcomes
  • Validation data: Separate set used during development for tuning and model selection
  • Testing data: Held-out set reserved for the final performance assessment

This separation keeps performance estimates honest: the model is judged on applications it never saw during training or tuning, so the results reflect how it will generalize to new applicants rather than how well it memorized old ones.
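
A conventional way to produce that three-way split, sketched here with scikit-learn—the 70/15/15 proportions and the placeholder corpus are illustrative choices, not figures from the case study:

```python
from sklearn.model_selection import train_test_split

# Placeholder corpus: in reality, historical EAE applications and outcomes.
applications = [f"essay {i}" for i in range(100)]
outcomes = [i % 2 for i in range(100)]  # 1 = admitted, 0 = not admitted

# 70% train, then split the remaining 30% evenly into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(
    applications, outcomes, test_size=0.30, random_state=42, stratify=outcomes)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42, stratify=y_rest)
```

Stratifying on the outcome labels keeps the admit/reject ratio consistent across all three sets.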

Bias Minimization

The team actively worked to minimize bias:

  • Analyzed demographic patterns in training data
  • Tested for disparate impact across student groups
  • Implemented fairness constraints in model training
  • Monitored ongoing performance for bias indicators

This isn’t about achieving perfect fairness (an impossible standard)—it’s about systematic bias reduction.
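
One common check from this family of tests is the disparate impact ratio: each group’s selection rate divided by the rate of the most-selected group. The 0.8 “four-fifths” threshold below is a widely used rule of thumb, not a figure from the case study:

```python
def disparate_impact(decisions):
    """Compute per-group selection rates and disparate impact ratios.

    `decisions` is an iterable of (group_label, selected) pairs, where
    `selected` is True if the applicant was shortlisted.
    """
    counts, selected = {}, {}
    for group, was_selected in decisions:
        counts[group] = counts.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    rates = {g: selected[g] / counts[g] for g in counts}
    top = max(rates.values())
    ratios = {g: (r / top if top else 0.0) for g, r in rates.items()}
    # Flag any group whose ratio falls below the four-fifths rule of thumb.
    flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
    return rates, ratios, flagged

rates, ratios, flagged = disparate_impact(
    [("A", True), ("A", True), ("B", True), ("B", False), ("B", False)])
# flagged -> ["B"]: group B's selection rate is well under 80% of group A's
```

A flag here doesn’t prove unfairness—it tells the team where to look.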

BERT and NLP: The Right Tool for the Job

Why BERT? Because admissions applications are nuanced text that requires contextual understanding:

  • BERT captures semantic meaning, not just keywords
  • It understands context and relationships between concepts
  • It handles varied writing styles and expression forms
  • It can be fine-tuned on domain-specific admissions data

The choice of technology followed the problem, not the hype.
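
To give a feel for what “fine-tuned on domain-specific admissions data” means in practice, here is a sketch using the Hugging Face transformers library. The checkpoint name and two-label setup are generic choices, not the configuration Ngee Ann Polytechnic or Aicadium actually used:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Generic BERT checkpoint; the production model and labels are not public.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # e.g. strong-fit vs. weak-fit

essay = "I have spent two years building robots with my school's maker club..."
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
fit_probability = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"Estimated fit score: {fit_probability:.2f}")
```

Fine-tuning replaces the generic classification head with one trained on labeled historical applications, which is where the domain understanding comes from.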

Performance Stats That Tell the Real Story

While specific accuracy metrics aren’t publicly disclosed in detail, Ngee Ann Polytechnic has shared that:

  • EVA’s recommendations show high concordance with human expert judgments
  • The system successfully handles thousands of applications efficiently
  • False negative rates (missing qualified students) are closely monitored
  • Continuous improvement cycles show performance gains over time

More importantly, they measure success not just by accuracy but by fairness, transparency, and trust indicators.
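
A sketch of how two of those indicators—concordance with human experts and the false negative rate—might be computed. The pairing of AI and human decisions is assumed here for illustration:

```python
def review_metrics(pairs):
    """Compare AI recommendations against final human decisions.

    `pairs` is an iterable of (ai_selected, human_selected) booleans.
    Concordance measures agreement; the false negative rate measures how
    often the AI would have screened out a student humans later selected.
    """
    total = agree = human_yes = ai_missed = 0
    for ai_selected, human_selected in pairs:
        total += 1
        agree += ai_selected == human_selected
        if human_selected:
            human_yes += 1
            ai_missed += not ai_selected
    return {
        "concordance": agree / total if total else 0.0,
        "false_negative_rate": ai_missed / human_yes if human_yes else 0.0,
    }
```

Because every AI-suggested rejection is human-reviewed, the institution actually has the paired data this kind of monitoring requires.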

Continuous Improvement and Documentation

The system includes:

  • Regular model retraining with new data
  • Version control for model updates
  • Impact assessments before deploying changes
  • Comprehensive documentation of policies and procedures

Every decision is documented. Every change is reviewed. Every outcome is tracked.
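
As one way to make “every change is reviewed” concrete, here is a sketch of a model-release record. All fields and values are hypothetical illustrations of the kind of metadata such documentation might capture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRelease:
    """Immutable record documenting one approved model update."""
    version: str                 # e.g. "eva-2.3.0"
    training_data_snapshot: str  # identifier of the retraining dataset
    validation_concordance: float
    impact_assessment_ref: str   # link to the pre-deployment assessment
    approved_by: str             # committee sign-off
    approved_on: str             # ISO date of approval

release = ModelRelease(
    version="eva-2.3.0",
    training_data_snapshot="eae-apps-2024-intake",
    validation_concordance=0.91,
    impact_assessment_ref="IA-2024-07",
    approved_by="AI Governance Committee",
    approved_on="2024-08-15",
)
```

The value of a record like this is that any deployed model version can be traced back to its data, its evaluation, and the humans who approved it.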

Lessons for Trustworthy AI in Education

What can other institutions learn from Ngee Ann Polytechnic’s approach?

1. Governance Before Algorithms

Don’t start with the AI model. Start with the governance structure. Who decides? Who reviews? Who is accountable? Answer these questions first.

2. Transparency as Default

Make AI involvement visible to stakeholders. Hiding AI usage creates distrust; disclosing it builds confidence—if you can explain your approach.

3. Asymmetric Review Strategies

Not all AI decisions carry equal risk. Design review processes that reflect the asymmetric costs of different errors.

4. Data Custodianship

Separate data control from AI deployment. This creates natural accountability and reduces single points of failure.

5. Vendor Relationships as Partnerships

Treat AI vendors as partners in a governance framework, not just software suppliers. Define roles, responsibilities, and joint accountability.

6. Human Expertise Remains Central

AI augments human judgment; it doesn’t replace it. Systems that recognize this design principle build better outcomes.

7. Continuous Learning Systems

Build feedback loops that improve not just the AI model but the entire governance framework.

8. Documentation as Culture

Make comprehensive documentation a cultural norm, not a compliance burden. It enables accountability and continuous improvement.

The Path Forward: Scalable, Explainable, Trustworthy

Ngee Ann Polytechnic’s EVA system demonstrates that responsible AI in high-stakes educational decisions isn’t just possible—it’s achievable with deliberate design choices.

The key insights:

  • Scalability comes from smart automation combined with strategic human oversight
  • Explainability comes from transparent processes and clear communication
  • Trustworthiness comes from robust governance and genuine accountability

As educational institutions worldwide face growing application volumes and pressure to make fair, efficient decisions, the EVA model offers a blueprint. Not a perfect system—no system is—but a thoughtfully designed approach that respects both the power of AI and the irreplaceable value of human judgment.

The future of agentic AI in education won’t be determined by which institution has the most sophisticated algorithms. It will be determined by which institutions build the most trustworthy governance frameworks around those algorithms.

Ngee Ann Polytechnic just showed us how.

Sources

Primary Sources:

  1. Personal Data Protection Commission Singapore (PDPC). “Ngee Ann Polytechnic – Early Admissions Exercise Virtual Assistant (EVA).” PDPC AI Use Cases database. Accessed September 2025.
  2. Aicadium. “Case Study: Ngee Ann Polytechnic AI-Powered Admissions Assessment.” Company Resources. 2023.
  3. Ngee Ann Polytechnic. “Early Admissions Exercise (EAE) Information for Applicants.” Official Website. 2024.

Related Research and Framework Documents:

  1. Personal Data Protection Commission Singapore. “Model AI Governance Framework.” 2nd Edition. 2020.
  2. UNESCO. “Recommendation on the Ethics of Artificial Intelligence.” 2021.
  3. Devlin, Jacob, et al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” NAACL-HLT. 2019.
  4. Barocas, Solon, Moritz Hardt, and Arvind Narayanan. “Fairness and Machine Learning: Limitations and Opportunities.” fairmlbook.org. 2023.

Related Context:

  1. Singapore Government. “National AI Strategy.” Smart Nation Singapore. Updated 2023.
  2. Ministry of Education Singapore. “EdTech Plan and AI in Education Initiatives.” 2024.

This case study represents an important example of responsible AI deployment in educational settings, demonstrating that governance, transparency, and human oversight can create systems that are both efficient and trustworthy.
