Translation Notice
This is a courtesy translation. The English version is the legally binding version. In the event of any discrepancy, the English version shall prevail.
AI Policy
Last Updated: August 1, 2025
1. AI System Classification
EduQ AI's tutoring system is classified as a high-risk AI system under the EU AI Act (Annex III, Category 3 — Education and Vocational Training). As an AI system that influences access to educational resources and evaluates learning outcomes for K-12 students, we apply the risk management measures required for high-risk AI systems.
We take this classification seriously. The following sections describe how we manage risk, ensure human oversight, and protect the students who use our platform.
Risk Management Measures
- Continuous monitoring of AI output quality and appropriateness
- Content safety filtering before responses are shown to students
- Parent oversight tools via the Parent Dashboard
- Regular review of AI system performance and bias indicators
- Incident logging and escalation procedures for harmful outputs
2. AI System Purpose & Capabilities
EduQ AI uses large language models (LLMs) to provide supplementary educational tutoring for K-12 students. The AI system is designed to:
- Answer questions — respond to student questions across core subjects (mathematics, science, English, Chinese, history, and more)
- Generate explanations — break down complex concepts into age-appropriate language
- Create quizzes — generate practice questions to reinforce learning
- Provide feedback — offer constructive feedback on student responses and written work
- Support document analysis — help students understand uploaded study materials (processed by AWS Textract; originals deleted within 24 hours)
The AI system is intended as a supplementary learning tool, not a replacement for classroom instruction, qualified teachers, or parental guidance.
3. AI System Limitations
⚠️ Important Limitations to Understand
- May produce inaccurate content — AI-generated responses can contain factual errors, outdated information, or incomplete explanations. Always verify important information with a teacher or trusted source.
- Cannot replace human teachers — The AI does not understand a student's full learning context, emotional state, or individual needs the way a qualified teacher does.
- Not designed for high-stakes assessments — EduQ AI should not be used as the sole preparation tool for formal examinations, standardised tests, or academic assessments with significant consequences.
- Subject to bias — AI models are trained on large datasets that may contain biases. Responses may inadvertently reflect those biases.
- No real-time information — The AI's knowledge has a training cutoff date and does not have access to current news or events.
- Language limitations — While the platform supports English and Chinese, nuanced language, idioms, or regional curriculum variations may not always be handled accurately.
4. Human Oversight Measures
We believe meaningful human oversight is essential for AI systems used by children. EduQ AI provides the following oversight mechanisms:
- Parent Dashboard — Parents can review all AI interactions their child has had on the platform, including full conversation history, at any time.
- Restrict or disable AI features — Parents can restrict specific AI capabilities or disable AI tutoring entirely for their child's account from the Parent Dashboard.
- Safety gate system — All AI responses pass through a content safety filter before being shown to students. Responses flagged as potentially harmful, inappropriate, or off-topic are blocked.
- Reporting mechanism — Students and parents can flag any AI response they find inappropriate or incorrect. Flagged responses are reviewed by our team.
- No autonomous decisions — The AI does not make any autonomous decisions that affect a student's account, access, or standing. All consequential actions require human confirmation.
5. Data Governance for AI
What data IS sent to AI models
- The student's chat messages (the question or prompt they type)
- Subject context (e.g., "mathematics", "science") to improve response relevance
- Grade level (to calibrate language complexity and curriculum alignment)
- Conversation history within the current session (for context continuity)
- Uploaded document content (text extracted by AWS Textract, for document Q&A sessions only)
What data is NOT sent to AI models
- Full name, email address, or any personal identifiers
- Payment data or subscription information
- Data from other students' accounts
- Parent account information
- Device identifiers or IP addresses
- Mastery scores or historical performance data (beyond the current session)
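To illustrate the data-minimisation rules above (a hypothetical sketch, not our actual request format), a request to a model provider would carry only the session fields listed under "what data IS sent" and strip everything else. All field names here are assumptions for illustration:

```python
# Illustrative sketch: build a minimised payload for an AI request.
# Field names are hypothetical; the point is that personal identifiers,
# payment data, and device information never enter the payload.

ALLOWED_FIELDS = {"message", "subject", "grade_level", "session_history"}

def build_ai_payload(session: dict) -> dict:
    """Keep only the allow-listed, non-identifying session fields."""
    return {k: v for k, v in session.items() if k in ALLOWED_FIELDS}

session = {
    "message": "What is a prime number?",
    "subject": "mathematics",
    "grade_level": 5,
    "session_history": [],
    "email": "student@example.com",  # personal identifier: dropped
    "ip_address": "203.0.113.7",     # device/network data: dropped
}
payload = build_ai_payload(session)
```

An allow-list (rather than a block-list) is the safer design choice here: any new field added to the session record is excluded from AI requests by default until it is explicitly reviewed.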
AI Provider Data Processing
AI features are powered by third-party large language model providers (including Anthropic Claude via AWS Bedrock). Data sent to these providers is governed by the data processing agreements we hold with them. Our providers have contractually agreed that:
- Chat messages are not used to train AI models
- Data is processed only to generate the requested response
- Data is not retained beyond the processing window
6. AI Output Monitoring
We actively monitor AI output quality to maintain a safe and effective learning environment:
- Content filtering — Automated filters screen all AI responses for harmful, inappropriate, or off-topic content before delivery to students.
- Quality sampling — A random sample of AI interactions is reviewed periodically to assess accuracy, appropriateness, and curriculum alignment.
- Incident response — Any AI output that causes harm or generates a complaint is investigated and used to improve our safety systems.
- Bias monitoring — We periodically audit AI responses for patterns of bias across subject areas, languages, and student demographics.
7. Automated Decision-Making
EduQ AI does not use automated decision-making that produces legal or similarly significant effects on users. Specifically:
- AI-generated educational recommendations are advisory only — they do not determine grades, academic standing, or access to educational opportunities.
- No AI output is used to make decisions about a student's eligibility for any programme, institution, or benefit.
- Mastery scores and progress tracking are generated algorithmically but are provided as informational tools for students and parents, not as official assessments.
- Account decisions (suspension, termination) are made by human staff, not by the AI system.