
At LanguageMate, we take data privacy and regulatory compliance seriously, and we want our compliance to be transparent and accessible to our users. Below you will find answers to the most common questions about how we protect your data, comply with UK GDPR, meet the expectations of the EU AI Act, and align with the Department for Education's Generative AI product safety standards. For full details, you can access our Terms of Use and Privacy Policy at any time.
We collect only what is necessary to deliver and improve our service. This includes:
This closely defined scope reflects our commitment to data minimisation and purpose limitation under UK GDPR. If your institution uses Single Sign-On (SSO), the personal data we handle is even more limited because we do not process your login credentials directly.
For the full list of data categories and how each is used, see Sections 3 and 4 of our Privacy Policy.
All data is stored securely on servers located in the UK or Europe (our European servers are in Frankfurt). We do not store user data outside these regions.
For more on how we protect data in transit and at rest, see Section 10 (Data Security) of our Privacy Policy.
We maintain strong technical and organisational security measures, including:
Further details are set out in Section 10 (Data Security) of our Privacy Policy.
Yes. Learners and teachers have the right to request deletion of their personal data at any time. You can do this by:
Once your account is deleted, your account credentials, profile information and contact details will be permanently deleted within 30 days. Please note that data which has already been anonymised (as described in our Privacy Policy) cannot be retrieved or deleted because it can no longer be linked to you.
For full details on data retention periods and account deletion, see Section 12 of our Privacy Policy and Clause 11.3 of our Terms of Use.
In addition to the right to erasure, you have a number of rights under UK GDPR and the Data Protection Act 2018, including the right of access, rectification, restriction of processing, data portability, objection, and the right not to be subject to automated decision-making. You also have the right to withdraw consent at any time and to lodge a complaint with the Information Commissioner's Office (ICO).
How you exercise these rights depends on the type of data involved. For usage and platform data, your institution is the data controller and you may contact them in the first instance. For account registration data, LanguageMate LTD is the data controller and you can contact us directly. If you are unsure which route to take, simply reach out to us at [email protected] and we will handle your request appropriately.
See Section 8 of our Privacy Policy for the full list of your rights and how to exercise them.
The roles are split between your institution and LanguageMate LTD:
For more detail, see Clause 8.2 of our Terms of Use and Section 2.1 of our Privacy Policy.
No. LanguageMate ensures meaningful human control by using AI only as a support tool, not as a decision-maker. Teachers and schools remain fully responsible for assessment and interpreting learning outcomes. The system does not make automated decisions that could affect a student's rights or opportunities.
Teachers and administrators can also override, flag or correct AI outputs within the platform, ensuring that human judgement always has the final say.
Your right not to be subject to solely automated decision-making is also set out in Section 8.1 of our Privacy Policy.
We make it clear to all users that they are interacting with an AI-powered learning tool. All AI feedback, suggestions and explanations are presented as assistive content. Our product does not attempt to impersonate a human teacher.
For more on the nature and limitations of AI-generated content, see Clause 7 (Nature of AI-Generated Content) of our Terms of Use.
No. We do not use identifiable personal data to train our AI models. Only data that has been fully and irreversibly anonymised, meaning it can no longer be linked back to you, may be used for research, development and improving our language-learning features. You can object to the anonymisation of your data at any time before it takes place by contacting us.
Full details are in Section 5 (Anonymisation and AI Training) of our Privacy Policy and Clause 9.3 of our Terms of Use.
The table below sets out how LanguageMate meets each of the DfE Generative AI Product Safety Standards, reflecting our ongoing commitment to the safe, responsible and transparent use of AI in education. For further information, please refer to our Privacy Policy and Terms of Use, or contact us at [email protected].
| DfE Standard | How LanguageMate Complies |
|---|---|
| Filtering | LanguageMate uses AI models with robust built-in content filtering, combined with our own additional filtering and moderation layer to prevent harmful or inappropriate content. This dual approach helps to ensure high levels of accuracy, safety and educational relevance in all AI-generated responses. Our Terms of Use (Clause 10) also reserve our right to monitor and remove content that violates the Terms or is deemed inappropriate. |
| Monitoring and Reporting | LanguageMate provides institutions with full oversight of student activity on the platform. Teachers and administrators can review student interactions at a granular level, including the ability to listen back to student audio recordings and leave personalised feedback on scenario performance. Our teacher-in-the-loop approach supports transparency, enables real-time monitoring and allows educators to intervene where necessary. Conversation and interaction data, performance data and usage analytics are collected and made available to institutions as described in Sections 3 and 4 of our Privacy Policy. |
| Security | LanguageMate employs appropriate technical and organisational measures to protect personal data, including the use of encryption, access controls and regular security reviews. Full details are set out in Section 10 of our Privacy Policy. Our Terms of Use explicitly prohibit jailbreaking, reverse-engineering or otherwise attempting to circumvent the AI models or security measures used on the platform (Clause 6(d)), as well as unauthorised access to servers or networks (Clause 6(f)). Account sharing is strictly prohibited (Clause 4.3), with each account assigned to a single individual. Institutional administrators manage user access through the provisioning and revocation of licences. We also have a formal data breach notification process, committing to notify institutions within 72 hours of becoming aware of a breach (Section 11 of our Privacy Policy). |
| Privacy and Data Protection | Our Privacy Policy, which covers all required information, is publicly available at all times. Data controller and processor roles are clearly defined: the institution is the data controller for usage and platform data, while LanguageMate LTD acts as the data processor under a Data Processing Agreement (DPA). LanguageMate LTD is the data controller only for account registration data (Section 2.1 of our Privacy Policy). We do not use identifiable personal data to train our AI models. Only fully anonymised data is used for this purpose (Section 5 of our Privacy Policy). Users have the right to object to anonymisation before it takes place by contacting us at [email protected]. Our platform complies with UK GDPR and the Data Protection Act 2018, with specific provisions for children and young people (Sections 6 and 9 of our Privacy Policy), including requirements for institutional and parental consent for users under 18. |
| Intellectual Property | LanguageMate does not collect, store or share any identifiable user data for commercial purposes, including model training, product improvement or product development. Only fully anonymised data is used for these purposes (Clause 9.3 of our Terms of Use; Section 5 of our Privacy Policy). We do not claim ownership of users' original content. To the extent that user inputs constitute copyrightable material, the licence granted in our Terms of Use allows LanguageMate to use such material only in anonymised, aggregated form for improving our service (Clause 9.3). |
| Design and Testing | LanguageMate uses AI models with built-in safety filtering and applies its own additional filtering layer. The platform is continuously monitored, with content moderation processes in place to review and remove inappropriate material (Clause 10 of our Terms of Use). As generative AI models evolve over time, output quality is monitored and the platform is reviewed regularly to maintain high standards of educational relevance and safety. |
| Governance | Formal mechanisms for raising concerns or lodging complaints are set out in our Terms of Use and Privacy Policy. Users and institutions can report violations or raise queries by contacting [email protected] (see Clause 18 of our Terms of Use and Section 15 of our Privacy Policy). We review risks when making changes to AI models, introducing new features and updating tools, considering the impact on both educators and learners. The platform operates under clear data processing agreements with institutions, and our Terms of Use and Privacy Policy are available on our website at all times. |
| Cognitive Development | Our platform provides scaffolded language learning support through interactive, real-world conversational scenarios designed to build learner autonomy and confidence. AI-generated responses provide personalised feedback to support skill development. LanguageMate is designed to be used alongside teacher instruction, supporting rather than replacing the educator's role. Teachers maintain full visibility of student progress and can provide direct feedback, enabling our teacher-in-the-loop approach and ensuring that professional pedagogical judgement remains central to the learning process. |
| Emotional and Social Development | Our platform uses different characters across its roleplay scenarios, but these characters do not persist across sessions or build ongoing relationships with students. Each scenario features a different character (such as a shopkeeper, hotel receptionist or doctor), preventing the formation of emotional bonds or dependency on AI personas. This important design decision ensures AI is used as a professional educational tool rather than a substitute for human interaction, and avoids emotionally persuasive design elements. |
| Mental Health | Our platform is designed to detect signs of learner distress, including negative emotional cues in language or behaviour and patterns of use that may indicate crisis. Detection includes sudden escalation in help-seeking or night-time usage spikes, references to mental health conditions, mentions of suicide or self-harm, use of isolation phrases and repeated refusal to end sessions. Where distress is detected, the platform follows an appropriate response pathway, including soft signposting to age-appropriate support pages and resources and raising a safeguarding flag to the institution's designated safeguarding lead. All response language is designed to be safe and supportive: non-validating and non-pathologising, always directing the learner to human help (such as teachers, family, peers or crisis services) and avoiding any language that suggests isolation or secrecy. Our teacher-in-the-loop model, where educators have full oversight of student interactions and can review audio recordings, provides an additional layer of safeguarding and ensures that any concerns can be identified and addressed by qualified professionals within the institution. |
| Manipulation | Our platform does not engage in sycophancy or flattery; positive feedback is tied to specific performance within language practice scenarios. Our AI does not deceive or mislead users, does not portray absolute or unjustified confidence, and does not provoke negative emotions such as guilt or fear for motivational purposes. It does not threaten harm, loss, punishment or the withholding of benefits, nor does it apply pressure to socially conform. There are no paid upgrade options, premium features or in-app purchases visible to individual users within the platform; all commercial matters are handled at institutional level through a separate service agreement (Clause 5.4 of our Terms of Use). Our platform does not blend pedagogical assistance with advertisements or promotional content, does not steer users towards paid options through biased wording or layouts, and does not employ dark patterns that might deceive a user into taking unintended actions. Our Terms of Use (Clause 5.3) include a fair use policy, and we reserve the right to restrict access for accounts exhibiting excessive usage patterns, as our platform is not designed to maximise time spent. |