How is AI challenging the way we test knowledge and capacity?
The rise of powerful, widely available AI tools has put pressure on some of the most taken‑for‑granted features of schooling: homework essays, take‑home problem sets, and even timed exams (in person or online). If a chatbot can generate a competent five‑paragraph essay or step‑by‑step solution to a maths problem from a short prompt, then assignments that once served as evidence of individual understanding can become tests of how well students can copy and paste.
In response, institutions have experimented with a mix of approaches: banning certain tools, using handwritten exams, redesigning tasks to focus more on process and reflection, and, in some cases, explicitly allowing AI use under defined conditions. Particularly at university level, exam boards are piloting on‑device secure browsers and proctoring systems which try to block or detect unauthorised AI assistance during tests. At the same time, some educators argue that the presence of AI is an opportunity to rethink assessment altogether, moving away from products which can be easily automated towards demonstrations of reasoning, collaboration, and judgement.
For students and parents, this can feel confusing. There is currently no overarching policy or clearly established set of norms, so rules can differ between subjects, schools, and even individual teachers. A tool that is encouraged in one context (for brainstorming or language support) may be excluded in another (for drafts of graded essays). In this environment, young people may either avoid useful tools out of fear or use them in ways that unintentionally cross ethical lines.
Rethinking What “Showing Your Learning” Looks Like
At the heart of the AI‑assessment debate is a simple question: what, exactly, do we want to see and be assured of when we assess? If the goal is to understand how a student can work with tools to solve complex problems, critique sources, and communicate clearly, then blanket bans become less tenable. If the goal is to measure a student’s capacity for recall and basic explanation under pressure, then strict controls on AI make sense; in these scenarios, handwritten or in‑class tasks, oral exams, and practical demonstrations are more appropriate.
A more nuanced approach is emerging in some places:
- Process‑rich assignments, where students submit planning notes, drafts, and reflections alongside a final product, making it easier to see their own thinking even if AI has been used for support.
- Tasks that foreground evaluation and adaptation, such as critiquing an AI‑generated answer, improving it, or explaining its limitations in context.
- Mixed assessment portfolios, combining timed in‑person work with longer projects that can legitimately involve digital tools, so that no single mode dominates.
For the families and teachers of our students, it can help to talk explicitly about the why behind different rules. Instead of framing policies as arbitrary (“The teacher says no AI”), discuss what each task is trying to measure and whether AI would obscure or support that. For example: “This in‑class essay is about your ability to organise ideas yourself under time pressure; that’s why outside tools are off‑limits. However, for your extended project, using AI to find counter‑arguments might be allowed, as long as you cite and critique them.”
Tutoring providers such as Turtle & elephant can play a bridging role here. For example, we can model and engage with more AI‑aware study habits: using tools for brainstorming, vocabulary expansion, or practice questions in ungraded settings, while also rehearsing the kinds of reasoning and writing that students will need to perform independently in exams. Done well, this can prepare learners not just to “get around” new rules, but to understand and navigate the deeper shift in what academic honesty and intellectual effort mean now that LLMs and chatbots can replicate knowledge: their use should not come at the expense of individual understanding.
Note: the first draft of this article was written by the AI chatbot Claude with the support of Max Capacity. The text was then edited and adapted by Jaye Sergeant of Turtle & elephant, who is responsible for the published version.