AI has turned college exams into a ‘wicked problem’ with no obvious fix, researchers warn
Summary
A new paper from researchers at Deakin University argues that generative AI has transformed university assessment into a “wicked problem” — a messy, interconnected challenge with no single, correct solution. The authors interviewed 20 unit chairs at a large Australian university and found widespread confusion, heavier workloads, and difficult trade-offs as staff try to design assessments that remain authentic and fair while resisting easy outsourcing to AI tools.
The study documents a range of staff responses: some embrace AI as a tool students should learn to use, while others treat it as a form of cheating. Tried-and-tested fixes (handwritten tasks, oral exams, personalised prompts) are labour-intensive, scale poorly, or shift what is actually being assessed. The researchers urge institutions to stop hunting for a silver bullet and instead accept compromise, diversity of approaches, and continuous iteration.
Key Points
- The paper labels AI-disrupted assessment a “wicked problem”: complex, resistant to final solutions, and liable to create new tensions when patched.
- Interviews with 20 unit chairs revealed confusion, burnout, and difficult trade-offs in assessment design and delivery.
- Practical tactics in use include handwritten or in-person baselines, reflective elements, live presentations, and personalised prompts — but each has limits in scale, workload, or fairness.
- Some educators permit AI as a professional tool; others ban it after finding hallucinated citations and indistinguishable prose.
- Oral exams resist AI but are often logistically unrealistic for large cohorts; hybrid approaches can double staff workload.
- The researchers warn against chasing a single ‘right’ fix and recommend giving staff permission to compromise, diverge, and iterate on assessment strategies.
- The debate highlights broader systemic issues: reliance on easy-to-grade tests and homework that AI can now game, pushing institutions toward rethinking what they assess.
Why should I read this?
Short version: if you care about higher education, assessment integrity, or how AI is rewriting what students can and can’t outsource, this is a must-read. The paper doesn’t hand you a neat fix, but it does explain why there won’t be one, and why continual, pragmatic adjustment beats fruitless silver-bullet hunting. Think of it as a reality check and a practical nudge: stop expecting one-size-fits-all answers and start planning for compromise and iteration.